Nov 24 19:08:09 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 24 19:08:09 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 19:08:09 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 19:08:09 localhost kernel: BIOS-provided physical RAM map:
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 19:08:09 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 24 19:08:09 localhost kernel: NX (Execute Disable) protection: active
Nov 24 19:08:09 localhost kernel: APIC: Static calls initialized
Nov 24 19:08:09 localhost kernel: SMBIOS 2.8 present.
Nov 24 19:08:09 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 19:08:09 localhost kernel: Hypervisor detected: KVM
Nov 24 19:08:09 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 19:08:09 localhost kernel: kvm-clock: using sched offset of 4304362967 cycles
Nov 24 19:08:09 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 19:08:09 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 24 19:08:09 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 19:08:09 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 19:08:09 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 19:08:09 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 19:08:09 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 19:08:09 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 19:08:09 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 19:08:09 localhost kernel: Using GB pages for direct mapping
Nov 24 19:08:09 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 24 19:08:09 localhost kernel: ACPI: Early table checksum verification disabled
Nov 24 19:08:09 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 19:08:09 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 19:08:09 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 19:08:09 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 19:08:09 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 19:08:09 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 19:08:09 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 19:08:09 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 19:08:09 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 19:08:09 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 19:08:09 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 19:08:09 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 19:08:09 localhost kernel: No NUMA configuration found
Nov 24 19:08:09 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 19:08:09 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 24 19:08:09 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 24 19:08:09 localhost kernel: Zone ranges:
Nov 24 19:08:09 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 19:08:09 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 19:08:09 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 19:08:09 localhost kernel:   Device   empty
Nov 24 19:08:09 localhost kernel: Movable zone start for each node
Nov 24 19:08:09 localhost kernel: Early memory node ranges
Nov 24 19:08:09 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 19:08:09 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 19:08:09 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 19:08:09 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 19:08:09 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 19:08:09 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 19:08:09 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 24 19:08:09 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 19:08:09 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 19:08:09 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 19:08:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 19:08:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 19:08:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 19:08:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 19:08:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 19:08:09 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 19:08:09 localhost kernel: TSC deadline timer available
Nov 24 19:08:09 localhost kernel: CPU topo: Max. logical packages:   8
Nov 24 19:08:09 localhost kernel: CPU topo: Max. logical dies:       8
Nov 24 19:08:09 localhost kernel: CPU topo: Max. dies per package:   1
Nov 24 19:08:09 localhost kernel: CPU topo: Max. threads per core:   1
Nov 24 19:08:09 localhost kernel: CPU topo: Num. cores per package:     1
Nov 24 19:08:09 localhost kernel: CPU topo: Num. threads per package:   1
Nov 24 19:08:09 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 19:08:09 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 19:08:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 19:08:09 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 19:08:09 localhost kernel: Booting paravirtualized kernel on KVM
Nov 24 19:08:09 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 19:08:09 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 19:08:09 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 19:08:09 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 24 19:08:09 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 24 19:08:09 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 19:08:09 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 19:08:09 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 24 19:08:09 localhost kernel: random: crng init done
Nov 24 19:08:09 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 19:08:09 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 19:08:09 localhost kernel: Fallback order for Node 0: 0 
Nov 24 19:08:09 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 19:08:09 localhost kernel: Policy zone: Normal
Nov 24 19:08:09 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 19:08:09 localhost kernel: software IO TLB: area num 8.
Nov 24 19:08:09 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 19:08:09 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 24 19:08:09 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 19:08:09 localhost kernel: Dynamic Preempt: voluntary
Nov 24 19:08:09 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 19:08:09 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 24 19:08:09 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 19:08:09 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 24 19:08:09 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 24 19:08:09 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 24 19:08:09 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 19:08:09 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 19:08:09 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 19:08:09 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 19:08:09 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 19:08:09 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 19:08:09 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 19:08:09 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 19:08:09 localhost kernel: Console: colour VGA+ 80x25
Nov 24 19:08:09 localhost kernel: printk: console [ttyS0] enabled
Nov 24 19:08:09 localhost kernel: ACPI: Core revision 20230331
Nov 24 19:08:09 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 19:08:09 localhost kernel: x2apic enabled
Nov 24 19:08:09 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 19:08:09 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 19:08:09 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 24 19:08:09 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 19:08:09 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 19:08:09 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 19:08:09 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 19:08:09 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 19:08:09 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 19:08:09 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 19:08:09 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 19:08:09 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 19:08:09 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 19:08:09 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 19:08:09 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 19:08:09 localhost kernel: x86/bugs: return thunk changed
Nov 24 19:08:09 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 19:08:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 19:08:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 19:08:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 19:08:09 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 19:08:09 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 19:08:09 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 24 19:08:09 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 24 19:08:09 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 19:08:09 localhost kernel: landlock: Up and running.
Nov 24 19:08:09 localhost kernel: Yama: becoming mindful.
Nov 24 19:08:09 localhost kernel: SELinux:  Initializing.
Nov 24 19:08:09 localhost kernel: LSM support for eBPF active
Nov 24 19:08:09 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 19:08:09 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 19:08:09 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 19:08:09 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 19:08:09 localhost kernel: ... version:                0
Nov 24 19:08:09 localhost kernel: ... bit width:              48
Nov 24 19:08:09 localhost kernel: ... generic registers:      6
Nov 24 19:08:09 localhost kernel: ... value mask:             0000ffffffffffff
Nov 24 19:08:09 localhost kernel: ... max period:             00007fffffffffff
Nov 24 19:08:09 localhost kernel: ... fixed-purpose events:   0
Nov 24 19:08:09 localhost kernel: ... event mask:             000000000000003f
Nov 24 19:08:09 localhost kernel: signal: max sigframe size: 1776
Nov 24 19:08:09 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 24 19:08:09 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 24 19:08:09 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 24 19:08:09 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 24 19:08:09 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 19:08:09 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 19:08:09 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 24 19:08:09 localhost kernel: node 0 deferred pages initialised in 10ms
Nov 24 19:08:09 localhost kernel: Memory: 7765960K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616280K reserved, 0K cma-reserved)
Nov 24 19:08:09 localhost kernel: devtmpfs: initialized
Nov 24 19:08:09 localhost kernel: x86/mm: Memory block size: 128MB
Nov 24 19:08:09 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 19:08:09 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 19:08:09 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 19:08:09 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 19:08:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 19:08:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 19:08:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 19:08:09 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 24 19:08:09 localhost kernel: audit: type=2000 audit(1764011287.010:1): state=initialized audit_enabled=0 res=1
Nov 24 19:08:09 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 19:08:09 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 19:08:09 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 19:08:09 localhost kernel: cpuidle: using governor menu
Nov 24 19:08:09 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 19:08:09 localhost kernel: PCI: Using configuration type 1 for base access
Nov 24 19:08:09 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 24 19:08:09 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 19:08:09 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 19:08:09 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 19:08:09 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 19:08:09 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 19:08:09 localhost kernel: Demotion targets for Node 0: null
Nov 24 19:08:09 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 19:08:09 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 24 19:08:09 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 24 19:08:09 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 19:08:09 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 19:08:09 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 19:08:09 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 19:08:09 localhost kernel: ACPI: Interpreter enabled
Nov 24 19:08:09 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 19:08:09 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 19:08:09 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 19:08:09 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 19:08:09 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 19:08:09 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 19:08:09 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [3] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [4] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [5] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [6] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [7] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [8] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [9] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [10] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [11] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [12] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [13] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [14] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [15] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [16] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [17] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [18] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [19] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [20] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [21] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [22] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [23] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [24] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [25] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [26] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [27] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [28] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [29] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [30] registered
Nov 24 19:08:09 localhost kernel: acpiphp: Slot [31] registered
Nov 24 19:08:09 localhost kernel: PCI host bridge to bus 0000:00
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 19:08:09 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 19:08:09 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 19:08:09 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 19:08:09 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 19:08:09 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 19:08:09 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 19:08:09 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 19:08:09 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 19:08:09 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 19:08:09 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 19:08:09 localhost kernel: iommu: Default domain type: Translated
Nov 24 19:08:09 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 19:08:09 localhost kernel: SCSI subsystem initialized
Nov 24 19:08:09 localhost kernel: ACPI: bus type USB registered
Nov 24 19:08:09 localhost kernel: usbcore: registered new interface driver usbfs
Nov 24 19:08:09 localhost kernel: usbcore: registered new interface driver hub
Nov 24 19:08:09 localhost kernel: usbcore: registered new device driver usb
Nov 24 19:08:09 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 19:08:09 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 19:08:09 localhost kernel: PTP clock support registered
Nov 24 19:08:09 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 24 19:08:09 localhost kernel: NetLabel: Initializing
Nov 24 19:08:09 localhost kernel: NetLabel:  domain hash size = 128
Nov 24 19:08:09 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 19:08:09 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 19:08:09 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 24 19:08:09 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 19:08:09 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 19:08:09 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 19:08:09 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 19:08:09 localhost kernel: vgaarb: loaded
Nov 24 19:08:09 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 19:08:09 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 19:08:09 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 19:08:09 localhost kernel: pnp: PnP ACPI init
Nov 24 19:08:09 localhost kernel: pnp 00:03: [dma 2]
Nov 24 19:08:09 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 24 19:08:09 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 19:08:09 localhost kernel: NET: Registered PF_INET protocol family
Nov 24 19:08:09 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 19:08:09 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 19:08:09 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 19:08:09 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 19:08:09 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 19:08:09 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 19:08:09 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 19:08:09 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 19:08:09 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 19:08:09 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 19:08:09 localhost kernel: NET: Registered PF_XDP protocol family
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 19:08:09 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 19:08:09 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 19:08:09 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 19:08:09 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 98114 usecs
Nov 24 19:08:09 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 24 19:08:09 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 19:08:09 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 24 19:08:09 localhost kernel: ACPI: bus type thunderbolt registered
Nov 24 19:08:09 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 24 19:08:09 localhost kernel: Initialise system trusted keyrings
Nov 24 19:08:09 localhost kernel: Key type blacklist registered
Nov 24 19:08:09 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 19:08:09 localhost kernel: zbud: loaded
Nov 24 19:08:09 localhost kernel: integrity: Platform Keyring initialized
Nov 24 19:08:09 localhost kernel: integrity: Machine keyring initialized
Nov 24 19:08:09 localhost kernel: Freeing initrd memory: 85868K
Nov 24 19:08:09 localhost kernel: NET: Registered PF_ALG protocol family
Nov 24 19:08:09 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 24 19:08:09 localhost kernel: Key type asymmetric registered
Nov 24 19:08:09 localhost kernel: Asymmetric key parser 'x509' registered
Nov 24 19:08:09 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 19:08:09 localhost kernel: io scheduler mq-deadline registered
Nov 24 19:08:09 localhost kernel: io scheduler kyber registered
Nov 24 19:08:09 localhost kernel: io scheduler bfq registered
Nov 24 19:08:09 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 19:08:09 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 19:08:09 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 19:08:09 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 24 19:08:09 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 19:08:09 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 19:08:09 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 19:08:09 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 19:08:09 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 19:08:09 localhost kernel: Non-volatile memory driver v1.3
Nov 24 19:08:09 localhost kernel: rdac: device handler registered
Nov 24 19:08:09 localhost kernel: hp_sw: device handler registered
Nov 24 19:08:09 localhost kernel: emc: device handler registered
Nov 24 19:08:09 localhost kernel: alua: device handler registered
Nov 24 19:08:09 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 19:08:09 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 19:08:09 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 19:08:09 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 19:08:09 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 19:08:09 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 19:08:09 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 24 19:08:09 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 24 19:08:09 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 19:08:09 localhost kernel: hub 1-0:1.0: USB hub found
Nov 24 19:08:09 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 24 19:08:09 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 19:08:09 localhost kernel: usbserial: USB Serial support registered for generic
Nov 24 19:08:09 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 19:08:09 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 19:08:09 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 19:08:09 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 19:08:09 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 19:08:09 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 19:08:09 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 19:08:09 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T19:08:08 UTC (1764011288)
Nov 24 19:08:09 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 19:08:09 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 19:08:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 19:08:09 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 19:08:09 localhost kernel: usbcore: registered new interface driver usbhid
Nov 24 19:08:09 localhost kernel: usbhid: USB HID core driver
Nov 24 19:08:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 19:08:09 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 24 19:08:09 localhost kernel: Initializing XFRM netlink socket
Nov 24 19:08:09 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 24 19:08:09 localhost kernel: Segment Routing with IPv6
Nov 24 19:08:09 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 24 19:08:09 localhost kernel: mpls_gso: MPLS GSO support
Nov 24 19:08:09 localhost kernel: IPI shorthand broadcast: enabled
Nov 24 19:08:09 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 19:08:09 localhost kernel: AES CTR mode by8 optimization enabled
Nov 24 19:08:09 localhost kernel: sched_clock: Marking stable (1257004242, 153046595)->(1528804106, -118753269)
Nov 24 19:08:09 localhost kernel: registered taskstats version 1
Nov 24 19:08:09 localhost kernel: Loading compiled-in X.509 certificates
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 19:08:09 localhost kernel: Demotion targets for Node 0: null
Nov 24 19:08:09 localhost kernel: page_owner is disabled
Nov 24 19:08:09 localhost kernel: Key type .fscrypt registered
Nov 24 19:08:09 localhost kernel: Key type fscrypt-provisioning registered
Nov 24 19:08:09 localhost kernel: Key type big_key registered
Nov 24 19:08:09 localhost kernel: Key type encrypted registered
Nov 24 19:08:09 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 19:08:09 localhost kernel: Loading compiled-in module X.509 certificates
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 19:08:09 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 24 19:08:09 localhost kernel: ima: No architecture policies found
Nov 24 19:08:09 localhost kernel: evm: Initialising EVM extended attributes:
Nov 24 19:08:09 localhost kernel: evm: security.selinux
Nov 24 19:08:09 localhost kernel: evm: security.SMACK64 (disabled)
Nov 24 19:08:09 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 19:08:09 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 19:08:09 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 19:08:09 localhost kernel: evm: security.apparmor (disabled)
Nov 24 19:08:09 localhost kernel: evm: security.ima
Nov 24 19:08:09 localhost kernel: evm: security.capability
Nov 24 19:08:09 localhost kernel: evm: HMAC attrs: 0x1
Nov 24 19:08:09 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 19:08:09 localhost kernel: Running certificate verification RSA selftest
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 19:08:09 localhost kernel: Running certificate verification ECDSA selftest
Nov 24 19:08:09 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 19:08:09 localhost kernel: clk: Disabling unused clocks
Nov 24 19:08:09 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 24 19:08:09 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 24 19:08:09 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 24 19:08:09 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 19:08:09 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 19:08:09 localhost kernel: Run /init as init process
Nov 24 19:08:09 localhost kernel:   with arguments:
Nov 24 19:08:09 localhost kernel:     /init
Nov 24 19:08:09 localhost kernel:   with environment:
Nov 24 19:08:09 localhost kernel:     HOME=/
Nov 24 19:08:09 localhost kernel:     TERM=linux
Nov 24 19:08:09 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 24 19:08:09 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 19:08:09 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 19:08:09 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 19:08:09 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 24 19:08:09 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 19:08:09 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 19:08:09 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 19:08:09 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 19:08:09 localhost systemd[1]: Detected virtualization kvm.
Nov 24 19:08:09 localhost systemd[1]: Detected architecture x86-64.
Nov 24 19:08:09 localhost systemd[1]: Running in initrd.
Nov 24 19:08:09 localhost systemd[1]: No hostname configured, using default hostname.
Nov 24 19:08:09 localhost systemd[1]: Hostname set to <localhost>.
Nov 24 19:08:09 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 24 19:08:09 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 24 19:08:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 19:08:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 19:08:09 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 24 19:08:09 localhost systemd[1]: Reached target Local File Systems.
Nov 24 19:08:09 localhost systemd[1]: Reached target Path Units.
Nov 24 19:08:09 localhost systemd[1]: Reached target Slice Units.
Nov 24 19:08:09 localhost systemd[1]: Reached target Swaps.
Nov 24 19:08:09 localhost systemd[1]: Reached target Timer Units.
Nov 24 19:08:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 19:08:09 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 24 19:08:09 localhost systemd[1]: Listening on Journal Socket.
Nov 24 19:08:09 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 19:08:09 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 19:08:09 localhost systemd[1]: Reached target Socket Units.
Nov 24 19:08:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 19:08:09 localhost systemd[1]: Starting Journal Service...
Nov 24 19:08:09 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 19:08:09 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 19:08:09 localhost systemd[1]: Starting Create System Users...
Nov 24 19:08:09 localhost systemd[1]: Starting Setup Virtual Console...
Nov 24 19:08:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 19:08:09 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 19:08:09 localhost systemd[1]: Finished Create System Users.
Nov 24 19:08:09 localhost systemd-journald[304]: Journal started
Nov 24 19:08:09 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/e19f0d46fa864b57a68a08490f1ee667) is 8.0M, max 153.6M, 145.6M free.
Nov 24 19:08:09 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 24 19:08:09 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 24 19:08:09 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 19:08:09 localhost systemd[1]: Started Journal Service.
Nov 24 19:08:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 19:08:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 19:08:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 19:08:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 19:08:09 localhost systemd[1]: Finished Setup Virtual Console.
Nov 24 19:08:09 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 19:08:09 localhost systemd[1]: Starting dracut cmdline hook...
Nov 24 19:08:09 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 19:08:09 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 19:08:09 localhost systemd[1]: Finished dracut cmdline hook.
Nov 24 19:08:09 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 24 19:08:09 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 19:08:09 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 24 19:08:09 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 19:08:09 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 24 19:08:09 localhost kernel: RPC: Registered udp transport module.
Nov 24 19:08:09 localhost kernel: RPC: Registered tcp transport module.
Nov 24 19:08:09 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 19:08:09 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 19:08:09 localhost rpc.statd[440]: Version 2.5.4 starting
Nov 24 19:08:09 localhost rpc.statd[440]: Initializing NSM state
Nov 24 19:08:10 localhost rpc.idmapd[445]: Setting log level to 0
Nov 24 19:08:10 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 24 19:08:10 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 19:08:10 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 19:08:10 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 19:08:10 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 24 19:08:10 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 24 19:08:10 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 19:08:10 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 24 19:08:10 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 19:08:10 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 19:08:10 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 19:08:10 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 19:08:10 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 24 19:08:10 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 19:08:10 localhost systemd[1]: Reached target Network.
Nov 24 19:08:10 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 19:08:10 localhost systemd[1]: Starting dracut initqueue hook...
Nov 24 19:08:10 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 24 19:08:10 localhost systemd[1]: Reached target System Initialization.
Nov 24 19:08:10 localhost systemd[1]: Reached target Basic System.
Nov 24 19:08:10 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 19:08:10 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 19:08:10 localhost kernel:  vda: vda1
Nov 24 19:08:10 localhost systemd-udevd[459]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:08:10 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 19:08:10 localhost kernel: libata version 3.00 loaded.
Nov 24 19:08:10 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 24 19:08:10 localhost kernel: scsi host0: ata_piix
Nov 24 19:08:10 localhost kernel: scsi host1: ata_piix
Nov 24 19:08:10 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 19:08:10 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 19:08:10 localhost systemd[1]: Reached target Initrd Root Device.
Nov 24 19:08:10 localhost kernel: ata1: found unknown device (class 0)
Nov 24 19:08:10 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 19:08:10 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 19:08:10 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 19:08:10 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 19:08:10 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 19:08:10 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 24 19:08:10 localhost systemd[1]: Finished dracut initqueue hook.
Nov 24 19:08:10 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 19:08:10 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 19:08:10 localhost systemd[1]: Reached target Remote File Systems.
Nov 24 19:08:10 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 24 19:08:10 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 24 19:08:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 19:08:10 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 19:08:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 19:08:10 localhost systemd[1]: Mounting /sysroot...
Nov 24 19:08:11 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 19:08:11 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 19:08:11 localhost kernel: XFS (vda1): Ending clean mount
Nov 24 19:08:11 localhost systemd[1]: Mounted /sysroot.
Nov 24 19:08:11 localhost systemd[1]: Reached target Initrd Root File System.
Nov 24 19:08:11 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 19:08:11 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 19:08:11 localhost systemd[1]: Reached target Initrd File Systems.
Nov 24 19:08:11 localhost systemd[1]: Reached target Initrd Default Target.
Nov 24 19:08:11 localhost systemd[1]: Starting dracut mount hook...
Nov 24 19:08:11 localhost systemd[1]: Finished dracut mount hook.
Nov 24 19:08:11 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 19:08:11 localhost rpc.idmapd[445]: exiting on signal 15
Nov 24 19:08:11 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 19:08:11 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 19:08:11 localhost systemd[1]: Stopped target Network.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Timer Units.
Nov 24 19:08:11 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 19:08:11 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Basic System.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Path Units.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Remote File Systems.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Slice Units.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Socket Units.
Nov 24 19:08:11 localhost systemd[1]: Stopped target System Initialization.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Local File Systems.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Swaps.
Nov 24 19:08:11 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut mount hook.
Nov 24 19:08:11 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 24 19:08:11 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 19:08:11 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 19:08:11 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 24 19:08:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 24 19:08:11 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 19:08:11 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 19:08:11 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 19:08:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 19:08:11 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 24 19:08:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 19:08:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 19:08:11 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Closed udev Control Socket.
Nov 24 19:08:11 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Closed udev Kernel Socket.
Nov 24 19:08:11 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 24 19:08:11 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 24 19:08:11 localhost systemd[1]: Starting Cleanup udev Database...
Nov 24 19:08:11 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 19:08:11 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 19:08:11 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Stopped Create System Users.
Nov 24 19:08:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 19:08:11 localhost systemd[1]: Finished Cleanup udev Database.
Nov 24 19:08:11 localhost systemd[1]: Reached target Switch Root.
Nov 24 19:08:11 localhost systemd[1]: Starting Switch Root...
Nov 24 19:08:11 localhost systemd[1]: Switching root.
Nov 24 19:08:11 localhost systemd-journald[304]: Journal stopped
Nov 24 19:08:12 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Nov 24 19:08:12 localhost kernel: audit: type=1404 audit(1764011291.870:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability open_perms=1
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:08:12 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:08:12 localhost kernel: audit: type=1403 audit(1764011292.043:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 19:08:12 localhost systemd[1]: Successfully loaded SELinux policy in 176.996ms.
Nov 24 19:08:12 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.898ms.
Nov 24 19:08:12 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 19:08:12 localhost systemd[1]: Detected virtualization kvm.
Nov 24 19:08:12 localhost systemd[1]: Detected architecture x86-64.
Nov 24 19:08:12 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:08:12 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Stopped Switch Root.
Nov 24 19:08:12 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 19:08:12 localhost systemd[1]: Created slice Slice /system/getty.
Nov 24 19:08:12 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 24 19:08:12 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 24 19:08:12 localhost systemd[1]: Created slice User and Session Slice.
Nov 24 19:08:12 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 19:08:12 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 24 19:08:12 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 19:08:12 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 19:08:12 localhost systemd[1]: Stopped target Switch Root.
Nov 24 19:08:12 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 24 19:08:12 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 24 19:08:12 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 24 19:08:12 localhost systemd[1]: Reached target Path Units.
Nov 24 19:08:12 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 24 19:08:12 localhost systemd[1]: Reached target Slice Units.
Nov 24 19:08:12 localhost systemd[1]: Reached target Swaps.
Nov 24 19:08:12 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 24 19:08:12 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 24 19:08:12 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 24 19:08:12 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 24 19:08:12 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 24 19:08:12 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 19:08:12 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 19:08:12 localhost systemd[1]: Mounting Huge Pages File System...
Nov 24 19:08:12 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 24 19:08:12 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 24 19:08:12 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 24 19:08:12 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 19:08:12 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 19:08:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 19:08:12 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 24 19:08:12 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 24 19:08:12 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 24 19:08:12 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 19:08:12 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 24 19:08:12 localhost systemd[1]: Stopped Journal Service.
Nov 24 19:08:12 localhost kernel: fuse: init (API version 7.37)
Nov 24 19:08:12 localhost systemd[1]: Starting Journal Service...
Nov 24 19:08:12 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 19:08:12 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 24 19:08:12 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 19:08:12 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 24 19:08:12 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 19:08:12 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 19:08:12 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 19:08:12 localhost systemd-journald[678]: Journal started
Nov 24 19:08:12 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 19:08:12 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 24 19:08:12 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 19:08:12 localhost kernel: ACPI: bus type drm_connector registered
Nov 24 19:08:12 localhost systemd[1]: Started Journal Service.
Nov 24 19:08:12 localhost systemd[1]: Mounted Huge Pages File System.
Nov 24 19:08:12 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 24 19:08:12 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 24 19:08:12 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 24 19:08:12 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 19:08:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 19:08:12 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 24 19:08:12 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 19:08:12 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 19:08:12 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 24 19:08:12 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 19:08:12 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 19:08:12 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 19:08:12 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 19:08:12 localhost systemd[1]: Mounting FUSE Control File System...
Nov 24 19:08:12 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 19:08:12 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 24 19:08:12 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 19:08:12 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 19:08:12 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 19:08:12 localhost systemd[1]: Starting Create System Users...
Nov 24 19:08:12 localhost systemd[1]: Mounted FUSE Control File System.
Nov 24 19:08:12 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 19:08:12 localhost systemd-journald[678]: Received client request to flush runtime journal.
Nov 24 19:08:12 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 19:08:12 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 19:08:12 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 19:08:12 localhost systemd[1]: Finished Create System Users.
Nov 24 19:08:12 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 19:08:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 19:08:12 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 19:08:12 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 19:08:12 localhost systemd[1]: Reached target Local File Systems.
Nov 24 19:08:12 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 19:08:12 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 19:08:12 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 19:08:12 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 19:08:12 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 19:08:12 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 19:08:12 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 19:08:12 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 24 19:08:12 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 19:08:13 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 19:08:13 localhost systemd[1]: Starting Security Auditing Service...
Nov 24 19:08:13 localhost systemd[1]: Starting RPC Bind...
Nov 24 19:08:13 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 19:08:13 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 19:08:13 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 19:08:13 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 19:08:13 localhost systemd[1]: Started RPC Bind.
Nov 24 19:08:13 localhost augenrules[708]: /sbin/augenrules: No change
Nov 24 19:08:13 localhost augenrules[723]: No rules
Nov 24 19:08:13 localhost augenrules[723]: enabled 1
Nov 24 19:08:13 localhost augenrules[723]: failure 1
Nov 24 19:08:13 localhost augenrules[723]: pid 703
Nov 24 19:08:13 localhost augenrules[723]: rate_limit 0
Nov 24 19:08:13 localhost augenrules[723]: backlog_limit 8192
Nov 24 19:08:13 localhost augenrules[723]: lost 0
Nov 24 19:08:13 localhost augenrules[723]: backlog 2
Nov 24 19:08:13 localhost augenrules[723]: backlog_wait_time 60000
Nov 24 19:08:13 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 24 19:08:13 localhost augenrules[723]: enabled 1
Nov 24 19:08:13 localhost augenrules[723]: failure 1
Nov 24 19:08:13 localhost augenrules[723]: pid 703
Nov 24 19:08:13 localhost augenrules[723]: rate_limit 0
Nov 24 19:08:13 localhost augenrules[723]: backlog_limit 8192
Nov 24 19:08:13 localhost augenrules[723]: lost 0
Nov 24 19:08:13 localhost augenrules[723]: backlog 4
Nov 24 19:08:13 localhost augenrules[723]: backlog_wait_time 60000
Nov 24 19:08:13 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 24 19:08:13 localhost augenrules[723]: enabled 1
Nov 24 19:08:13 localhost augenrules[723]: failure 1
Nov 24 19:08:13 localhost augenrules[723]: pid 703
Nov 24 19:08:13 localhost augenrules[723]: rate_limit 0
Nov 24 19:08:13 localhost augenrules[723]: backlog_limit 8192
Nov 24 19:08:13 localhost augenrules[723]: lost 0
Nov 24 19:08:13 localhost augenrules[723]: backlog 4
Nov 24 19:08:13 localhost augenrules[723]: backlog_wait_time 60000
Nov 24 19:08:13 localhost augenrules[723]: backlog_wait_time_actual 0
Nov 24 19:08:13 localhost systemd[1]: Started Security Auditing Service.
Nov 24 19:08:13 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 19:08:13 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 19:08:13 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 24 19:08:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 19:08:13 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 19:08:13 localhost systemd[1]: Starting Update is Completed...
Nov 24 19:08:13 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 19:08:13 localhost systemd[1]: Finished Update is Completed.
Nov 24 19:08:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 19:08:13 localhost systemd[1]: Reached target System Initialization.
Nov 24 19:08:13 localhost systemd[1]: Started dnf makecache --timer.
Nov 24 19:08:13 localhost systemd[1]: Started Daily rotation of log files.
Nov 24 19:08:13 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 19:08:13 localhost systemd[1]: Reached target Timer Units.
Nov 24 19:08:13 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 19:08:13 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 19:08:13 localhost systemd[1]: Reached target Socket Units.
Nov 24 19:08:13 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 24 19:08:13 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 19:08:13 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 19:08:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 19:08:13 localhost systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:08:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 19:08:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 19:08:13 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 24 19:08:13 localhost systemd[1]: Reached target Basic System.
Nov 24 19:08:13 localhost dbus-broker-lau[764]: Ready
Nov 24 19:08:13 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 19:08:13 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 19:08:13 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 19:08:13 localhost systemd[1]: Starting NTP client/server...
Nov 24 19:08:13 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 19:08:13 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 19:08:13 localhost chronyd[782]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 19:08:13 localhost chronyd[782]: Loaded 0 symmetric keys
Nov 24 19:08:13 localhost chronyd[782]: Using right/UTC timezone to obtain leap second data
Nov 24 19:08:13 localhost chronyd[782]: Loaded seccomp filter (level 2)
Nov 24 19:08:13 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 19:08:13 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 19:08:13 localhost systemd[1]: Started irqbalance daemon.
Nov 24 19:08:13 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 19:08:13 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 19:08:13 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 19:08:13 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 19:08:13 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 24 19:08:13 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 19:08:13 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 24 19:08:13 localhost systemd[1]: Starting User Login Management...
Nov 24 19:08:13 localhost systemd[1]: Started NTP client/server.
Nov 24 19:08:13 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 19:08:13 localhost kernel: kvm_amd: TSC scaling supported
Nov 24 19:08:13 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 24 19:08:13 localhost kernel: kvm_amd: Nested Paging enabled
Nov 24 19:08:13 localhost kernel: kvm_amd: LBR virtualization supported
Nov 24 19:08:13 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 19:08:13 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 19:08:13 localhost kernel: Console: switching to colour dummy device 80x25
Nov 24 19:08:13 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 19:08:13 localhost kernel: [drm] features: -context_init
Nov 24 19:08:13 localhost kernel: [drm] number of scanouts: 1
Nov 24 19:08:13 localhost kernel: [drm] number of cap sets: 0
Nov 24 19:08:13 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 19:08:13 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 19:08:13 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 24 19:08:13 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 19:08:13 localhost systemd-logind[795]: New seat seat0.
Nov 24 19:08:13 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 19:08:13 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 19:08:13 localhost systemd[1]: Started User Login Management.
Nov 24 19:08:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 19:08:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 19:08:13 localhost iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Nov 24 19:08:13 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 19:08:14 localhost cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 19:08:14 +0000. Up 7.15 seconds.
Nov 24 19:08:14 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 24 19:08:14 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 24 19:08:14 localhost systemd[1]: run-cloud\x2dinit-tmp-tmptn57k9g6.mount: Deactivated successfully.
Nov 24 19:08:14 localhost systemd[1]: Starting Hostname Service...
Nov 24 19:08:14 localhost systemd[1]: Started Hostname Service.
Nov 24 19:08:14 np0005534003.novalocal systemd-hostnamed[853]: Hostname set to <np0005534003.novalocal> (static)
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Reached target Preparation for Network.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Starting Network Manager...
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.1603] NetworkManager (version 1.54.1-1.el9) is starting... (boot:b3da1bfc-5c9f-4e84-9159-06370a5e0bee)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.1608] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.1807] manager[0x5579a4bb2080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.1871] hostname: hostname: using hostnamed
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.1871] hostname: static hostname changed from (none) to "np0005534003.novalocal"
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.1876] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2016] manager[0x5579a4bb2080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2018] manager[0x5579a4bb2080]: rfkill: WWAN hardware radio set enabled
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2114] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2114] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2115] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2115] manager: Networking is enabled by state file
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2118] settings: Loaded settings plugin: keyfile (internal)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2150] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2174] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2207] dhcp: init: Using DHCP client 'internal'
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2209] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2223] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2234] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2242] device (lo): Activation: starting connection 'lo' (41cae1ef-0d4d-447a-80d8-eb6262a5c804)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2251] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2254] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2308] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2311] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2313] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2315] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2317] device (eth0): carrier: link connected
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2319] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2325] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2331] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2334] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2335] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2337] manager: NetworkManager state is now CONNECTING
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2338] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2343] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2346] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2389] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2396] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2415] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Started Network Manager.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Reached target Network.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2714] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2717] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2718] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2725] device (lo): Activation: successful, device activated.
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2731] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2735] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2738] device (eth0): Activation: successful, device activated.
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2744] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 19:08:15 np0005534003.novalocal NetworkManager[857]: <info>  [1764011295.2747] manager: startup complete
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Reached target NFS client services.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Reached target Remote File Systems.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 19:08:15 np0005534003.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 19:08:15 +0000. Up 8.34 seconds.
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.22         | 255.255.255.0 | global | fa:16:3e:47:c8:68 |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe47:c868/64 |       .       |  link  | fa:16:3e:47:c8:68 |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 19:08:15 np0005534003.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 19:08:16 np0005534003.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Nov 24 19:08:16 np0005534003.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 24 19:08:16 np0005534003.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Nov 24 19:08:16 np0005534003.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Nov 24 19:08:16 np0005534003.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Nov 24 19:08:16 np0005534003.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Generating public/private rsa key pair.
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: The key fingerprint is:
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: SHA256:IJiBlg2fY50VW8nZL/0FlzyGYjPh0smRyM+i5bZCiwo root@np0005534003.novalocal
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: The key's randomart image is:
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: +---[RSA 3072]----+
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: | o=    o+.=oo o .|
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |.o.=o o o*+Bo..=.|
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |. o=.o.. .+=* .o.|
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |  . .. . o.+ o  .|
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |        S . . . .|
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |       o o     . |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: | E    o o .      |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |  .  . o .       |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |   ..   .        |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: The key fingerprint is:
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: SHA256:wbaxRECPGt4rMftCIA+JAJLUkA1zddYgunkGNCgnsvQ root@np0005534003.novalocal
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: The key's randomart image is:
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: +---[ECDSA 256]---+
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |=*B+.+o=+        |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |B+=oo += .       |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |*=.o. . B        |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |* .E++ o =       |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: | + +=o. S        |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |  . ++ .         |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |   .o .          |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |    .o           |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |     ..          |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: The key fingerprint is:
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: SHA256:q1OVLmXzIzL+gLEb5E81qcFbs85qkefDZkeZL7NlDSE root@np0005534003.novalocal
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: The key's randomart image is:
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: +--[ED25519 256]--+
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |                 |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |                 |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |           .E .  |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |       .  *. . . |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |      o S**o o.  |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |     o +BB++*  o |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |      =o*O.o oo .|
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |      .*o+* +o.  |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: |      ooo=+o.+   |
Nov 24 19:08:16 np0005534003.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Reached target Network is Online.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting System Logging Service...
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 24 19:08:17 np0005534003.novalocal sm-notify[1002]: Version 2.5.4 starting
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting Permit User Sessions...
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Finished Permit User Sessions.
Nov 24 19:08:17 np0005534003.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Nov 24 19:08:17 np0005534003.novalocal sshd[1004]: Server listening on :: port 22.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Started Command Scheduler.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Started Getty on tty1.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Reached target Login Prompts.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 24 19:08:17 np0005534003.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Nov 24 19:08:17 np0005534003.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 19:08:17 np0005534003.novalocal crond[1007]: (CRON) STARTUP (1.5.7)
Nov 24 19:08:17 np0005534003.novalocal crond[1007]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Started System Logging Service.
Nov 24 19:08:17 np0005534003.novalocal crond[1007]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 63% if used.)
Nov 24 19:08:17 np0005534003.novalocal crond[1007]: (CRON) INFO (running with inotify support)
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Reached target Multi-User System.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 19:08:17 np0005534003.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 19:08:17 np0005534003.novalocal kdumpctl[1010]: kdump: No kdump initial ramdisk found.
Nov 24 19:08:17 np0005534003.novalocal kdumpctl[1010]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1097]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 19:08:17 +0000. Up 10.08 seconds.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 19:08:17 np0005534003.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 19:08:17 np0005534003.novalocal dracut[1265]: dracut-057-102.git20250818.el9
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1266]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 19:08:17 +0000. Up 10.55 seconds.
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1283]: #############################################################
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1284]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1286]: 256 SHA256:wbaxRECPGt4rMftCIA+JAJLUkA1zddYgunkGNCgnsvQ root@np0005534003.novalocal (ECDSA)
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1288]: 256 SHA256:q1OVLmXzIzL+gLEb5E81qcFbs85qkefDZkeZL7NlDSE root@np0005534003.novalocal (ED25519)
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1290]: 3072 SHA256:IJiBlg2fY50VW8nZL/0FlzyGYjPh0smRyM+i5bZCiwo root@np0005534003.novalocal (RSA)
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1291]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 19:08:17 np0005534003.novalocal cloud-init[1292]: #############################################################
Nov 24 19:08:18 np0005534003.novalocal cloud-init[1266]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 19:08:18 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.73 seconds
Nov 24 19:08:18 np0005534003.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 19:08:18 np0005534003.novalocal systemd[1]: Reached target Cloud-init target.
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 24 19:08:18 np0005534003.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1474]: Connection reset by 38.102.83.114 port 40230 [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1497]: Unable to negotiate with 38.102.83.114 port 40246: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1506]: Connection reset by 38.102.83.114 port 40262 [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1514]: Unable to negotiate with 38.102.83.114 port 40278: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1523]: Unable to negotiate with 38.102.83.114 port 40290: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1534]: Connection reset by 38.102.83.114 port 40300 [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1555]: Unable to negotiate with 38.102.83.114 port 40310: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1564]: Unable to negotiate with 38.102.83.114 port 40316: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 24 19:08:19 np0005534003.novalocal sshd-session[1544]: Connection closed by 38.102.83.114 port 40308 [preauth]
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: memstrack is not available
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: memstrack is not available
Nov 24 19:08:19 np0005534003.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 19:08:20 np0005534003.novalocal dracut[1268]: *** Including module: systemd ***
Nov 24 19:08:20 np0005534003.novalocal chronyd[782]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Nov 24 19:08:20 np0005534003.novalocal chronyd[782]: System clock TAI offset set to 37 seconds
Nov 24 19:08:20 np0005534003.novalocal dracut[1268]: *** Including module: fips ***
Nov 24 19:08:20 np0005534003.novalocal dracut[1268]: *** Including module: systemd-initrd ***
Nov 24 19:08:20 np0005534003.novalocal dracut[1268]: *** Including module: i18n ***
Nov 24 19:08:20 np0005534003.novalocal dracut[1268]: *** Including module: drm ***
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]: *** Including module: prefixdevname ***
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]: *** Including module: kernel-modules ***
Nov 24 19:08:21 np0005534003.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]: *** Including module: kernel-modules-extra ***
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 24 19:08:21 np0005534003.novalocal dracut[1268]: *** Including module: qemu ***
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: *** Including module: fstab-sys ***
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: *** Including module: rootfs-block ***
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: *** Including module: terminfo ***
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: *** Including module: udev-rules ***
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: Skipping udev rule: 91-permissions.rules
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: *** Including module: virtiofs ***
Nov 24 19:08:22 np0005534003.novalocal dracut[1268]: *** Including module: dracut-systemd ***
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]: *** Including module: usrmount ***
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]: *** Including module: base ***
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]: *** Including module: fs-lib ***
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]: *** Including module: kdumpbase ***
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:   microcode_ctl module: mangling fw_dir
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel" is ignored
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 19:08:23 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: IRQ 25 affinity is now unmanaged
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: IRQ 31 affinity is now unmanaged
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: IRQ 28 affinity is now unmanaged
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: IRQ 32 affinity is now unmanaged
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: IRQ 30 affinity is now unmanaged
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 19:08:24 np0005534003.novalocal irqbalance[789]: IRQ 29 affinity is now unmanaged
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]: *** Including module: openssl ***
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]: *** Including module: shutdown ***
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]: *** Including module: squash ***
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]: *** Including modules done ***
Nov 24 19:08:24 np0005534003.novalocal dracut[1268]: *** Installing kernel module dependencies ***
Nov 24 19:08:25 np0005534003.novalocal dracut[1268]: *** Installing kernel module dependencies done ***
Nov 24 19:08:25 np0005534003.novalocal dracut[1268]: *** Resolving executable dependencies ***
Nov 24 19:08:25 np0005534003.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 19:08:27 np0005534003.novalocal dracut[1268]: *** Resolving executable dependencies done ***
Nov 24 19:08:27 np0005534003.novalocal dracut[1268]: *** Generating early-microcode cpio image ***
Nov 24 19:08:27 np0005534003.novalocal dracut[1268]: *** Store current command line parameters ***
Nov 24 19:08:27 np0005534003.novalocal dracut[1268]: Stored kernel commandline:
Nov 24 19:08:27 np0005534003.novalocal dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Nov 24 19:08:27 np0005534003.novalocal dracut[1268]: *** Install squash loader ***
Nov 24 19:08:28 np0005534003.novalocal dracut[1268]: *** Squashing the files inside the initramfs ***
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: *** Squashing the files inside the initramfs done ***
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: *** Hardlinking files ***
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Mode:           real
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Files:          50
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Linked:         0 files
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Compared:       0 xattrs
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Compared:       0 files
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Saved:          0 B
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: Duration:       0.000912 seconds
Nov 24 19:08:29 np0005534003.novalocal dracut[1268]: *** Hardlinking files done ***
Nov 24 19:08:30 np0005534003.novalocal sshd-session[2190]: Invalid user alex from 14.63.196.175 port 42694
Nov 24 19:08:30 np0005534003.novalocal dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 24 19:08:30 np0005534003.novalocal sshd-session[2190]: Received disconnect from 14.63.196.175 port 42694:11: Bye Bye [preauth]
Nov 24 19:08:30 np0005534003.novalocal sshd-session[2190]: Disconnected from invalid user alex 14.63.196.175 port 42694 [preauth]
Nov 24 19:08:30 np0005534003.novalocal kdumpctl[1010]: kdump: kexec: loaded kdump kernel
Nov 24 19:08:30 np0005534003.novalocal kdumpctl[1010]: kdump: Starting kdump: [OK]
Nov 24 19:08:30 np0005534003.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 24 19:08:30 np0005534003.novalocal systemd[1]: Startup finished in 1.665s (kernel) + 2.908s (initrd) + 18.837s (userspace) = 23.411s.
Nov 24 19:08:45 np0005534003.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 19:10:23 np0005534003.novalocal sshd-session[4297]: Accepted publickey for zuul from 38.102.83.114 port 46230 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 24 19:10:23 np0005534003.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 24 19:10:23 np0005534003.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 19:10:23 np0005534003.novalocal systemd-logind[795]: New session 1 of user zuul.
Nov 24 19:10:23 np0005534003.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 19:10:23 np0005534003.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Queued start job for default target Main User Target.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Created slice User Application Slice.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Reached target Paths.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Reached target Timers.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Starting D-Bus User Message Bus Socket...
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Starting Create User's Volatile Files and Directories...
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Finished Create User's Volatile Files and Directories.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Listening on D-Bus User Message Bus Socket.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Reached target Sockets.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Reached target Basic System.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Reached target Main User Target.
Nov 24 19:10:23 np0005534003.novalocal systemd[4301]: Startup finished in 148ms.
Nov 24 19:10:23 np0005534003.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 24 19:10:23 np0005534003.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 24 19:10:23 np0005534003.novalocal sshd-session[4297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:10:23 np0005534003.novalocal python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:10:27 np0005534003.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:10:33 np0005534003.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:10:34 np0005534003.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 19:10:36 np0005534003.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDihXW/RZBSAWLuxQ8s85wbzsubB91/H4Bkn6A+rp7NmVfPBiK5fq67BmRboVqsKzQTDxh6nVuV/MDfGQQ+X2d32ZQDYrDgxZMfYbwfIsHvRMHw4xqSFs152GLXWNppMl95SdspnF8IXt2MocSK1f3Cp6d+4udPilsB/rruIMr/afDonbWdLbGzda48x26RECd+q+qjMtbEdc28gFRUx1lc6eL85iI55aq9S4t47kIVa166kqQ1szsj4MZZqfNbEsn+eatcCjyQoykHWzUUz13lpVA04sWeCaWJxqBYQvane0D6cUfFrnlmYq+bBQ6PI03tC+vP2B9C0SdrhxP7gHBo1s7fwGq8R+w7w/7zuRUCbPz/DwE+9l+b14ndRgCOkZDypve0v/UPCncFw7L827HnNFoOf/JBgJTCRGRGfr+mBkRn/xTs/I7sXBcmzbnrDrDeNJZsSK1tyMPSLMAdBsXGPkK4uFU1wIgfNc1hpMJEb5b6BPqfHXyW5m5fVsLFYV8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:36 np0005534003.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:37 np0005534003.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:10:37 np0005534003.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764011437.1145155-207-1394191957323/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=acd30624e9c64bfca3f3045a1e5a8bb8_id_rsa follow=False checksum=4bdadc27dafd1ca3d05767cc2d5f938993160926 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:38 np0005534003.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:10:38 np0005534003.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764011438.0693219-240-237466166882323/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=acd30624e9c64bfca3f3045a1e5a8bb8_id_rsa.pub follow=False checksum=cbfb62c9bedebdce45685c7b28da406b1bd35ce4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:40 np0005534003.novalocal python3[4973]: ansible-ping Invoked with data=pong
Nov 24 19:10:41 np0005534003.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:10:42 np0005534003.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 19:10:43 np0005534003.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:44 np0005534003.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:44 np0005534003.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:44 np0005534003.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:44 np0005534003.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:45 np0005534003.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:46 np0005534003.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtsjesseclqlratsxqmdefhtzpmyzykx ; /usr/bin/python3'
Nov 24 19:10:46 np0005534003.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:10:46 np0005534003.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:46 np0005534003.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Nov 24 19:10:47 np0005534003.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npdrqunwrcegwwqyjsnrytlpoyvnvyok ; /usr/bin/python3'
Nov 24 19:10:47 np0005534003.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:10:53 np0005534003.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:10:53 np0005534003.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Nov 24 19:10:53 np0005534003.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjtghkwzjyyznqwpmamkjxnfuhkkfum ; /usr/bin/python3'
Nov 24 19:10:53 np0005534003.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:10:53 np0005534003.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764011447.0710707-21-16229250393317/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:10:53 np0005534003.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Nov 24 19:10:54 np0005534003.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:54 np0005534003.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:54 np0005534003.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:54 np0005534003.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:55 np0005534003.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:55 np0005534003.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:55 np0005534003.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:56 np0005534003.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:56 np0005534003.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:56 np0005534003.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:56 np0005534003.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:57 np0005534003.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:57 np0005534003.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:57 np0005534003.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:58 np0005534003.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:58 np0005534003.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:58 np0005534003.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:59 np0005534003.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:59 np0005534003.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:10:59 np0005534003.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:00 np0005534003.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:00 np0005534003.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:00 np0005534003.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:00 np0005534003.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:01 np0005534003.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:01 np0005534003.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:11:03 np0005534003.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfbrvswavhwjfmwgoipcaweuvyektuer ; /usr/bin/python3'
Nov 24 19:11:03 np0005534003.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:04 np0005534003.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 19:11:04 np0005534003.novalocal systemd[1]: Starting Time & Date Service...
Nov 24 19:11:04 np0005534003.novalocal systemd[1]: Started Time & Date Service.
Nov 24 19:11:04 np0005534003.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 24 19:11:04 np0005534003.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:04 np0005534003.novalocal irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 19:11:04 np0005534003.novalocal irqbalance[789]: IRQ 27 affinity is now unmanaged
Nov 24 19:11:04 np0005534003.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvtqhzjrgqgjitzdqxjpbxaymovfjgql ; /usr/bin/python3'
Nov 24 19:11:04 np0005534003.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:04 np0005534003.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:04 np0005534003.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:05 np0005534003.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:11:05 np0005534003.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764011464.7332726-153-69871783868254/source _original_basename=tmp62pok9pf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:05 np0005534003.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:11:06 np0005534003.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764011465.5831664-183-132345996434370/source _original_basename=tmpq4zzxhdg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:06 np0005534003.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcrqpmwmqmzkybbxreoxmkwfxennjbam ; /usr/bin/python3'
Nov 24 19:11:06 np0005534003.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:07 np0005534003.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:11:07 np0005534003.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:07 np0005534003.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqefyjznbxbkcczrczdnovkxusbsgzce ; /usr/bin/python3'
Nov 24 19:11:07 np0005534003.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:07 np0005534003.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764011466.7876747-231-8198592890969/source _original_basename=tmp7_ms6atm follow=False checksum=0a5264336eaf669ce906803fabc64043ef3757da backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:07 np0005534003.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:08 np0005534003.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:11:08 np0005534003.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:11:08 np0005534003.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfhqgoeoesemvbcwncwwmpenwdqadma ; /usr/bin/python3'
Nov 24 19:11:08 np0005534003.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:08 np0005534003.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:11:08 np0005534003.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:09 np0005534003.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbrelgrkwspknbncpmasjvzpsxzsrwlj ; /usr/bin/python3'
Nov 24 19:11:09 np0005534003.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:09 np0005534003.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764011468.576161-273-169907831977167/source _original_basename=tmpea05pww0 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:09 np0005534003.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:09 np0005534003.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzomohfmekvlzhahptecpgdzhdkldviw ; /usr/bin/python3'
Nov 24 19:11:09 np0005534003.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:09 np0005534003.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-d1f4-66f7-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:11:09 np0005534003.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:10 np0005534003.novalocal python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-d1f4-66f7-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 19:11:11 np0005534003.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:31 np0005534003.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvkdzvqfuwxqkjwlrhswvqlicwbivopy ; /usr/bin/python3'
Nov 24 19:11:31 np0005534003.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:11:31 np0005534003.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:11:31 np0005534003.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Nov 24 19:11:34 np0005534003.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 19:12:09 np0005534003.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 19:12:09 np0005534003.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4420] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 19:12:09 np0005534003.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4682] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4729] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4737] device (eth1): carrier: link connected
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4741] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4754] policy: auto-activating connection 'Wired connection 1' (3f6c124f-2186-3ad9-bc47-40d15759b6fb)
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4763] device (eth1): Activation: starting connection 'Wired connection 1' (3f6c124f-2186-3ad9-bc47-40d15759b6fb)
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4765] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4770] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4779] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:12:09 np0005534003.novalocal NetworkManager[857]: <info>  [1764011529.4789] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:12:10 np0005534003.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-07bf-c04a-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:12:20 np0005534003.novalocal sudo[7050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybllkjwbkmkzvgwgjulktducwcmcnlsm ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 19:12:20 np0005534003.novalocal sudo[7050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:12:20 np0005534003.novalocal python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:12:20 np0005534003.novalocal sudo[7050]: pam_unix(sudo:session): session closed for user root
Nov 24 19:12:20 np0005534003.novalocal sudo[7123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahvpdwltjtpxbkcjfmctjjwcpcbdylrd ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 19:12:20 np0005534003.novalocal sudo[7123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:12:21 np0005534003.novalocal python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764011540.4041686-102-16187071426763/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d2110510302ce6649a929be0dec09647b22b1630 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:12:21 np0005534003.novalocal sudo[7123]: pam_unix(sudo:session): session closed for user root
Nov 24 19:12:21 np0005534003.novalocal sudo[7173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuusrocbkznomxmiiniatzeyocabzpwl ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 19:12:21 np0005534003.novalocal sudo[7173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:12:22 np0005534003.novalocal python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Stopping Network Manager...
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0527] caught SIGTERM, shutting down normally.
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0539] dhcp4 (eth0): canceled DHCP transaction
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0539] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0539] dhcp4 (eth0): state changed no lease
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0543] manager: NetworkManager state is now CONNECTING
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0636] dhcp4 (eth1): canceled DHCP transaction
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0637] dhcp4 (eth1): state changed no lease
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[857]: <info>  [1764011542.0794] exiting (success)
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Stopped Network Manager.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: NetworkManager.service: Consumed 1.898s CPU time, 10.2M memory peak.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Starting Network Manager...
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.1566] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b3da1bfc-5c9f-4e84-9159-06370a5e0bee)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.1570] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.1628] manager[0x55b1b6dab070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Starting Hostname Service...
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Started Hostname Service.
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2638] hostname: hostname: using hostnamed
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2639] hostname: static hostname changed from (none) to "np0005534003.novalocal"
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2645] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2652] manager[0x55b1b6dab070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2652] manager[0x55b1b6dab070]: rfkill: WWAN hardware radio set enabled
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2683] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2683] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2684] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2685] manager: Networking is enabled by state file
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2688] settings: Loaded settings plugin: keyfile (internal)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2692] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2725] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2736] dhcp: init: Using DHCP client 'internal'
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2738] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2743] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2749] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2757] device (lo): Activation: starting connection 'lo' (41cae1ef-0d4d-447a-80d8-eb6262a5c804)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2763] device (eth0): carrier: link connected
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2767] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2772] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2773] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2779] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2785] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2790] device (eth1): carrier: link connected
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2793] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2798] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3f6c124f-2186-3ad9-bc47-40d15759b6fb) (indicated)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2798] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2802] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2809] device (eth1): Activation: starting connection 'Wired connection 1' (3f6c124f-2186-3ad9-bc47-40d15759b6fb)
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Started Network Manager.
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2817] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2825] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2828] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2831] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2834] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2839] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2844] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2847] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2867] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2878] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2880] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2895] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2902] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2923] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2931] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 19:12:22 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011542.2942] device (lo): Activation: successful, device activated.
Nov 24 19:12:22 np0005534003.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 19:12:22 np0005534003.novalocal sudo[7173]: pam_unix(sudo:session): session closed for user root
Nov 24 19:12:22 np0005534003.novalocal python3[7240]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-07bf-c04a-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7095] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7106] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7181] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7219] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7221] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7224] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7228] device (eth0): Activation: successful, device activated.
Nov 24 19:12:23 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011543.7232] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 19:12:33 np0005534003.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 19:12:39 np0005534003.novalocal systemd[4301]: Starting Mark boot as successful...
Nov 24 19:12:39 np0005534003.novalocal systemd[4301]: Finished Mark boot as successful.
Nov 24 19:12:52 np0005534003.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.2850] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 19:13:07 np0005534003.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 19:13:07 np0005534003.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3199] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3202] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3211] device (eth1): Activation: successful, device activated.
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3217] manager: startup complete
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3219] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <warn>  [1764011587.3227] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3237] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3357] dhcp4 (eth1): canceled DHCP transaction
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3358] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3359] dhcp4 (eth1): state changed no lease
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3376] policy: auto-activating connection 'ci-private-network' (64497e37-9e92-5f20-a47f-5c77436a71c0)
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3380] device (eth1): Activation: starting connection 'ci-private-network' (64497e37-9e92-5f20-a47f-5c77436a71c0)
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3381] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3386] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3394] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3405] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3452] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3455] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:13:07 np0005534003.novalocal NetworkManager[7191]: <info>  [1764011587.3463] device (eth1): Activation: successful, device activated.
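What happened on eth1 above: the assumed 'Wired connection 1' profile waited out its 45-second DHCP transaction, failed with 'ip-config-unavailable', and NetworkManager then auto-activated the static 'ci-private-network' profile instead. A sketch for inspecting that fallback after the fact (profile names as they appear in this log):

    # which profile eth1 ended up bound to, plus its IPv4 state
    nmcli -f GENERAL.CONNECTION,IP4 device show eth1
    # compare the two candidates: autoconnect and dhcp vs. manual addressing
    nmcli -f connection.autoconnect,ipv4.method connection show 'Wired connection 1'
    nmcli -f connection.autoconnect,ipv4.method connection show ci-private-network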
Nov 24 19:13:17 np0005534003.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 19:13:22 np0005534003.novalocal sshd-session[4312]: Received disconnect from 38.102.83.114 port 46230:11: disconnected by user
Nov 24 19:13:22 np0005534003.novalocal sshd-session[4312]: Disconnected from user zuul 38.102.83.114 port 46230
Nov 24 19:13:22 np0005534003.novalocal sshd-session[4297]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:13:22 np0005534003.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Nov 24 19:13:23 np0005534003.novalocal sshd-session[7288]: Accepted publickey for zuul from 38.102.83.114 port 33422 ssh2: RSA SHA256:7SvGaq0vO1tX0FCwphjOH0o+Hv96ctrv4u16VrRbmZ0
Nov 24 19:13:23 np0005534003.novalocal systemd-logind[795]: New session 3 of user zuul.
Nov 24 19:13:23 np0005534003.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 24 19:13:23 np0005534003.novalocal sshd-session[7288]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:13:24 np0005534003.novalocal sudo[7367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrouwgfwazdavcmzcmmlfmpkvbvlziu ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 19:13:24 np0005534003.novalocal sudo[7367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:13:24 np0005534003.novalocal python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:13:24 np0005534003.novalocal sudo[7367]: pam_unix(sudo:session): session closed for user root
Nov 24 19:13:24 np0005534003.novalocal sudo[7440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkpyocjtkkjinczlrvxmweovsvxciiod ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 19:13:24 np0005534003.novalocal sudo[7440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:13:24 np0005534003.novalocal python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764011603.9859867-267-192335590703656/source _original_basename=tmp2th7ctrm follow=False checksum=f7d248fd8acf15430c050b9735d3506c26a79f16 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:13:24 np0005534003.novalocal sudo[7440]: pam_unix(sudo:session): session closed for user root
Nov 24 19:13:27 np0005534003.novalocal sshd-session[7291]: Connection closed by 38.102.83.114 port 33422
Nov 24 19:13:27 np0005534003.novalocal sshd-session[7288]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:13:27 np0005534003.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 19:13:27 np0005534003.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Nov 24 19:13:27 np0005534003.novalocal systemd-logind[795]: Removed session 3.
Nov 24 19:15:39 np0005534003.novalocal systemd[4301]: Created slice User Background Tasks Slice.
Nov 24 19:15:39 np0005534003.novalocal systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 19:15:39 np0005534003.novalocal systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 19:18:29 np0005534003.novalocal sshd-session[7471]: Invalid user test from 185.156.73.233 port 37900
Nov 24 19:18:29 np0005534003.novalocal sshd-session[7471]: Connection closed by invalid user test 185.156.73.233 port 37900 [preauth]
Nov 24 19:19:04 np0005534003.novalocal sshd-session[7474]: Accepted publickey for zuul from 38.102.83.114 port 35432 ssh2: RSA SHA256:7SvGaq0vO1tX0FCwphjOH0o+Hv96ctrv4u16VrRbmZ0
Nov 24 19:19:04 np0005534003.novalocal systemd-logind[795]: New session 4 of user zuul.
Nov 24 19:19:04 np0005534003.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 24 19:19:04 np0005534003.novalocal sshd-session[7474]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:19:04 np0005534003.novalocal sudo[7501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfmomtbowvlukzdxcrblorxapuucxzom ; /usr/bin/python3'
Nov 24 19:19:04 np0005534003.novalocal sudo[7501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:04 np0005534003.novalocal python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-2532-a547-000000001ccc-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:19:04 np0005534003.novalocal sudo[7501]: pam_unix(sudo:session): session closed for user root
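The lsblk call above resolves the root disk's device number; its output is what reappears as the "252:0" key in the io.max writes at 19:19:10 below. Standalone, the command is:

    # -n: no header, -d: whole disk only, -o MAJ:MIN: print just the device number
    lsblk -nd -o MAJ:MIN /dev/vda
    # on a virtio disk this typically prints: 252:0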
Nov 24 19:19:05 np0005534003.novalocal sudo[7530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pweijlqlgkahmhdxztcwpedxllfydnnz ; /usr/bin/python3'
Nov 24 19:19:05 np0005534003.novalocal sudo[7530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:05 np0005534003.novalocal python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:19:05 np0005534003.novalocal sudo[7530]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:05 np0005534003.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvliqtxlznazrqvyrwzcumpbikfxumwk ; /usr/bin/python3'
Nov 24 19:19:05 np0005534003.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:05 np0005534003.novalocal python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:19:05 np0005534003.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:05 np0005534003.novalocal sudo[7582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svosekonpishpylcpwedacnhvdrlyrto ; /usr/bin/python3'
Nov 24 19:19:05 np0005534003.novalocal sudo[7582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:05 np0005534003.novalocal python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:19:05 np0005534003.novalocal sudo[7582]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:05 np0005534003.novalocal sudo[7608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygakouoizujpxhbsbeaexlxrhokwyimr ; /usr/bin/python3'
Nov 24 19:19:05 np0005534003.novalocal sudo[7608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:06 np0005534003.novalocal python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:19:06 np0005534003.novalocal sudo[7608]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:06 np0005534003.novalocal sudo[7634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoyrkczlfxgckbojpbxsqytcybsmadfm ; /usr/bin/python3'
Nov 24 19:19:06 np0005534003.novalocal sudo[7634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:06 np0005534003.novalocal python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:19:06 np0005534003.novalocal sudo[7634]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:07 np0005534003.novalocal sudo[7712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doycrbvlztslafmzvxpuixihwaqulkew ; /usr/bin/python3'
Nov 24 19:19:07 np0005534003.novalocal sudo[7712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:07 np0005534003.novalocal python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:19:07 np0005534003.novalocal sudo[7712]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:07 np0005534003.novalocal sudo[7785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxiygofmdfujichbnlulozzkwtovgxlv ; /usr/bin/python3'
Nov 24 19:19:07 np0005534003.novalocal sudo[7785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:07 np0005534003.novalocal python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764011946.8913639-472-118801029169443/source _original_basename=tmpd30svun7 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:19:07 np0005534003.novalocal sudo[7785]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:08 np0005534003.novalocal sudo[7835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvuszikdlmwzmmqulhavnxrnqkoagdgy ; /usr/bin/python3'
Nov 24 19:19:08 np0005534003.novalocal sudo[7835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:08 np0005534003.novalocal python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 19:19:08 np0005534003.novalocal systemd[1]: Reloading.
Nov 24 19:19:08 np0005534003.novalocal systemd-rc-local-generator[7855]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:19:08 np0005534003.novalocal sudo[7835]: pam_unix(sudo:session): session closed for user root
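The two tasks above drop an override.conf into /etc/systemd/system.conf.d and then reload the systemd manager. The file's content is masked as NOT_LOGGING_PARAMETER, so the body below is purely hypothetical; only the path, mode, and reload step come from the log:

    mkdir -p /etc/systemd/system.conf.d
    # hypothetical content; the real override.conf is not logged
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    systemctl daemon-reload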
Nov 24 19:19:09 np0005534003.novalocal sudo[7891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgvxsmcwbidpfpfeequgtoidsgvpqvgd ; /usr/bin/python3'
Nov 24 19:19:09 np0005534003.novalocal sudo[7891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:10 np0005534003.novalocal python3[7893]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 19:19:10 np0005534003.novalocal sudo[7891]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:10 np0005534003.novalocal sudo[7917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndofrbyobdswhjxsxrhxcrzhykdnryho ; /usr/bin/python3'
Nov 24 19:19:10 np0005534003.novalocal sudo[7917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:10 np0005534003.novalocal python3[7919]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:19:10 np0005534003.novalocal sudo[7917]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:10 np0005534003.novalocal sudo[7945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnqydllunffmwdqfmsbnxobffvnjvnpo ; /usr/bin/python3'
Nov 24 19:19:10 np0005534003.novalocal sudo[7945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:10 np0005534003.novalocal python3[7947]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:19:10 np0005534003.novalocal sudo[7945]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:10 np0005534003.novalocal sudo[7973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyldndxuxbccmlexnpetwzgsfjjozwei ; /usr/bin/python3'
Nov 24 19:19:10 np0005534003.novalocal sudo[7973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:11 np0005534003.novalocal python3[7975]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:19:11 np0005534003.novalocal sudo[7973]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:11 np0005534003.novalocal sudo[8001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtikylemgvyiglyktqntpjnuvcnjagqp ; /usr/bin/python3'
Nov 24 19:19:11 np0005534003.novalocal sudo[8001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:11 np0005534003.novalocal python3[8003]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:19:11 np0005534003.novalocal sudo[8001]: pam_unix(sudo:session): session closed for user root
Nov 24 19:19:11 np0005534003.novalocal python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-2532-a547-000000001cd3-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
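The io.max writes above use the cgroup v2 syntax: one line per block device, "MAJ:MIN key=value ...", where riops/wiops cap read/write IOPS and rbps/wbps cap bytes per second (262144000 B/s = 250 MiB/s). Condensed into a loop over the same four cgroups (a sketch; assumes the io controller is enabled for them):

    dev=$(lsblk -nd -o MAJ:MIN /dev/vda)    # e.g. 252:0
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$dev riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$cg/io.max"
        cat "/sys/fs/cgroup/$cg/io.max"     # read back, as the verification task does
    done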
Nov 24 19:19:12 np0005534003.novalocal python3[8060]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:19:14 np0005534003.novalocal sshd-session[7477]: Connection closed by 38.102.83.114 port 35432
Nov 24 19:19:14 np0005534003.novalocal sshd-session[7474]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:19:14 np0005534003.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 19:19:14 np0005534003.novalocal systemd[1]: session-4.scope: Consumed 4.231s CPU time.
Nov 24 19:19:14 np0005534003.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Nov 24 19:19:14 np0005534003.novalocal systemd-logind[795]: Removed session 4.
Nov 24 19:19:15 np0005534003.novalocal sshd-session[8067]: Accepted publickey for zuul from 38.102.83.114 port 52626 ssh2: RSA SHA256:7SvGaq0vO1tX0FCwphjOH0o+Hv96ctrv4u16VrRbmZ0
Nov 24 19:19:15 np0005534003.novalocal systemd-logind[795]: New session 5 of user zuul.
Nov 24 19:19:16 np0005534003.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 24 19:19:16 np0005534003.novalocal sshd-session[8067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:19:16 np0005534003.novalocal sudo[8094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcelaibisjhboiqisykedsyobmuuxvt ; /usr/bin/python3'
Nov 24 19:19:16 np0005534003.novalocal sudo[8094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:19:16 np0005534003.novalocal python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 19:20:16 np0005534003.novalocal sshd-session[8281]: Invalid user mcserver from 14.63.196.175 port 55078
Nov 24 19:20:16 np0005534003.novalocal sshd-session[8281]: Received disconnect from 14.63.196.175 port 55078:11: Bye Bye [preauth]
Nov 24 19:20:16 np0005534003.novalocal sshd-session[8281]: Disconnected from invalid user mcserver 14.63.196.175 port 55078 [preauth]
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:20:36 np0005534003.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:20:45 np0005534003.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:20:53 np0005534003.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:20:54 np0005534003.novalocal setsebool[8434]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 19:20:54 np0005534003.novalocal setsebool[8434]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
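These two booleans are the SELinux virt-policy knobs that container and libvirt CI jobs commonly flip. The log only shows the runtime change; run by hand it would be:

    # -P also stores the values in the policy so they survive a reboot
    # (assumption; the log does not say whether -P was used)
    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1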
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:21:05 np0005534003.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:21:24 np0005534003.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 19:21:24 np0005534003.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:21:24 np0005534003.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:21:24 np0005534003.novalocal systemd[1]: Reloading.
Nov 24 19:21:25 np0005534003.novalocal systemd-rc-local-generator[9180]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:21:25 np0005534003.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 19:21:26 np0005534003.novalocal sudo[8094]: pam_unix(sudo:session): session closed for user root
Nov 24 19:21:28 np0005534003.novalocal python3[11220]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ef9-e89a-a94e-8c60-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:21:30 np0005534003.novalocal kernel: evm: overlay not supported
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: Starting D-Bus User Message Bus...
Nov 24 19:21:30 np0005534003.novalocal dbus-broker-launch[12162]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 19:21:30 np0005534003.novalocal dbus-broker-launch[12162]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: Started D-Bus User Message Bus.
Nov 24 19:21:30 np0005534003.novalocal dbus-broker-lau[12162]: Ready
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: Created slice Slice /user.
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: podman-11991.scope: unit configures an IP firewall, but not running as root.
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: Started podman-11991.scope.
Nov 24 19:21:30 np0005534003.novalocal systemd[4301]: Started podman-pause-0444cf35.scope.
Nov 24 19:21:30 np0005534003.novalocal sudo[12577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpzikaunipqihpjadtrmuoenclxqgnrn ; /usr/bin/python3'
Nov 24 19:21:30 np0005534003.novalocal sudo[12577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:21:31 np0005534003.novalocal python3[12595]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]] location = "38.102.83.17:5001" insecure = true path=/etc/containers/registries.conf block=[[registry]] location = "38.102.83.17:5001" insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:21:31 np0005534003.novalocal python3[12595]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 19:21:31 np0005534003.novalocal sudo[12577]: pam_unix(sudo:session): session closed for user root
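The blockinfile task above appends a marker-delimited [[registry]] stanza telling podman/buildah to treat the CI registry at 38.102.83.17:5001 as insecure (no TLS verification). A shell equivalent producing the same managed block:

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.17:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF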
Nov 24 19:21:31 np0005534003.novalocal sshd-session[8070]: Connection closed by 38.102.83.114 port 52626
Nov 24 19:21:31 np0005534003.novalocal sshd-session[8067]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:21:31 np0005534003.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 19:21:31 np0005534003.novalocal systemd[1]: session-5.scope: Consumed 1min 13.606s CPU time.
Nov 24 19:21:31 np0005534003.novalocal systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Nov 24 19:21:31 np0005534003.novalocal systemd-logind[795]: Removed session 5.
Nov 24 19:21:52 np0005534003.novalocal sshd-session[20861]: Unable to negotiate with 38.102.83.75 port 52246: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 19:21:52 np0005534003.novalocal sshd-session[20855]: Connection closed by 38.102.83.75 port 52234 [preauth]
Nov 24 19:21:52 np0005534003.novalocal sshd-session[20856]: Unable to negotiate with 38.102.83.75 port 52256: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 19:21:52 np0005534003.novalocal sshd-session[20859]: Connection closed by 38.102.83.75 port 52242 [preauth]
Nov 24 19:21:52 np0005534003.novalocal sshd-session[20862]: Unable to negotiate with 38.102.83.75 port 52258: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
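These preauth failures are clients that only accept ed25519 or FIDO security-key host key types, for which this sshd has no matching host key. If ed25519 support were actually wanted (it may not be; these connections look like scanners), generating the missing host key would suffice:

    ssh-keygen -q -N '' -t ed25519 -f /etc/ssh/ssh_host_ed25519_key
    systemctl restart sshd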
Nov 24 19:21:56 np0005534003.novalocal sshd-session[22581]: Accepted publickey for zuul from 38.102.83.114 port 37364 ssh2: RSA SHA256:7SvGaq0vO1tX0FCwphjOH0o+Hv96ctrv4u16VrRbmZ0
Nov 24 19:21:56 np0005534003.novalocal systemd-logind[795]: New session 6 of user zuul.
Nov 24 19:21:56 np0005534003.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 24 19:21:56 np0005534003.novalocal sshd-session[22581]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:21:57 np0005534003.novalocal python3[22685]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHCMp8l65AtdUUp5X6qKYua4KpYNt6EwkIj8ywYnxg8cnd7bB4COBPFgoYAtwjC4ijErt9nABTUfvE3VpEZSQsQ= zuul@np0005534002.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:21:57 np0005534003.novalocal sudo[22935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxaviwutnfaamuoezwlezjeoermxgbn ; /usr/bin/python3'
Nov 24 19:21:57 np0005534003.novalocal sudo[22935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:21:57 np0005534003.novalocal python3[22945]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHCMp8l65AtdUUp5X6qKYua4KpYNt6EwkIj8ywYnxg8cnd7bB4COBPFgoYAtwjC4ijErt9nABTUfvE3VpEZSQsQ= zuul@np0005534002.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:21:57 np0005534003.novalocal sudo[22935]: pam_unix(sudo:session): session closed for user root
Nov 24 19:21:58 np0005534003.novalocal sudo[23365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsuypjiepsdvosqeydodmfkiswiqbsdl ; /usr/bin/python3'
Nov 24 19:21:58 np0005534003.novalocal sudo[23365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:21:58 np0005534003.novalocal python3[23372]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534003.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 19:21:58 np0005534003.novalocal useradd[23455]: new group: name=cloud-admin, GID=1002
Nov 24 19:21:58 np0005534003.novalocal useradd[23455]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 24 19:21:58 np0005534003.novalocal sudo[23365]: pam_unix(sudo:session): session closed for user root
Nov 24 19:21:58 np0005534003.novalocal sudo[23579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eamfxwnyltavayhxwnwsrgzzswvrmvok ; /usr/bin/python3'
Nov 24 19:21:58 np0005534003.novalocal sudo[23579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:21:58 np0005534003.novalocal python3[23587]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHCMp8l65AtdUUp5X6qKYua4KpYNt6EwkIj8ywYnxg8cnd7bB4COBPFgoYAtwjC4ijErt9nABTUfvE3VpEZSQsQ= zuul@np0005534002.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 19:21:58 np0005534003.novalocal sudo[23579]: pam_unix(sudo:session): session closed for user root
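The three authorized_key tasks install the same zuul-provided ECDSA key for zuul, root, and the freshly created cloud-admin user (UID/GID 1002 above). A non-Ansible equivalent for the new user, with the key string abbreviated:

    useradd -m -s /bin/bash cloud-admin
    install -d -m 700 -o cloud-admin -g cloud-admin /home/cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005534002.novalocal' \
        >> /home/cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin /home/cloud-admin/.ssh/authorized_keys
    chmod 600 /home/cloud-admin/.ssh/authorized_keys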
Nov 24 19:21:59 np0005534003.novalocal sudo[23845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdylzwmnbhcymnfdrvbviekyhvfjqrm ; /usr/bin/python3'
Nov 24 19:21:59 np0005534003.novalocal sudo[23845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:21:59 np0005534003.novalocal python3[23848]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:21:59 np0005534003.novalocal sudo[23845]: pam_unix(sudo:session): session closed for user root
Nov 24 19:21:59 np0005534003.novalocal sudo[24108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyxwxoloygcwargthpcgkamcmcacvvfn ; /usr/bin/python3'
Nov 24 19:21:59 np0005534003.novalocal sudo[24108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:21:59 np0005534003.novalocal python3[24117]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764012119.1028428-135-82187735149773/source _original_basename=tmpk0vgsuem follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:21:59 np0005534003.novalocal sudo[24108]: pam_unix(sudo:session): session closed for user root
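The copy above installs /etc/sudoers.d/cloud-admin with mode 0640; its content is masked, so the rule below is only a guess at the usual CI passwordless grant. Whatever the body, validating it with visudo before relying on it is the safe pattern:

    # hypothetical rule; the real file content is NOT_LOGGING_PARAMETER above
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin    # exits non-zero if the syntax is broken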
Nov 24 19:22:00 np0005534003.novalocal sudo[24469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-demsgvpmejhxccpwmewggfxmerkbfbiq ; /usr/bin/python3'
Nov 24 19:22:00 np0005534003.novalocal sudo[24469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:22:00 np0005534003.novalocal python3[24478]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 24 19:22:00 np0005534003.novalocal systemd[1]: Starting Hostname Service...
Nov 24 19:22:00 np0005534003.novalocal systemd[1]: Started Hostname Service.
Nov 24 19:22:00 np0005534003.novalocal systemd-hostnamed[24583]: Changed pretty hostname to 'compute-0'
Nov 24 19:22:00 compute-0 systemd-hostnamed[24583]: Hostname set to <compute-0> (static)
Nov 24 19:22:00 compute-0 NetworkManager[7191]: <info>  [1764012120.8737] hostname: static hostname changed from "np0005534003.novalocal" to "compute-0"
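The ansible.builtin.hostname task with use=systemd goes through systemd-hostnamed, which is why the host field of every subsequent line flips from np0005534003.novalocal to compute-0. The interactive equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl    # shows "Static hostname: compute-0"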
Nov 24 19:22:00 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 19:22:00 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 19:22:00 compute-0 sudo[24469]: pam_unix(sudo:session): session closed for user root
Nov 24 19:22:01 compute-0 sshd-session[22625]: Connection closed by 38.102.83.114 port 37364
Nov 24 19:22:01 compute-0 sshd-session[22581]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:22:01 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 19:22:01 compute-0 systemd[1]: session-6.scope: Consumed 2.124s CPU time.
Nov 24 19:22:01 compute-0 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Nov 24 19:22:01 compute-0 systemd-logind[795]: Removed session 6.
Nov 24 19:22:10 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 19:22:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:22:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:22:28 compute-0 systemd[1]: man-db-cache-update.service: Consumed 59.246s CPU time.
Nov 24 19:22:28 compute-0 systemd[1]: run-r26451992edf3467d97516be7566c6259.service: Deactivated successfully.
Nov 24 19:22:30 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 19:23:15 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 19:23:15 compute-0 sshd-session[30186]: Invalid user support from 78.128.112.74 port 49130
Nov 24 19:23:15 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 19:23:15 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 19:23:15 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 19:23:15 compute-0 sshd-session[30186]: Connection closed by invalid user support 78.128.112.74 port 49130 [preauth]
Nov 24 19:26:11 compute-0 sshd-session[30194]: Accepted publickey for zuul from 38.102.83.75 port 42554 ssh2: RSA SHA256:7SvGaq0vO1tX0FCwphjOH0o+Hv96ctrv4u16VrRbmZ0
Nov 24 19:26:11 compute-0 systemd-logind[795]: New session 7 of user zuul.
Nov 24 19:26:11 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 24 19:26:11 compute-0 sshd-session[30194]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:26:12 compute-0 python3[30270]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:26:13 compute-0 sudo[30384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwdwjpfesyljvcownyfzcupiawaylvq ; /usr/bin/python3'
Nov 24 19:26:13 compute-0 sudo[30384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:13 compute-0 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:13 compute-0 sudo[30384]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:14 compute-0 sudo[30457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unqpmtbmjbmrhlgcvjbsofpspjyufudd ; /usr/bin/python3'
Nov 24 19:26:14 compute-0 sudo[30457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:14 compute-0 python3[30459]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:14 compute-0 sudo[30457]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:14 compute-0 sudo[30483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wknixbrrpalmrdecxzpnfwjmjlwffnta ; /usr/bin/python3'
Nov 24 19:26:14 compute-0 sudo[30483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:14 compute-0 python3[30485]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:14 compute-0 sudo[30483]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:14 compute-0 sudo[30556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgegxcvolektlkquvikjmajlbwawvoxb ; /usr/bin/python3'
Nov 24 19:26:14 compute-0 sudo[30556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:15 compute-0 python3[30558]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:15 compute-0 sudo[30556]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:15 compute-0 sudo[30582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhlkbbueodfkalsaqfbhyafnpoxlrlnn ; /usr/bin/python3'
Nov 24 19:26:15 compute-0 sudo[30582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:15 compute-0 python3[30584]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:15 compute-0 sudo[30582]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:15 compute-0 sudo[30655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uurmgnytfxpdptgurvlhzwxbsgwxlumu ; /usr/bin/python3'
Nov 24 19:26:15 compute-0 sudo[30655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:15 compute-0 python3[30657]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:15 compute-0 sudo[30655]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:15 compute-0 sudo[30681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulewpzzcqpxlhzxbtbyngpaljivjfnbl ; /usr/bin/python3'
Nov 24 19:26:15 compute-0 sudo[30681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:16 compute-0 python3[30683]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:16 compute-0 sudo[30681]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:16 compute-0 sudo[30754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fviaccxoxconxkgmstcjptidkyfqcycd ; /usr/bin/python3'
Nov 24 19:26:16 compute-0 sudo[30754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:16 compute-0 python3[30756]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:16 compute-0 sudo[30754]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:16 compute-0 sudo[30780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqirsgukdrzvxfilrefeyejejbxgrpl ; /usr/bin/python3'
Nov 24 19:26:16 compute-0 sudo[30780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:16 compute-0 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:16 compute-0 sudo[30780]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:17 compute-0 sudo[30853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snvtlezwsrlbhkndczxgeyoyuvnpgtku ; /usr/bin/python3'
Nov 24 19:26:17 compute-0 sudo[30853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:17 compute-0 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:17 compute-0 sudo[30853]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:17 compute-0 sudo[30879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuurntkavizcktojijduipwycpycjdkq ; /usr/bin/python3'
Nov 24 19:26:17 compute-0 sudo[30879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:17 compute-0 python3[30881]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:17 compute-0 sudo[30879]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:17 compute-0 sudo[30952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joevbalxhraboqxunrxfhylhrnjdaxta ; /usr/bin/python3'
Nov 24 19:26:17 compute-0 sudo[30952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:18 compute-0 python3[30954]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:18 compute-0 sudo[30952]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:18 compute-0 sudo[30978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvrjrpeoponrlmmovjwcwbnjqvyhhvwn ; /usr/bin/python3'
Nov 24 19:26:18 compute-0 sudo[30978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:18 compute-0 python3[30980]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:26:18 compute-0 sudo[30978]: pam_unix(sudo:session): session closed for user root
Nov 24 19:26:18 compute-0 sudo[31052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcibxzwoyudjjyuxbbfvekjwacdveato ; /usr/bin/python3'
Nov 24 19:26:18 compute-0 sudo[31052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:26:18 compute-0 python3[31054]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764012373.5483422-34341-19270488108196/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:26:18 compute-0 sudo[31052]: pam_unix(sudo:session): session closed for user root
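With the delorean and repo-setup-centos-* files in place under /etc/yum.repos.d/, a quick way to confirm dnf actually sees them (the dnf-makecache run at 19:30 below fetches the metadata on its own):

    dnf repolist     # should list the delorean-* and repo-setup-centos-* repos
    dnf makecache    # prefetch metadata, same as the timer-driven service below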
Nov 24 19:26:21 compute-0 sshd-session[31081]: Unable to negotiate with 192.168.122.11 port 33674: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 19:26:21 compute-0 sshd-session[31080]: Connection closed by 192.168.122.11 port 33658 [preauth]
Nov 24 19:26:21 compute-0 sshd-session[31082]: Connection closed by 192.168.122.11 port 33660 [preauth]
Nov 24 19:26:21 compute-0 sshd-session[31084]: Unable to negotiate with 192.168.122.11 port 33680: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 19:26:21 compute-0 sshd-session[31083]: Unable to negotiate with 192.168.122.11 port 33696: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 19:26:33 compute-0 python3[31113]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:29:25 compute-0 sshd-session[31115]: Connection closed by authenticating user root 185.156.73.233 port 47694 [preauth]
Nov 24 19:30:39 compute-0 systemd[1]: Starting dnf makecache...
Nov 24 19:30:39 compute-0 dnf[31119]: Failed determining last makecache time.
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-barbican-42b4c41831408a8e323 313 kB/s |  13 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.3 MB/s |  65 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.1 MB/s |  32 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-stevedore-c4acc5639fd2329372142 4.7 MB/s | 131 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-observabilityclient-2f31846d73c 952 kB/s |  25 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-os-net-config-bbae2ed8a159b0435a473f38 9.8 MB/s | 356 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.4 MB/s |  42 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-designate-tests-tempest-347fdbc 735 kB/s |  18 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-glance-1fd12c29b339f30fe823e 795 kB/s |  18 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.0 MB/s |  29 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-manila-3c01b7181572c95dac462 910 kB/s |  25 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-whitebox-neutron-tests-tempest- 5.8 MB/s | 154 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-octavia-ba397f07a7331190208c 1.1 MB/s |  26 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-watcher-c014f81a8647287f6dcc 567 kB/s |  16 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-tcib-1124124ec06aadbac34f0d340b 279 kB/s | 7.4 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 4.9 MB/s | 144 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-openstack-swift-dc98a8463506ac520c469a 537 kB/s |  14 kB     00:00
Nov 24 19:30:40 compute-0 dnf[31119]: delorean-python-tempestconf-8515371b7cceebd4282 1.7 MB/s |  53 kB     00:00
Nov 24 19:30:41 compute-0 dnf[31119]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.6 MB/s |  96 kB     00:00
Nov 24 19:30:41 compute-0 dnf[31119]: CentOS Stream 9 - BaseOS                         25 kB/s | 7.3 kB     00:00
Nov 24 19:30:41 compute-0 dnf[31119]: CentOS Stream 9 - AppStream                      75 kB/s | 7.4 kB     00:00
Nov 24 19:30:41 compute-0 dnf[31119]: CentOS Stream 9 - CRB                            74 kB/s | 7.2 kB     00:00
Nov 24 19:30:41 compute-0 dnf[31119]: CentOS Stream 9 - Extras packages                68 kB/s | 8.3 kB     00:00
Nov 24 19:30:41 compute-0 dnf[31119]: dlrn-antelope-testing                            27 MB/s | 1.1 MB     00:00
Nov 24 19:30:42 compute-0 dnf[31119]: dlrn-antelope-build-deps                         15 MB/s | 461 kB     00:00
Nov 24 19:30:42 compute-0 dnf[31119]: centos9-rabbitmq                                8.6 MB/s | 123 kB     00:00
Nov 24 19:30:42 compute-0 dnf[31119]: centos9-storage                                  22 MB/s | 415 kB     00:00
Nov 24 19:30:42 compute-0 dnf[31119]: centos9-opstools                                4.9 MB/s |  51 kB     00:00
Nov 24 19:30:42 compute-0 dnf[31119]: NFV SIG OpenvSwitch                              28 MB/s | 454 kB     00:00
Nov 24 19:30:43 compute-0 dnf[31119]: repo-setup-centos-appstream                      82 MB/s |  25 MB     00:00
Nov 24 19:30:48 compute-0 dnf[31119]: repo-setup-centos-baseos                         82 MB/s | 8.8 MB     00:00
Nov 24 19:30:50 compute-0 dnf[31119]: repo-setup-centos-highavailability              3.4 MB/s | 744 kB     00:00
Nov 24 19:30:50 compute-0 dnf[31119]: repo-setup-centos-powertools                     77 MB/s | 7.3 MB     00:00
Nov 24 19:30:52 compute-0 dnf[31119]: Extra Packages for Enterprise Linux 9 - x86_64   39 MB/s |  20 MB     00:00
Nov 24 19:31:04 compute-0 dnf[31119]: Metadata cache created.
Nov 24 19:31:04 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 19:31:04 compute-0 systemd[1]: Finished dnf makecache.
Nov 24 19:31:04 compute-0 systemd[1]: dnf-makecache.service: Consumed 23.148s CPU time.
Nov 24 19:31:33 compute-0 sshd-session[30197]: Received disconnect from 38.102.83.75 port 42554:11: disconnected by user
Nov 24 19:31:33 compute-0 sshd-session[30197]: Disconnected from user zuul 38.102.83.75 port 42554
Nov 24 19:31:33 compute-0 sshd-session[30194]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:31:33 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 19:31:33 compute-0 systemd[1]: session-7.scope: Consumed 5.721s CPU time.
Nov 24 19:31:33 compute-0 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Nov 24 19:31:33 compute-0 systemd-logind[795]: Removed session 7.
Nov 24 19:31:39 compute-0 sshd-session[31221]: Invalid user amssys from 14.63.196.175 port 51640
Nov 24 19:31:39 compute-0 sshd-session[31221]: Received disconnect from 14.63.196.175 port 51640:11: Bye Bye [preauth]
Nov 24 19:31:39 compute-0 sshd-session[31221]: Disconnected from invalid user amssys 14.63.196.175 port 51640 [preauth]
Nov 24 19:36:40 compute-0 sshd-session[31226]: Accepted publickey for zuul from 192.168.122.30 port 55292 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:36:40 compute-0 systemd-logind[795]: New session 8 of user zuul.
Nov 24 19:36:40 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 24 19:36:40 compute-0 sshd-session[31226]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:36:41 compute-0 python3.9[31379]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:36:42 compute-0 sudo[31558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingamkducowhfonywcqfimdoyxjocitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013002.3301487-32-219040660107605/AnsiballZ_command.py'
Nov 24 19:36:42 compute-0 sudo[31558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:36:43 compute-0 python3.9[31560]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:36:51 compute-0 sudo[31558]: pam_unix(sudo:session): session closed for user root
Nov 24 19:36:52 compute-0 sshd-session[31229]: Connection closed by 192.168.122.30 port 55292
Nov 24 19:36:52 compute-0 sshd-session[31226]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:36:52 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 19:36:52 compute-0 systemd[1]: session-8.scope: Consumed 8.543s CPU time.
Nov 24 19:36:52 compute-0 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Nov 24 19:36:52 compute-0 systemd-logind[795]: Removed session 8.
Nov 24 19:37:07 compute-0 sshd-session[31617]: Accepted publickey for zuul from 192.168.122.30 port 54622 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:37:07 compute-0 systemd-logind[795]: New session 9 of user zuul.
Nov 24 19:37:07 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 24 19:37:07 compute-0 sshd-session[31617]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:37:09 compute-0 python3.9[31770]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 19:37:10 compute-0 python3.9[31944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:37:11 compute-0 sudo[32094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabwssjlytztuhqwowjrkgxdouisizdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013030.760184-45-217574529039613/AnsiballZ_command.py'
Nov 24 19:37:11 compute-0 sudo[32094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:11 compute-0 python3.9[32096]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:37:11 compute-0 sudo[32094]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:12 compute-0 sudo[32247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmuumpjlltzzlzvbyocfglcdqtrlvray ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013032.0506873-57-108524888365856/AnsiballZ_stat.py'
Nov 24 19:37:12 compute-0 sudo[32247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:12 compute-0 python3.9[32249]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:37:12 compute-0 sudo[32247]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:13 compute-0 sudo[32399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgnqnayvgnxsazbrdrgjhgdpiujpzrxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013032.9075844-65-250271743515535/AnsiballZ_file.py'
Nov 24 19:37:13 compute-0 sudo[32399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:13 compute-0 python3.9[32401]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:37:13 compute-0 sudo[32399]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:13 compute-0 sudo[32551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyksuzqhebkchfzhvfqjusfrwoerisja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013033.7106173-73-235287896139281/AnsiballZ_stat.py'
Nov 24 19:37:13 compute-0 sudo[32551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:14 compute-0 python3.9[32553]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:37:14 compute-0 sudo[32551]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:14 compute-0 sudo[32674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upvdpdxhwgvtolqkqqothkoveqpkcuam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013033.7106173-73-235287896139281/AnsiballZ_copy.py'
Nov 24 19:37:14 compute-0 sudo[32674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:14 compute-0 python3.9[32676]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013033.7106173-73-235287896139281/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:37:14 compute-0 sudo[32674]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:15 compute-0 sudo[32826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpskxwdpwoygijzdyykqemunlvavsfsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013035.0626407-88-228715892756232/AnsiballZ_setup.py'
Nov 24 19:37:15 compute-0 sudo[32826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:15 compute-0 python3.9[32828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:37:15 compute-0 sudo[32826]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:16 compute-0 sudo[32982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhuvaxfuaffpkxghzpsniuwnnvznzoqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013036.099045-96-121333864910983/AnsiballZ_file.py'
Nov 24 19:37:16 compute-0 sudo[32982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:16 compute-0 python3.9[32984]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:37:16 compute-0 sudo[32982]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:17 compute-0 sudo[33134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrgsitonyuuwoisqurtxnjckvppehlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013036.9050643-105-148852493174432/AnsiballZ_file.py'
Nov 24 19:37:17 compute-0 sudo[33134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:17 compute-0 python3.9[33136]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:37:17 compute-0 sudo[33134]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:18 compute-0 python3.9[33286]: ansible-ansible.builtin.service_facts Invoked
Nov 24 19:37:22 compute-0 python3.9[33539]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:37:23 compute-0 python3.9[33689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:37:24 compute-0 python3.9[33843]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:37:25 compute-0 sudo[33999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btpembpwzvoklthspokazoswtamqilzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013044.9579072-153-165794347729651/AnsiballZ_setup.py'
Nov 24 19:37:25 compute-0 sudo[33999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:25 compute-0 python3.9[34001]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:37:25 compute-0 sudo[33999]: pam_unix(sudo:session): session closed for user root
Nov 24 19:37:26 compute-0 sudo[34083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-babxlnecgvlgyaipmozdmrznmjjxhmim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013044.9579072-153-165794347729651/AnsiballZ_dnf.py'
Nov 24 19:37:26 compute-0 sudo[34083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:37:26 compute-0 python3.9[34085]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:38:24 compute-0 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 19:38:24 compute-0 irqbalance[789]: IRQ 26 affinity is now unmanaged
Nov 24 19:38:56 compute-0 systemd[1]: Reloading.
Nov 24 19:38:56 compute-0 systemd-rc-local-generator[34567]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:38:56 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 19:38:56 compute-0 systemd[1]: Reloading.
Nov 24 19:38:56 compute-0 systemd-rc-local-generator[34613]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:38:57 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 19:38:57 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 19:38:57 compute-0 systemd[1]: Reloading.
Nov 24 19:38:57 compute-0 systemd-rc-local-generator[34652]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:38:57 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 19:38:57 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 19:38:57 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 19:38:57 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 19:40:20 compute-0 sshd-session[34825]: Connection closed by authenticating user root 185.156.73.233 port 63802 [preauth]
Nov 24 19:40:30 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:40:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:40:30 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 19:40:30 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:40:30 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:40:30 compute-0 systemd[1]: Reloading.
Nov 24 19:40:31 compute-0 systemd-rc-local-generator[34966]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:40:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 19:40:31 compute-0 sudo[34083]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:32 compute-0 sudo[35880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufciglbginuthnivlpufoummrsstxst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013231.8378727-165-109774428172125/AnsiballZ_command.py'
Nov 24 19:40:32 compute-0 sudo[35880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:40:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:40:32 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.571s CPU time.
Nov 24 19:40:32 compute-0 systemd[1]: run-r2e7907c6068b4c6091905978327e2353.service: Deactivated successfully.
Nov 24 19:40:32 compute-0 python3.9[35887]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:40:33 compute-0 sudo[35880]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:34 compute-0 sudo[36168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffxitzcnbcxdgxbrfsnyetgelspdzut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013233.9116492-173-257443519892863/AnsiballZ_selinux.py'
Nov 24 19:40:34 compute-0 sudo[36168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:34 compute-0 python3.9[36170]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 19:40:34 compute-0 sudo[36168]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:35 compute-0 sudo[36320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkurywsavxkrtrwozedrtazzapjekgqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013235.2789087-184-50451318265673/AnsiballZ_command.py'
Nov 24 19:40:35 compute-0 sudo[36320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:35 compute-0 python3.9[36322]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 19:40:36 compute-0 sudo[36320]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:37 compute-0 sudo[36473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqagzrlhinbcssnsqlnjkqznboaefexu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013236.9922838-192-26977174608353/AnsiballZ_file.py'
Nov 24 19:40:37 compute-0 sudo[36473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:38 compute-0 python3.9[36475]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:40:38 compute-0 sudo[36473]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:38 compute-0 sudo[36625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxnwjqxulhjyiydeqzbzqbjzqehduec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013238.366234-200-1313459890066/AnsiballZ_mount.py'
Nov 24 19:40:38 compute-0 sudo[36625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:39 compute-0 python3.9[36627]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 19:40:39 compute-0 sudo[36625]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:40 compute-0 sudo[36777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkspvfocsyocaspvhpopafsbjmwcsmbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013240.053356-228-209947845184477/AnsiballZ_file.py'
Nov 24 19:40:40 compute-0 sudo[36777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:40 compute-0 python3.9[36779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:40:40 compute-0 sudo[36777]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:41 compute-0 sudo[36929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvcgcbmhbzmyyprjpxnzgilfvcvryquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013240.8136837-236-136284194152822/AnsiballZ_stat.py'
Nov 24 19:40:41 compute-0 sudo[36929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:41 compute-0 python3.9[36931]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:40:41 compute-0 sudo[36929]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:41 compute-0 sudo[37052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uideitlefqbwmunuvehkbrybkdaernkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013240.8136837-236-136284194152822/AnsiballZ_copy.py'
Nov 24 19:40:41 compute-0 sudo[37052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:41 compute-0 python3.9[37054]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013240.8136837-236-136284194152822/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:40:41 compute-0 sudo[37052]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:42 compute-0 sudo[37204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ploqmniczomfnzvujxbizbjersragqcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013242.48308-260-14612092245090/AnsiballZ_stat.py'
Nov 24 19:40:42 compute-0 sudo[37204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:45 compute-0 python3.9[37206]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:40:45 compute-0 sudo[37204]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:46 compute-0 sudo[37357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olniyiddlmigcecvgmvcssxkbktsykhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013245.8961296-268-12797007954835/AnsiballZ_command.py'
Nov 24 19:40:46 compute-0 sudo[37357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:46 compute-0 python3.9[37359]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:40:46 compute-0 sudo[37357]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:47 compute-0 sudo[37510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utijemicgpexoehuozawhwmgcdmjunqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013246.6667316-276-152374189898233/AnsiballZ_file.py'
Nov 24 19:40:47 compute-0 sudo[37510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:47 compute-0 python3.9[37512]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:40:47 compute-0 sudo[37510]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:48 compute-0 sudo[37662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzojpvdgqrlobrlumnbrwukrhshdfbte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013247.5985432-287-101513894766397/AnsiballZ_getent.py'
Nov 24 19:40:48 compute-0 sudo[37662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:48 compute-0 python3.9[37664]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 19:40:48 compute-0 sudo[37662]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:48 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 19:40:48 compute-0 sudo[37816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbobqolgnbosiwznfkzqrfzglcwooyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013248.480084-295-68699151493826/AnsiballZ_group.py'
Nov 24 19:40:48 compute-0 sudo[37816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:49 compute-0 python3.9[37818]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 19:40:49 compute-0 groupadd[37819]: group added to /etc/group: name=qemu, GID=107
Nov 24 19:40:49 compute-0 groupadd[37819]: group added to /etc/gshadow: name=qemu
Nov 24 19:40:49 compute-0 groupadd[37819]: new group: name=qemu, GID=107
Nov 24 19:40:49 compute-0 sudo[37816]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:50 compute-0 sudo[37974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkezpoxyskdfbajydnvsokjkvptkxnzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013249.4349308-303-81788816042732/AnsiballZ_user.py'
Nov 24 19:40:50 compute-0 sudo[37974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:50 compute-0 python3.9[37976]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 19:40:50 compute-0 useradd[37978]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 19:40:50 compute-0 sudo[37974]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:50 compute-0 sudo[38136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfyzdibdxixkfbpaqmexrcvlwxzrdcqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013250.5595965-311-185199504454186/AnsiballZ_getent.py'
Nov 24 19:40:50 compute-0 sudo[38136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:51 compute-0 python3.9[38138]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 19:40:51 compute-0 sudo[38136]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:51 compute-0 sudo[38289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwkxcqetdaimsgvnuoqydxbtqdrbtsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013251.3573961-319-192413957526153/AnsiballZ_group.py'
Nov 24 19:40:51 compute-0 sudo[38289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:51 compute-0 python3.9[38291]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 19:40:51 compute-0 groupadd[38292]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 24 19:40:51 compute-0 groupadd[38292]: group added to /etc/gshadow: name=hugetlbfs
Nov 24 19:40:51 compute-0 groupadd[38292]: new group: name=hugetlbfs, GID=42477
Nov 24 19:40:51 compute-0 sudo[38289]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:52 compute-0 sshd-session[38006]: Invalid user admin from 27.79.44.141 port 37550
Nov 24 19:40:52 compute-0 sshd-session[38006]: Connection closed by invalid user admin 27.79.44.141 port 37550 [preauth]
Nov 24 19:40:52 compute-0 sudo[38447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlonhctxgubndcwzcaszrtulemozriy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013252.24125-328-60102461570595/AnsiballZ_file.py'
Nov 24 19:40:52 compute-0 sudo[38447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:52 compute-0 python3.9[38449]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 19:40:52 compute-0 sudo[38447]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:53 compute-0 sudo[38599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfvtyxpdxbwmmpiaiontbwgadajrglse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013253.215959-339-228733258969129/AnsiballZ_dnf.py'
Nov 24 19:40:53 compute-0 sudo[38599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:53 compute-0 python3.9[38601]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:40:55 compute-0 sudo[38599]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:55 compute-0 sudo[38752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdxiccztswbsouussypnwmilbjsthlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013255.4650586-347-114540676146844/AnsiballZ_file.py'
Nov 24 19:40:55 compute-0 sudo[38752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:56 compute-0 python3.9[38754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:40:56 compute-0 sudo[38752]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:56 compute-0 sudo[38904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmhtbbdwqxogfpygtwxwopuwwdzikrmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013256.1866076-355-86973946987262/AnsiballZ_stat.py'
Nov 24 19:40:56 compute-0 sudo[38904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:56 compute-0 python3.9[38906]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:40:56 compute-0 sudo[38904]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:57 compute-0 sudo[39027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cogvppnnvhncyuuivkvgswbgrhdqryvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013256.1866076-355-86973946987262/AnsiballZ_copy.py'
Nov 24 19:40:57 compute-0 sudo[39027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:57 compute-0 python3.9[39029]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764013256.1866076-355-86973946987262/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:40:57 compute-0 sudo[39027]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:58 compute-0 sudo[39179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbzfulrqnbkbosmealynvqcyclfsshel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013257.4629526-370-200930449732608/AnsiballZ_systemd.py'
Nov 24 19:40:58 compute-0 sudo[39179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:58 compute-0 python3.9[39181]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:40:58 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 19:40:58 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 19:40:58 compute-0 kernel: Bridge firewalling registered
Nov 24 19:40:58 compute-0 systemd-modules-load[39185]: Inserted module 'br_netfilter'
Nov 24 19:40:58 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 19:40:58 compute-0 sudo[39179]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:59 compute-0 sudo[39339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbvdhdjefczbkzjoccstdlfmeewpwhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013258.8042214-378-44311394708797/AnsiballZ_stat.py'
Nov 24 19:40:59 compute-0 sudo[39339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:40:59 compute-0 python3.9[39341]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:40:59 compute-0 sudo[39339]: pam_unix(sudo:session): session closed for user root
Nov 24 19:40:59 compute-0 sudo[39462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyohihdhiexdvbiarhneedoopuixywtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013258.8042214-378-44311394708797/AnsiballZ_copy.py'
Nov 24 19:40:59 compute-0 sudo[39462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:00 compute-0 python3.9[39464]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764013258.8042214-378-44311394708797/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:41:00 compute-0 sudo[39462]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:00 compute-0 sudo[39614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sompaoeqwjmfznmfjdqjdqoxztivqzyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013260.4296794-396-172497304924176/AnsiballZ_dnf.py'
Nov 24 19:41:00 compute-0 sudo[39614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:01 compute-0 python3.9[39616]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:41:04 compute-0 sshd-session[39618]: Invalid user user from 27.79.44.141 port 37394
Nov 24 19:41:05 compute-0 sshd-session[39618]: Connection closed by invalid user user 27.79.44.141 port 37394 [preauth]
Nov 24 19:41:11 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 19:41:11 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 19:41:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:41:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:41:11 compute-0 systemd[1]: Reloading.
Nov 24 19:41:11 compute-0 systemd-rc-local-generator[39720]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:41:12 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 19:41:12 compute-0 sudo[39614]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:13 compute-0 python3.9[41021]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:41:14 compute-0 python3.9[41892]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 19:41:14 compute-0 python3.9[42580]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:41:15 compute-0 sudo[43305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyfeukcmxkftrahsltnbhsxiosxjklyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013274.970884-435-63596176464646/AnsiballZ_command.py'
Nov 24 19:41:15 compute-0 sudo[43305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:15 compute-0 python3.9[43326]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:41:15 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 19:41:16 compute-0 systemd[1]: Starting Authorization Manager...
Nov 24 19:41:16 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 19:41:16 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:41:16 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:41:16 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.417s CPU time.
Nov 24 19:41:16 compute-0 systemd[1]: run-r8a0136a9c43b4a5584b478286c7cbe21.service: Deactivated successfully.
Nov 24 19:41:16 compute-0 polkitd[44045]: Started polkitd version 0.117
Nov 24 19:41:16 compute-0 polkitd[44045]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 19:41:16 compute-0 polkitd[44045]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 19:41:16 compute-0 polkitd[44045]: Finished loading, compiling and executing 2 rules
Nov 24 19:41:16 compute-0 polkitd[44045]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 24 19:41:16 compute-0 systemd[1]: Started Authorization Manager.
Nov 24 19:41:16 compute-0 sudo[43305]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:16 compute-0 sudo[44214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tigbnajqbzuqikzlsvttkqbumpwvxbem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013276.511106-444-65019340348925/AnsiballZ_systemd.py'
Nov 24 19:41:16 compute-0 sudo[44214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:17 compute-0 python3.9[44216]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:41:17 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 19:41:17 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 19:41:17 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 19:41:17 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 19:41:17 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 19:41:17 compute-0 sudo[44214]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:18 compute-0 python3.9[44377]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 19:41:20 compute-0 sudo[44527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvhxfzuztmamwrjuyroyxmsomoquugkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013280.066754-501-96863697131892/AnsiballZ_systemd.py'
Nov 24 19:41:20 compute-0 sudo[44527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:20 compute-0 python3.9[44529]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:41:20 compute-0 systemd[1]: Reloading.
Nov 24 19:41:20 compute-0 systemd-rc-local-generator[44555]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:41:21 compute-0 sudo[44527]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:21 compute-0 sudo[44716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bufrcvkxgdnywsjitgucxsqngatfvjlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013281.2249-501-275186503884877/AnsiballZ_systemd.py'
Nov 24 19:41:21 compute-0 sudo[44716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:21 compute-0 python3.9[44718]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:41:21 compute-0 systemd[1]: Reloading.
Nov 24 19:41:21 compute-0 systemd-rc-local-generator[44747]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:41:22 compute-0 sudo[44716]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:22 compute-0 sudo[44905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwbmmnfpuojnzyyrrveohtoqfatoxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013282.3939373-517-73303686681624/AnsiballZ_command.py'
Nov 24 19:41:22 compute-0 sudo[44905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:22 compute-0 python3.9[44907]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:41:23 compute-0 sudo[44905]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:23 compute-0 sudo[45058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnlwfocgwjdxshxsxjwrryjatqstcvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013283.2010946-525-60738418969461/AnsiballZ_command.py'
Nov 24 19:41:23 compute-0 sudo[45058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:23 compute-0 python3.9[45060]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:41:23 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 19:41:23 compute-0 sudo[45058]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:24 compute-0 sudo[45211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijedhswcukhicansqusjyxafrruuhnfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013283.9739795-533-15935495621170/AnsiballZ_command.py'
Nov 24 19:41:24 compute-0 sudo[45211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:24 compute-0 python3.9[45213]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:41:25 compute-0 sudo[45211]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:26 compute-0 sudo[45373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfojurjxgqyumzxpdatczamcjygccfrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013286.158783-541-53195806648265/AnsiballZ_command.py'
Nov 24 19:41:26 compute-0 sudo[45373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:26 compute-0 python3.9[45375]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:41:26 compute-0 sudo[45373]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:27 compute-0 sudo[45526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrjsnjhcevwhwtmxdjxowhmmbaezofvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013287.032608-549-247226727037865/AnsiballZ_systemd.py'
Nov 24 19:41:27 compute-0 sudo[45526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:27 compute-0 python3.9[45528]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:41:27 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 19:41:27 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 19:41:27 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 19:41:27 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 24 19:41:27 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 19:41:27 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 24 19:41:27 compute-0 sudo[45526]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:28 compute-0 sshd-session[31620]: Connection closed by 192.168.122.30 port 54622
Nov 24 19:41:28 compute-0 sshd-session[31617]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:41:28 compute-0 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Nov 24 19:41:28 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 19:41:28 compute-0 systemd[1]: session-9.scope: Consumed 2min 21.978s CPU time.
Nov 24 19:41:28 compute-0 systemd-logind[795]: Removed session 9.
Nov 24 19:41:34 compute-0 sshd-session[45558]: Accepted publickey for zuul from 192.168.122.30 port 50196 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:41:34 compute-0 systemd-logind[795]: New session 10 of user zuul.
Nov 24 19:41:34 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 24 19:41:34 compute-0 sshd-session[45558]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:41:35 compute-0 python3.9[45711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:41:36 compute-0 sudo[45865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wubecqoohfviiajyxmntloatvttilhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013295.902839-36-69241454164623/AnsiballZ_getent.py'
Nov 24 19:41:36 compute-0 sudo[45865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:36 compute-0 python3.9[45867]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 19:41:36 compute-0 sudo[45865]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:37 compute-0 sudo[46018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssqesbwthbaljcugbcnqhmzowcwfvzjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013296.8016746-44-242417878645968/AnsiballZ_group.py'
Nov 24 19:41:37 compute-0 sudo[46018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:37 compute-0 python3.9[46020]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 19:41:37 compute-0 groupadd[46021]: group added to /etc/group: name=openvswitch, GID=42476
Nov 24 19:41:37 compute-0 groupadd[46021]: group added to /etc/gshadow: name=openvswitch
Nov 24 19:41:37 compute-0 groupadd[46021]: new group: name=openvswitch, GID=42476
Nov 24 19:41:37 compute-0 sudo[46018]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:38 compute-0 sudo[46176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnvizivjghfidmcgxuwomnjkusnqrukk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013297.7511904-52-271433784911107/AnsiballZ_user.py'
Nov 24 19:41:38 compute-0 sudo[46176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:38 compute-0 python3.9[46178]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 19:41:38 compute-0 useradd[46180]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 19:41:38 compute-0 useradd[46180]: add 'openvswitch' to group 'hugetlbfs'
Nov 24 19:41:38 compute-0 useradd[46180]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 24 19:41:38 compute-0 sudo[46176]: pam_unix(sudo:session): session closed for user root
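[annotation] The getent/group/user tasks above are the standard idempotent provisioning pattern: look the account up first, then create the group and user with pinned IDs. A shell sketch using the same values the modules logged (uid/gid 42476, supplementary group hugetlbfs, nologin shell):

    getent passwd openvswitch || {
        groupadd --gid 42476 openvswitch
        useradd --uid 42476 --gid openvswitch --groups hugetlbfs \
                --shell /sbin/nologin --comment 'openvswitch user' openvswitch
    }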
Nov 24 19:41:39 compute-0 sudo[46336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsvmzunmyzbowyievcpeiwheyadgkomv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013298.9568384-62-69689189100466/AnsiballZ_setup.py'
Nov 24 19:41:39 compute-0 sudo[46336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:39 compute-0 python3.9[46338]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:41:39 compute-0 sudo[46336]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:40 compute-0 sudo[46420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjfbvoyqscxtyvhkwnowgdrtdssleepl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013298.9568384-62-69689189100466/AnsiballZ_dnf.py'
Nov 24 19:41:40 compute-0 sudo[46420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:40 compute-0 python3.9[46422]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 19:41:42 compute-0 sudo[46420]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:43 compute-0 sudo[46585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmhepyduvihsmqctbryqbexzefcjiuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013302.8848326-76-44633329012259/AnsiballZ_dnf.py'
Nov 24 19:41:43 compute-0 sudo[46585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:43 compute-0 python3.9[46587]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:41:51 compute-0 sshd-session[46602]: Connection closed by authenticating user root 27.79.44.141 port 46636 [preauth]
Nov 24 19:41:54 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:41:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:41:54 compute-0 groupadd[46612]: group added to /etc/group: name=unbound, GID=993
Nov 24 19:41:54 compute-0 groupadd[46612]: group added to /etc/gshadow: name=unbound
Nov 24 19:41:54 compute-0 groupadd[46612]: new group: name=unbound, GID=993
Nov 24 19:41:54 compute-0 useradd[46619]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 24 19:41:54 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 19:41:54 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 19:41:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:41:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:41:55 compute-0 systemd[1]: Reloading.
Nov 24 19:41:55 compute-0 systemd-sysv-generator[47118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:41:55 compute-0 systemd-rc-local-generator[47115]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:41:55 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 19:41:56 compute-0 sudo[46585]: pam_unix(sudo:session): session closed for user root
Nov 24 19:41:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:41:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:41:56 compute-0 systemd[1]: run-r427b8f6b3084406a8350e050fb49775e.service: Deactivated successfully.
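[annotation] The two dnf tasks split the openvswitch install into a fetch phase (download_only=True) and a commit phase, so a repository failure surfaces before anything changes on disk. The SELinux policy reload, the unbound account, and the man-db cache update interleaved above all appear to be scriptlet side effects of the packages pulled in by the install. Shell equivalent of the two phases:

    dnf -y install --downloadonly openvswitch
    dnf -y install openvswitch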
Nov 24 19:41:57 compute-0 sudo[47685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmacvsppymfjqgagirtwxhjbdsyjipnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013316.6264458-84-87539404603361/AnsiballZ_systemd.py'
Nov 24 19:41:57 compute-0 sudo[47685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:41:57 compute-0 python3.9[47687]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 19:41:57 compute-0 systemd[1]: Reloading.
Nov 24 19:41:57 compute-0 systemd-rc-local-generator[47718]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:41:57 compute-0 systemd-sysv-generator[47723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:41:58 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 19:41:58 compute-0 chown[47730]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 19:41:58 compute-0 ovs-ctl[47735]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 19:41:58 compute-0 ovs-ctl[47735]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 19:41:58 compute-0 ovs-ctl[47735]: Starting ovsdb-server [  OK  ]
Nov 24 19:41:58 compute-0 ovs-vsctl[47784]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 19:41:58 compute-0 ovs-vsctl[47804]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"2981bd26-4511-4552-b2b8-c2a668887f38\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 19:41:58 compute-0 ovs-ctl[47735]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 19:41:58 compute-0 ovs-ctl[47735]: Enabling remote OVSDB managers [  OK  ]
Nov 24 19:41:58 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 19:41:58 compute-0 ovs-vsctl[47810]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 19:41:58 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 19:41:58 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 19:41:58 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 19:41:58 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 19:41:58 compute-0 ovs-ctl[47854]: Inserting openvswitch module [  OK  ]
Nov 24 19:41:58 compute-0 ovs-ctl[47823]: Starting ovs-vswitchd [  OK  ]
Nov 24 19:41:58 compute-0 ovs-vsctl[47872]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 19:41:58 compute-0 ovs-ctl[47823]: Enabling remote OVSDB managers [  OK  ]
Nov 24 19:41:58 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 19:41:58 compute-0 systemd[1]: Starting Open vSwitch...
Nov 24 19:41:58 compute-0 systemd[1]: Finished Open vSwitch.
Nov 24 19:41:58 compute-0 sudo[47685]: pam_unix(sudo:session): session closed for user root
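[annotation] Enabling and starting openvswitch.service fans out to the database and forwarding units: on first start ovs-ctl creates /etc/openvswitch/conf.db, starts ovsdb-server, inserts the openvswitch kernel module, starts ovs-vswitchd, and stamps the system-id/ovs-version external IDs seen above. A sketch of the same step plus a quick sanity check (the verification commands are an addition, not from the log):

    systemctl enable --now openvswitch.service
    ovs-vsctl get Open_vSwitch . ovs_version
    ovs-vsctl show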
Nov 24 19:41:59 compute-0 python3.9[48024]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:42:00 compute-0 sudo[48175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcidtcwhaqvuayqyopnjyrkgguujmecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013319.7388132-102-19168972216917/AnsiballZ_sefcontext.py'
Nov 24 19:42:00 compute-0 sudo[48175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:00 compute-0 python3.9[48177]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 19:42:01 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 19:42:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 19:42:01 compute-0 sudo[48175]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:02 compute-0 python3.9[48332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:42:03 compute-0 sudo[48488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otdhmlhrbmzvfyxvgooyyfmifzfqcbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013323.1770344-120-9105272324420/AnsiballZ_dnf.py'
Nov 24 19:42:03 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 19:42:03 compute-0 sudo[48488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:03 compute-0 python3.9[48490]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:42:04 compute-0 sudo[48488]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:05 compute-0 sudo[48641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxoejbnorzprogulkkroleutcjddvza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013325.1688943-128-254310602302329/AnsiballZ_command.py'
Nov 24 19:42:05 compute-0 sudo[48641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:05 compute-0 python3.9[48643]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:42:06 compute-0 sudo[48641]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:07 compute-0 sudo[48928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhnsypdocudtlywyrllljzhivqqebby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013326.81796-136-103513909867678/AnsiballZ_file.py'
Nov 24 19:42:07 compute-0 sudo[48928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:07 compute-0 python3.9[48930]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 19:42:07 compute-0 sudo[48928]: pam_unix(sudo:session): session closed for user root
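[annotation] The sefcontext task registers a persistent file-context rule (the kernel SID-table lines above it are the policy reload it triggers), and the file task then creates /var/lib/edpm-config with mode 0750 carrying that label. A rough shell equivalent of the pair:

    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    mkdir -p /var/lib/edpm-config
    chmod 0750 /var/lib/edpm-config
    restorecon -Rv /var/lib/edpm-config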
Nov 24 19:42:08 compute-0 python3.9[49080]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:42:09 compute-0 sudo[49232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpffywlmetscbqowjpswdrpvwztrglfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013328.716022-152-157274182768563/AnsiballZ_dnf.py'
Nov 24 19:42:09 compute-0 sudo[49232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:09 compute-0 python3.9[49234]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:42:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:42:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:42:11 compute-0 systemd[1]: Reloading.
Nov 24 19:42:11 compute-0 systemd-rc-local-generator[49272]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:42:11 compute-0 systemd-sysv-generator[49277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:42:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 19:42:11 compute-0 sudo[49232]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:42:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:42:11 compute-0 systemd[1]: run-r8261d26c46b54ac59eb863d5954cd3e1.service: Deactivated successfully.
Nov 24 19:42:12 compute-0 sudo[49548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwzjlwmgsxdndwzbtpikuhktgoaocjnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013332.1014166-160-268410204666770/AnsiballZ_systemd.py'
Nov 24 19:42:12 compute-0 sudo[49548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:12 compute-0 python3.9[49550]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:42:12 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 19:42:12 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 19:42:12 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 19:42:12 compute-0 NetworkManager[7191]: <info>  [1764013332.8398] caught SIGTERM, shutting down normally.
Nov 24 19:42:12 compute-0 systemd[1]: Stopping Network Manager...
Nov 24 19:42:12 compute-0 NetworkManager[7191]: <info>  [1764013332.8416] dhcp4 (eth0): canceled DHCP transaction
Nov 24 19:42:12 compute-0 NetworkManager[7191]: <info>  [1764013332.8417] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:42:12 compute-0 NetworkManager[7191]: <info>  [1764013332.8417] dhcp4 (eth0): state changed no lease
Nov 24 19:42:12 compute-0 NetworkManager[7191]: <info>  [1764013332.8420] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 19:42:12 compute-0 NetworkManager[7191]: <info>  [1764013332.8500] exiting (success)
Nov 24 19:42:12 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 19:42:12 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 19:42:12 compute-0 systemd[1]: Stopped Network Manager.
Nov 24 19:42:12 compute-0 systemd[1]: NetworkManager.service: Consumed 12.130s CPU time, 4.0M memory peak, read 0B from disk, written 19.5K to disk.
Nov 24 19:42:12 compute-0 systemd[1]: Starting Network Manager...
Nov 24 19:42:12 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 19:42:12 compute-0 NetworkManager[49557]: <info>  [1764013332.9272] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b3da1bfc-5c9f-4e84-9159-06370a5e0bee)
Nov 24 19:42:12 compute-0 NetworkManager[49557]: <info>  [1764013332.9274] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 19:42:12 compute-0 NetworkManager[49557]: <info>  [1764013332.9341] manager[0x5594851ab090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 19:42:12 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 19:42:13 compute-0 systemd[1]: Started Hostname Service.
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0427] hostname: hostname: using hostnamed
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0428] hostname: static hostname changed from (none) to "compute-0"
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0431] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0434] manager[0x5594851ab090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0434] manager[0x5594851ab090]: rfkill: WWAN hardware radio set enabled
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0451] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0458] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0459] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0459] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0460] manager: Networking is enabled by state file
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0461] settings: Loaded settings plugin: keyfile (internal)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0465] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0487] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0494] dhcp: init: Using DHCP client 'internal'
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0496] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0500] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0504] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0510] device (lo): Activation: starting connection 'lo' (41cae1ef-0d4d-447a-80d8-eb6262a5c804)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0515] device (eth0): carrier: link connected
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0518] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0522] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0522] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0527] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0532] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0537] device (eth1): carrier: link connected
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0540] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0543] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (64497e37-9e92-5f20-a47f-5c77436a71c0) (indicated)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0543] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0548] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0553] device (eth1): Activation: starting connection 'ci-private-network' (64497e37-9e92-5f20-a47f-5c77436a71c0)
Nov 24 19:42:13 compute-0 systemd[1]: Started Network Manager.
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0558] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0563] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0565] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0567] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0568] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0571] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0572] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0574] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0577] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0583] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0584] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0591] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0601] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0625] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0629] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0690] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0695] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0697] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0698] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0702] device (lo): Activation: successful, device activated.
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0707] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0709] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0712] device (eth1): Activation: successful, device activated.
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0720] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0721] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0724] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0727] device (eth0): Activation: successful, device activated.
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0730] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 19:42:13 compute-0 NetworkManager[49557]: <info>  [1764013333.0732] manager: startup complete
Nov 24 19:42:13 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 24 19:42:13 compute-0 sudo[49548]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:13 compute-0 systemd[1]: Finished Network Manager Wait Online.
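[annotation] The restart drops the DHCP lease on eth0, and the new NetworkManager instance re-assumes the existing lo/eth0/eth1 configuration on startup; the play then gates on NetworkManager-wait-online before continuing. Shell sketch of the same sequence (the 60 s timeout is an assumption):

    systemctl restart NetworkManager.service
    # block until NM reports startup complete (what wait-online does)
    nm-online -s -q --timeout=60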
Nov 24 19:42:13 compute-0 sudo[49774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgvaojaqmpphjrmudmwqvbjhnitwdzjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013333.2792404-168-53840317718324/AnsiballZ_dnf.py'
Nov 24 19:42:13 compute-0 sudo[49774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:13 compute-0 python3.9[49776]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:42:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:42:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:42:19 compute-0 systemd[1]: Reloading.
Nov 24 19:42:19 compute-0 systemd-rc-local-generator[49828]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:42:19 compute-0 systemd-sysv-generator[49832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:42:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 19:42:22 compute-0 sudo[49774]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:23 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 19:42:23 compute-0 sudo[50234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxkikenpgtplbpxpnoqxwvbcxbhadan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013342.8895047-180-275169239350809/AnsiballZ_stat.py'
Nov 24 19:42:23 compute-0 sudo[50234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:42:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:42:23 compute-0 systemd[1]: run-r3577dac5e1ca4fd9971739e4db06e2ed.service: Deactivated successfully.
Nov 24 19:42:23 compute-0 python3.9[50236]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:42:23 compute-0 sudo[50234]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:24 compute-0 sudo[50388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpihyctdrlnxthcubmcqyyjpqkwqmldw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013343.7319117-189-123320045802898/AnsiballZ_ini_file.py'
Nov 24 19:42:24 compute-0 sudo[50388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:24 compute-0 python3.9[50390]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:24 compute-0 sudo[50388]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:25 compute-0 sudo[50542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bizddjftpxzmawxltfjllsuvduphuucz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013344.7872703-199-263636077776560/AnsiballZ_ini_file.py'
Nov 24 19:42:25 compute-0 sudo[50542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:25 compute-0 python3.9[50544]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:25 compute-0 sudo[50542]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:25 compute-0 sudo[50694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfvmieccmscfzumadjkxqxyxqmsbmaoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013345.4733875-199-10891738254797/AnsiballZ_ini_file.py'
Nov 24 19:42:25 compute-0 sudo[50694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:25 compute-0 python3.9[50696]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:25 compute-0 sudo[50694]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:26 compute-0 sudo[50846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whoboqyfcsvogebkpudutlpazxxsixjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013346.140662-214-34610912254558/AnsiballZ_ini_file.py'
Nov 24 19:42:26 compute-0 sudo[50846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:26 compute-0 python3.9[50848]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:26 compute-0 sudo[50846]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:27 compute-0 sudo[50998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypzvwxcxmqkwzbfmkxadwgfanexmwwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013346.7568552-214-116929252460200/AnsiballZ_ini_file.py'
Nov 24 19:42:27 compute-0 sudo[50998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:27 compute-0 python3.9[51000]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:27 compute-0 sudo[50998]: pam_unix(sudo:session): session closed for user root
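[annotation] The five ini_file tasks set no-auto-default=* so NetworkManager stops generating "Wired connection" profiles for unconfigured NICs, and strip any dns=none / rc-manager=unmanaged overrides (typically left by cloud-init) so NM returns to its default resolv.conf handling. With crudini, installed earlier in this run, the same edits would look like:

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    test -f /etc/NetworkManager/conf.d/99-cloud-init.conf && {
        crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
        crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager
    }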
Nov 24 19:42:27 compute-0 sudo[51150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxsqmrcrimgeeojrsqfxrvmyytsxozvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013347.4921079-229-141843802825301/AnsiballZ_stat.py'
Nov 24 19:42:27 compute-0 sudo[51150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:27 compute-0 python3.9[51152]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:42:28 compute-0 sudo[51150]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:28 compute-0 sudo[51273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odoigzaqmettmutssgnorblakisldcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013347.4921079-229-141843802825301/AnsiballZ_copy.py'
Nov 24 19:42:28 compute-0 sudo[51273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:28 compute-0 python3.9[51275]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013347.4921079-229-141843802825301/.source _original_basename=.4gn7zcwf follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:28 compute-0 sudo[51273]: pam_unix(sudo:session): session closed for user root
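[annotation] The stat/copy pair deploys a dhclient-enter-hooks script (its content is not logged) to /etc/dhcp with mode 0755. Deploying such a hook by hand, assuming a local source file of the same name:

    install -m 0755 ./dhclient-enter-hooks /etc/dhcp/dhclient-enter-hooks
    sha1sum /etc/dhcp/dhclient-enter-hooks   # the log records f6278a40de79...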
Nov 24 19:42:29 compute-0 sudo[51425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobrzyrrgmworvgyytabdslevcexjfcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013348.933213-244-86511916826077/AnsiballZ_file.py'
Nov 24 19:42:29 compute-0 sudo[51425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:29 compute-0 python3.9[51427]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:29 compute-0 sudo[51425]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:30 compute-0 sudo[51577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdepkijgzgxhvvpmnuajhrsfjhtvohn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013349.7115424-252-55958973087206/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 24 19:42:30 compute-0 sudo[51577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:30 compute-0 python3.9[51579]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 19:42:30 compute-0 sudo[51577]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:30 compute-0 sudo[51729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qekormvoaczxwcfeuckhbuumkpaovlxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013350.637776-261-275453563631974/AnsiballZ_file.py'
Nov 24 19:42:30 compute-0 sudo[51729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:31 compute-0 python3.9[51731]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:31 compute-0 sudo[51729]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:31 compute-0 sudo[51881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpcuuavmddmuzmkialodtfwsqsbjhrgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013351.4810674-271-192660604266761/AnsiballZ_stat.py'
Nov 24 19:42:31 compute-0 sudo[51881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:31 compute-0 sudo[51881]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:32 compute-0 sudo[52004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxeqjkatsiefshutaiobyppvazuhesq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013351.4810674-271-192660604266761/AnsiballZ_copy.py'
Nov 24 19:42:32 compute-0 sudo[52004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:32 compute-0 sudo[52004]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:33 compute-0 sudo[52156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apjqracbujemchwqyjxfgilwtiivmfue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013352.6610534-286-279660000490585/AnsiballZ_slurp.py'
Nov 24 19:42:33 compute-0 sudo[52156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:33 compute-0 python3.9[52158]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 19:42:33 compute-0 sudo[52156]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:34 compute-0 sudo[52331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkdwnqqalsguohvjdhpaccleaumbuyzm ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013353.5894082-295-118008027282107/async_wrapper.py j36226959588 300 /home/zuul/.ansible/tmp/ansible-tmp-1764013353.5894082-295-118008027282107/AnsiballZ_edpm_os_net_config.py _'
Nov 24 19:42:34 compute-0 sudo[52331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:34 compute-0 ansible-async_wrapper.py[52333]: Invoked with j36226959588 300 /home/zuul/.ansible/tmp/ansible-tmp-1764013353.5894082-295-118008027282107/AnsiballZ_edpm_os_net_config.py _
Nov 24 19:42:34 compute-0 ansible-async_wrapper.py[52336]: Starting module and watcher
Nov 24 19:42:34 compute-0 ansible-async_wrapper.py[52336]: Start watching 52337 (300)
Nov 24 19:42:34 compute-0 ansible-async_wrapper.py[52337]: Start module (52337)
Nov 24 19:42:34 compute-0 ansible-async_wrapper.py[52333]: Return async_wrapper task started.
Nov 24 19:42:34 compute-0 sudo[52331]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:34 compute-0 python3.9[52338]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
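[annotation] os-net-config is launched through Ansible's async wrapper with a 300 s watchdog rather than inline, since applying the network layout can drop the very SSH connection Ansible is using. The logged module arguments map roughly to this CLI (use_nmstate=True selects the nmstate-backed provider; with --detailed-exit-codes, rc 2 conventionally means "changed successfully"):

    os-net-config --config-file /etc/os-net-config/config.yaml \
                  --debug --detailed-exit-codes --cleanup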
Nov 24 19:42:35 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 19:42:35 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 19:42:35 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 19:42:35 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 19:42:35 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5520] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5531] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52339 uid=0 result="success"
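[annotation] Before touching any connections, the tool takes a NetworkManager checkpoint so the whole change set can be rolled back atomically if connectivity is lost; the second audit line extends the rollback timer. Checkpoints are a D-Bus-only API (no nmcli front-end); a sketch of the create call, with an empty device list and an assumed 60 s timeout:

    # CheckpointCreate(devices: ao, rollback_timeout: u, flags: u) -> checkpoint path
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 0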
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5914] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5916] audit: op="connection-add" uuid="717e5043-5415-4bca-a04d-609c3ed4e7a9" name="br-ex-br" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5929] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5931] audit: op="connection-add" uuid="ba2395e4-d710-4e91-ac3a-b618a00699b8" name="br-ex-port" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5942] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5944] audit: op="connection-add" uuid="19c626b1-686b-4924-b078-7bb0fff35b4a" name="eth1-port" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5956] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5958] audit: op="connection-add" uuid="842d8ac4-3d9f-4a30-ba96-88a9fed32c49" name="vlan20-port" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5969] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5971] audit: op="connection-add" uuid="f92d5775-b4d4-49d0-8a45-93cff2ed6be4" name="vlan21-port" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5982] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5983] audit: op="connection-add" uuid="98677d2f-5af2-4345-b830-95bca0690aea" name="vlan22-port" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5995] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.5997] audit: op="connection-add" uuid="f8d6554a-b4db-4f16-9f1e-5c3392ac67de" name="vlan23-port" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6017] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6034] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6036] audit: op="connection-add" uuid="dfbf1414-d325-4eaa-936b-7ca1191fdd61" name="br-ex-if" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6078] audit: op="connection-update" uuid="64497e37-9e92-5f20-a47f-5c77436a71c0" name="ci-private-network" args="ovs-external-ids.data,ipv6.addresses,ipv6.method,ipv6.routing-rules,ipv6.dns,ipv6.addr-gen-mode,ipv6.routes,connection.slave-type,connection.controller,connection.master,connection.port-type,connection.timestamp,ipv4.addresses,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.routes,ovs-interface.type" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6095] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6097] audit: op="connection-add" uuid="c0944ca9-e4c8-49cd-8525-4b2cec03338a" name="vlan20-if" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6114] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6116] audit: op="connection-add" uuid="3f0217ab-7678-425f-bddf-c7ee6adc36b4" name="vlan21-if" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6133] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6135] audit: op="connection-add" uuid="98c851fb-70a5-40bb-89e6-5400a9dc18e7" name="vlan22-if" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6152] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6154] audit: op="connection-add" uuid="26b79d0a-3aa4-46c3-935e-2a8ac0b319fe" name="vlan23-if" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6166] audit: op="connection-delete" uuid="3f6c124f-2186-3ad9-bc47-40d15759b6fb" name="Wired connection 1" pid=52339 uid=0 result="success"
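[annotation] The audit trail above is os-net-config building the br-ex topology through NetworkManager's OVS plugin: one ovs-bridge profile, an ovs-port plus ovs-interface pair per attachment (br-ex itself, eth1, and the vlan20-23 internal ports), updates to the assumed eth0/eth1 profiles, and removal of the leftover "Wired connection 1". Hand-built with nmcli, the bridge/port/interface triple follows this pattern (connection names match the log; the IP settings are placeholders):

    nmcli connection add type ovs-bridge conn.interface br-ex con-name br-ex-br
    nmcli connection add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
    nmcli connection add type ovs-interface slave-type ovs-port conn.interface br-ex \
        master br-ex-port con-name br-ex-if ipv4.method disabled ipv6.method disabled
    # a tagged internal port, e.g. vlan20:
    nmcli connection add type ovs-port conn.interface vlan20 master br-ex-br \
        con-name vlan20-port ovs-port.tag 20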
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6179] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6190] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6194] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (717e5043-5415-4bca-a04d-609c3ed4e7a9)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6195] audit: op="connection-activate" uuid="717e5043-5415-4bca-a04d-609c3ed4e7a9" name="br-ex-br" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6197] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6204] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6209] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ba2395e4-d710-4e91-ac3a-b618a00699b8)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6211] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6217] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6221] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (19c626b1-686b-4924-b078-7bb0fff35b4a)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6223] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6231] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6235] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (842d8ac4-3d9f-4a30-ba96-88a9fed32c49)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6237] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6244] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6248] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f92d5775-b4d4-49d0-8a45-93cff2ed6be4)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6250] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6257] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6262] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (98677d2f-5af2-4345-b830-95bca0690aea)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6263] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6271] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6275] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (f8d6554a-b4db-4f16-9f1e-5c3392ac67de)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6276] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6279] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6281] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6288] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6293] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6298] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (dfbf1414-d325-4eaa-936b-7ca1191fdd61)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6299] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6302] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6304] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6306] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6308] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6319] device (eth1): disconnecting for new activation request.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6320] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6323] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6325] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6327] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6330] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6335] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6339] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c0944ca9-e4c8-49cd-8525-4b2cec03338a)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6340] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6343] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6345] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6347] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6352] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6357] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6361] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (3f0217ab-7678-425f-bddf-c7ee6adc36b4)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6362] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6366] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6368] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6369] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6372] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6377] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6382] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (98c851fb-70a5-40bb-89e6-5400a9dc18e7)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6383] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6386] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6389] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6390] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6394] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6399] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6405] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (26b79d0a-3aa4-46c3-935e-2a8ac0b319fe)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6406] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6411] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6413] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6414] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6416] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6432] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6434] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6437] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6439] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6448] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6452] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6457] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6461] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6463] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6469] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6474] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6478] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6480] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6486] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6490] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6493] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6495] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6501] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6505] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6508] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6510] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6517] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6521] dhcp4 (eth0): canceled DHCP transaction
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6521] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6521] dhcp4 (eth0): state changed no lease
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6523] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6535] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 19:42:36 compute-0 kernel: Timeout policy base is empty
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6539] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52339 uid=0 result="fail" reason="Device is not activated"
Nov 24 19:42:36 compute-0 systemd-udevd[52343]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6573] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6579] dhcp4 (eth0): state changed new lease, address=38.102.83.22
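The new lease on eth0 (38.102.83.22) can be verified from the client side; a minimal check, assuming the usual nmcli field groups:

    nmcli -f DHCP4 device show eth0   # dump the DHCP options behind the lease
    ip -4 addr show dev eth0          # the leased address should be listed here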
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6621] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6632] device (eth1): disconnecting for new activation request.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6633] audit: op="connection-activate" uuid="64497e37-9e92-5f20-a47f-5c77436a71c0" name="ci-private-network" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6637] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 19:42:36 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6667] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6671] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6783] device (eth1): Activation: starting connection 'ci-private-network' (64497e37-9e92-5f20-a47f-5c77436a71c0)
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6788] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6808] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6813] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6820] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6824] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6829] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52339 uid=0 result="success"
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6829] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6831] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6832] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6833] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6834] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6835] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6838] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6844] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6848] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6851] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6855] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6858] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6862] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6865] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6871] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6875] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6881] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6885] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6891] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6897] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6902] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6963] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6969] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.6974] device (eth1): Activation: successful, device activated.
Nov 24 19:42:36 compute-0 kernel: br-ex: entered promiscuous mode
Nov 24 19:42:36 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7152] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7166] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 kernel: vlan22: entered promiscuous mode
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7224] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7226] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7243] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 kernel: vlan21: entered promiscuous mode
Nov 24 19:42:36 compute-0 systemd-udevd[52344]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:42:36 compute-0 kernel: vlan23: entered promiscuous mode
Nov 24 19:42:36 compute-0 systemd-udevd[52345]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7395] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7406] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7427] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7428] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7433] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 kernel: vlan20: entered promiscuous mode
Nov 24 19:42:36 compute-0 systemd-udevd[52452]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7481] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7496] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7510] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7511] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7516] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7552] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7577] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7590] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7597] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7604] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7610] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7631] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7668] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7673] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 19:42:36 compute-0 NetworkManager[49557]: <info>  [1764013356.7680] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
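The connection set activated above (ovs-bridge br-ex, one ovs-port/ovs-interface pair per device, eth1 enslaved through eth1-port, and four VLAN pairs) could be recreated by hand roughly as sketched below. The VLAN tags and property choices are assumptions inferred from the connection names, not recorded in the log:

    nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex-br
    nmcli conn add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex master br-ex-port con-name br-ex-if
    nmcli conn add type ovs-port conn.interface eth1 master br-ex-br con-name eth1-port
    nmcli conn add type ethernet conn.interface eth1 master eth1-port con-name ci-private-network
    for v in 20 21 22 23; do
        # tag assumed to match the interface name; not confirmed by the log
        nmcli conn add type ovs-port conn.interface vlan$v master br-ex-br ovs-port.tag $v con-name vlan$v-port
        nmcli conn add type ovs-interface slave-type ovs-port conn.interface vlan$v master vlan$v-port con-name vlan$v-if
    done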
Nov 24 19:42:37 compute-0 NetworkManager[49557]: <info>  [1764013357.8581] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.0913] checkpoint[0x559485182950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.0917] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 sudo[52696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axnymjndvdmxizvxormipnsiqasqjjcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013357.7854233-295-141412620418119/AnsiballZ_async_status.py'
Nov 24 19:42:38 compute-0 sudo[52696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.4724] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.4737] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 python3.9[52698]: ansible-ansible.legacy.async_status Invoked with jid=j36226959588.52333 mode=status _async_dir=/root/.ansible_async
Nov 24 19:42:38 compute-0 sudo[52696]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.6767] audit: op="networking-control" arg="global-dns-configuration" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.6800] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.6837] audit: op="networking-control" arg="global-dns-configuration" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.6859] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.8146] checkpoint[0x559485182a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 19:42:38 compute-0 NetworkManager[49557]: <info>  [1764013358.8150] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52339 uid=0 result="success"
Nov 24 19:42:38 compute-0 ansible-async_wrapper.py[52337]: Module complete (52337)
Nov 24 19:42:39 compute-0 ansible-async_wrapper.py[52336]: Done in kid B.
Nov 24 19:42:41 compute-0 sudo[52801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftfynpnitlrglajylgwcyuoohtynizlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013357.7854233-295-141412620418119/AnsiballZ_async_status.py'
Nov 24 19:42:41 compute-0 sudo[52801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:41 compute-0 python3.9[52803]: ansible-ansible.legacy.async_status Invoked with jid=j36226959588.52333 mode=status _async_dir=/root/.ansible_async
Nov 24 19:42:41 compute-0 sudo[52801]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:42 compute-0 sudo[52901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihddxfcusbstujtdsrmyxiexeuoutoxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013357.7854233-295-141412620418119/AnsiballZ_async_status.py'
Nov 24 19:42:42 compute-0 sudo[52901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:42 compute-0 python3.9[52903]: ansible-ansible.legacy.async_status Invoked with jid=j36226959588.52333 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 19:42:42 compute-0 sudo[52901]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:42 compute-0 sudo[53053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbiumvssooejzjblpjxyeyhacjhfjrqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013362.6715872-322-193014447849066/AnsiballZ_stat.py'
Nov 24 19:42:42 compute-0 sudo[53053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:43 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 19:42:43 compute-0 python3.9[53055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:42:43 compute-0 sudo[53053]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:43 compute-0 sudo[53179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leklagyjpktdqjoqzqpakskkpzifsqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013362.6715872-322-193014447849066/AnsiballZ_copy.py'
Nov 24 19:42:43 compute-0 sudo[53179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:43 compute-0 python3.9[53181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013362.6715872-322-193014447849066/.source.returncode _original_basename=.5qn9ish8 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:43 compute-0 sudo[53179]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:44 compute-0 sudo[53331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwknoqyshqbsanjwdjnqcpzvrwmmhcwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013363.9416745-338-91998729116449/AnsiballZ_stat.py'
Nov 24 19:42:44 compute-0 sudo[53331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:44 compute-0 python3.9[53333]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:42:44 compute-0 sudo[53331]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:44 compute-0 sudo[53454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljhpunjrwmsvcsrbdapcxwgcpfpukri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013363.9416745-338-91998729116449/AnsiballZ_copy.py'
Nov 24 19:42:44 compute-0 sudo[53454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:45 compute-0 python3.9[53456]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013363.9416745-338-91998729116449/.source.cfg _original_basename=.a64jirwm follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:42:45 compute-0 sudo[53454]: pam_unix(sudo:session): session closed for user root
Nov 24 19:42:45 compute-0 sudo[53607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hznxoiyqdigzmpyuwzrtbbnhcgxihgob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013365.3885071-353-100889484432120/AnsiballZ_systemd.py'
Nov 24 19:42:45 compute-0 sudo[53607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:42:46 compute-0 python3.9[53609]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:42:46 compute-0 systemd[1]: Reloading Network Manager...
Nov 24 19:42:46 compute-0 NetworkManager[49557]: <info>  [1764013366.2319] audit: op="reload" arg="0" pid=53613 uid=0 result="success"
Nov 24 19:42:46 compute-0 NetworkManager[49557]: <info>  [1764013366.2334] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 19:42:46 compute-0 systemd[1]: Reloaded Network Manager.
Nov 24 19:42:46 compute-0 sudo[53607]: pam_unix(sudo:session): session closed for user root
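The systemd task above reloads rather than restarts NetworkManager, so active connections stay up while configuration is re-read (the 'config: signal: SIGHUP' line records the effect). By hand, either of the following would do the same, the nmcli form assuming NM 1.22 or later:

    systemctl reload NetworkManager   # SIGHUP to the daemon, as logged above
    nmcli general reload conf         # narrower: re-read NetworkManager.conf only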
Nov 24 19:42:46 compute-0 sshd-session[45561]: Connection closed by 192.168.122.30 port 50196
Nov 24 19:42:46 compute-0 sshd-session[45558]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:42:46 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 19:42:46 compute-0 systemd[1]: session-10.scope: Consumed 51.448s CPU time.
Nov 24 19:42:46 compute-0 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Nov 24 19:42:46 compute-0 systemd-logind[795]: Removed session 10.
Nov 24 19:42:47 compute-0 sshd-session[53642]: Invalid user installer from 27.79.44.141 port 46436
Nov 24 19:42:51 compute-0 sshd-session[53646]: Accepted publickey for zuul from 192.168.122.30 port 36012 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:42:51 compute-0 systemd-logind[795]: New session 11 of user zuul.
Nov 24 19:42:51 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 24 19:42:51 compute-0 sshd-session[53646]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:42:52 compute-0 sshd-session[53642]: Connection closed by invalid user installer 27.79.44.141 port 46436 [preauth]
Nov 24 19:42:52 compute-0 python3.9[53799]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:42:53 compute-0 python3.9[53953]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:42:54 compute-0 python3.9[54147]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:42:55 compute-0 sshd-session[53649]: Connection closed by 192.168.122.30 port 36012
Nov 24 19:42:55 compute-0 sshd-session[53646]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:42:55 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 19:42:55 compute-0 systemd[1]: session-11.scope: Consumed 2.650s CPU time.
Nov 24 19:42:55 compute-0 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Nov 24 19:42:55 compute-0 systemd-logind[795]: Removed session 11.
Nov 24 19:42:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 19:43:00 compute-0 sshd-session[54175]: Accepted publickey for zuul from 192.168.122.30 port 33742 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:43:00 compute-0 systemd-logind[795]: New session 12 of user zuul.
Nov 24 19:43:00 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 24 19:43:00 compute-0 sshd-session[54175]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:43:01 compute-0 python3.9[54329]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:43:02 compute-0 python3.9[54483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:43:03 compute-0 sudo[54637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpjqvebmcafzphljwyxsidpiqzjrmvbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013383.394196-40-251368827445323/AnsiballZ_setup.py'
Nov 24 19:43:03 compute-0 sudo[54637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:04 compute-0 python3.9[54639]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:43:04 compute-0 sudo[54637]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:04 compute-0 sudo[54723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgzcisqnacudpogmoqxnomdyonwtcbfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013383.394196-40-251368827445323/AnsiballZ_dnf.py'
Nov 24 19:43:04 compute-0 sudo[54723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:05 compute-0 python3.9[54726]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:43:06 compute-0 sudo[54723]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:06 compute-0 sudo[54877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guttazrruhaeludstlhuejsuqjafjnbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013386.4495177-52-56538588890807/AnsiballZ_setup.py'
Nov 24 19:43:06 compute-0 sudo[54877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:07 compute-0 python3.9[54879]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:43:07 compute-0 sudo[54877]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:08 compute-0 sudo[55073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqajuiydomygebuhymmevxcnpscposri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013387.942541-63-116228789756133/AnsiballZ_file.py'
Nov 24 19:43:08 compute-0 sudo[55073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:08 compute-0 python3.9[55075]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:08 compute-0 sudo[55073]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:09 compute-0 sudo[55225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyvsbpxwwdfrnnfsfutcpbxejprowuua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013388.8742807-71-3707334669800/AnsiballZ_command.py'
Nov 24 19:43:09 compute-0 sudo[55225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:09 compute-0 python3.9[55227]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:43:09 compute-0 podman[55228]: 2025-11-24 19:43:09.748794613 +0000 UTC m=+0.062322708 system refresh
Nov 24 19:43:09 compute-0 sudo[55225]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:10 compute-0 sudo[55389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebaoztnhsvvkglzanddlpqicgvpnwlky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013390.025671-79-150669191488282/AnsiballZ_stat.py'
Nov 24 19:43:10 compute-0 sudo[55389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:43:10 compute-0 python3.9[55391]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:43:10 compute-0 sudo[55389]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:11 compute-0 sudo[55512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dppwidhqgqnidnnszxlywvoryapyiabb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013390.025671-79-150669191488282/AnsiballZ_copy.py'
Nov 24 19:43:11 compute-0 sudo[55512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:11 compute-0 python3.9[55514]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013390.025671-79-150669191488282/.source.json follow=False _original_basename=podman_network_config.j2 checksum=43264a442d8db8fc50f922212a5e279cca8c6bd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:11 compute-0 sudo[55512]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:12 compute-0 sudo[55664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-empanhhwsmzoebzuhouwtlctmrnuwwiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013391.8664796-94-189403549214154/AnsiballZ_stat.py'
Nov 24 19:43:12 compute-0 sudo[55664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:12 compute-0 python3.9[55666]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:43:12 compute-0 sudo[55664]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:12 compute-0 sudo[55787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfkjyyjvvehuaavlxkqhyaoqnlsrffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013391.8664796-94-189403549214154/AnsiballZ_copy.py'
Nov 24 19:43:12 compute-0 sudo[55787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:13 compute-0 python3.9[55789]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764013391.8664796-94-189403549214154/.source.conf follow=False _original_basename=registries.conf.j2 checksum=42cf6598d1501993a3f526c66e74463836466903 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:43:13 compute-0 sudo[55787]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:13 compute-0 sudo[55939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vctowgczlljxyktsyvdywwtcemppqepq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013393.4317343-110-3137314453464/AnsiballZ_ini_file.py'
Nov 24 19:43:13 compute-0 sudo[55939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:14 compute-0 python3.9[55941]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:43:14 compute-0 sudo[55939]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:14 compute-0 sudo[56091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqpfubkkkevelahxmqlbmopkmbfmqkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013394.3181932-110-128176642352064/AnsiballZ_ini_file.py'
Nov 24 19:43:14 compute-0 sudo[56091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:14 compute-0 python3.9[56093]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:43:14 compute-0 sudo[56091]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:15 compute-0 sudo[56243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdvqfuyyeywvirsnysuuutecqqstglr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013395.0774424-110-184620727729390/AnsiballZ_ini_file.py'
Nov 24 19:43:15 compute-0 sudo[56243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:15 compute-0 python3.9[56245]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:43:15 compute-0 sudo[56243]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:16 compute-0 sudo[56395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkkwzhpkwjnxcftjhpqyvqjbaamhbjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013395.7457337-110-11028665717974/AnsiballZ_ini_file.py'
Nov 24 19:43:16 compute-0 sudo[56395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:16 compute-0 python3.9[56397]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:43:16 compute-0 sudo[56395]: pam_unix(sudo:session): session closed for user root
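The four ini_file tasks above each set one key in /etc/containers/containers.conf. Their net effect should read roughly as follows (a sketch; keys already present in the file are left untouched by ini_file and are not shown):

    cat /etc/containers/containers.conf
    # [containers]
    # pids_limit = 4096
    #
    # [engine]
    # events_logger = "journald"
    # runtime = "crun"
    #
    # [network]
    # network_backend = "netavark"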
Nov 24 19:43:16 compute-0 sshd-session[54649]: Received disconnect from 14.63.196.175 port 33016:11: Bye Bye [preauth]
Nov 24 19:43:16 compute-0 sshd-session[54649]: Disconnected from authenticating user root 14.63.196.175 port 33016 [preauth]
Nov 24 19:43:17 compute-0 sudo[56547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjhfgcrkyjuxwrbcaqbvhaehiessqrnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013396.6197724-141-39369627556993/AnsiballZ_dnf.py'
Nov 24 19:43:17 compute-0 sudo[56547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:17 compute-0 python3.9[56549]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:43:18 compute-0 sudo[56547]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:19 compute-0 sudo[56700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhbhezyrwrcbucqobtxfcitzenlxwjpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013398.8423946-152-90522339428924/AnsiballZ_setup.py'
Nov 24 19:43:19 compute-0 sudo[56700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:19 compute-0 python3.9[56702]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:43:19 compute-0 sudo[56700]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:20 compute-0 sudo[56854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negiwkevibcwenyeryoofxbmlnwzhxbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013400.118235-160-212586112235250/AnsiballZ_stat.py'
Nov 24 19:43:20 compute-0 sudo[56854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:20 compute-0 python3.9[56856]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:43:20 compute-0 sudo[56854]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:21 compute-0 sudo[57006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhlxqybzweuafxojaekevnxlfncefsqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013400.988468-169-154071548180899/AnsiballZ_stat.py'
Nov 24 19:43:21 compute-0 sudo[57006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:21 compute-0 python3.9[57008]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:43:21 compute-0 sudo[57006]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:22 compute-0 sudo[57158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpyxjplahmnadancxviixoayhasqxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013401.836222-179-56437117024597/AnsiballZ_command.py'
Nov 24 19:43:22 compute-0 sudo[57158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:22 compute-0 python3.9[57160]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:43:22 compute-0 sudo[57158]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:23 compute-0 sudo[57311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dibbkljndgswohhkmymbyqzfqvekibwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013402.7927945-189-128419580689118/AnsiballZ_service_facts.py'
Nov 24 19:43:23 compute-0 sudo[57311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:23 compute-0 python3.9[57313]: ansible-service_facts Invoked
Nov 24 19:43:23 compute-0 network[57330]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 19:43:23 compute-0 network[57331]: 'network-scripts' will be removed from distribution in near future.
Nov 24 19:43:23 compute-0 network[57332]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 19:43:27 compute-0 sudo[57311]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:28 compute-0 sudo[57615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fesbkerypthklhxhjlhfaltmvwditfft ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764013408.2097583-204-78916353484359/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764013408.2097583-204-78916353484359/args'
Nov 24 19:43:28 compute-0 sudo[57615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:28 compute-0 sudo[57615]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:29 compute-0 sudo[57782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcyxwgikxmctukxolrjclbmhwepunytv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013409.0316114-215-5189509689808/AnsiballZ_dnf.py'
Nov 24 19:43:29 compute-0 sudo[57782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:29 compute-0 python3.9[57784]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:43:30 compute-0 sudo[57782]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:31 compute-0 sudo[57935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsnkxmthohxfiqpskqwosliufdxkaksk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013411.2963583-228-233417184295810/AnsiballZ_package_facts.py'
Nov 24 19:43:31 compute-0 sudo[57935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:32 compute-0 python3.9[57937]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 19:43:32 compute-0 sudo[57935]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:33 compute-0 sudo[58087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouavjfnrhkgrsjxzwxvizcopkzmjsjvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013412.9500122-238-74748726966529/AnsiballZ_stat.py'
Nov 24 19:43:33 compute-0 sudo[58087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:33 compute-0 python3.9[58089]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:43:33 compute-0 sudo[58087]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:34 compute-0 sudo[58212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrmuhjmyutdhpjkzhqksdxdpzaprkbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013412.9500122-238-74748726966529/AnsiballZ_copy.py'
Nov 24 19:43:34 compute-0 sudo[58212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:34 compute-0 python3.9[58214]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013412.9500122-238-74748726966529/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:34 compute-0 sudo[58212]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:34 compute-0 sudo[58366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmhrkajyjjkletbyfzektadpbaimmxfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013414.5005739-253-87205358771634/AnsiballZ_stat.py'
Nov 24 19:43:34 compute-0 sudo[58366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:35 compute-0 python3.9[58368]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:43:35 compute-0 sudo[58366]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:35 compute-0 sudo[58491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kttnnocqkoinegtgjqfrjbfntjlckxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013414.5005739-253-87205358771634/AnsiballZ_copy.py'
Nov 24 19:43:35 compute-0 sudo[58491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:35 compute-0 python3.9[58493]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013414.5005739-253-87205358771634/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:35 compute-0 sudo[58491]: pam_unix(sudo:session): session closed for user root
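
The chrony steps install the package and render two files from templates (the _original_basename values chrony.conf.j2 and chronyd.sysconfig.j2 show the sources), keeping backups of anything replaced. A sketch under those assumptions; the role's real task names and variables are not visible in the log:

    - name: Install chrony
      ansible.builtin.dnf:
        name: chrony
        state: present

    - name: Render /etc/chrony.conf
      ansible.builtin.template:
        src: chrony.conf.j2
        dest: /etc/chrony.conf
        mode: "0644"
        backup: true

    - name: Render /etc/sysconfig/chronyd
      ansible.builtin.template:
        src: chronyd.sysconfig.j2
        dest: /etc/sysconfig/chronyd
        mode: "0644"
        backup: true
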
Nov 24 19:43:37 compute-0 sudo[58645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqblldbwmouvtbmsdmrwxjcjqtdeerou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013416.1765578-274-174087442312450/AnsiballZ_lineinfile.py'
Nov 24 19:43:37 compute-0 sudo[58645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:37 compute-0 python3.9[58647]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:37 compute-0 sudo[58645]: pam_unix(sudo:session): session closed for user root
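
PEERNTP=no keeps the initscripts DHCP client hooks from feeding DHCP-supplied NTP servers to chronyd, so only the servers in /etc/chrony.conf are used. The logged lineinfile call maps directly onto:

    - name: Ignore NTP servers provided via DHCP
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true
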
Nov 24 19:43:38 compute-0 sudo[58799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrkybbcvwgoeacgphrtijkcrprrzzwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013417.871409-289-12575775633096/AnsiballZ_setup.py'
Nov 24 19:43:38 compute-0 sudo[58799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:38 compute-0 python3.9[58801]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:43:38 compute-0 sudo[58799]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:39 compute-0 sudo[58883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujomxkuytftfewhpyromknomnjqwqzuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013417.871409-289-12575775633096/AnsiballZ_systemd.py'
Nov 24 19:43:39 compute-0 sudo[58883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:39 compute-0 python3.9[58885]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:43:39 compute-0 sudo[58883]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:40 compute-0 sudo[59037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowtchaoecgwiqlfmseyrgxxlhfqtvhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013420.2735732-305-167545382420539/AnsiballZ_setup.py'
Nov 24 19:43:40 compute-0 sudo[59037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:40 compute-0 python3.9[59039]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:43:41 compute-0 sudo[59037]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:41 compute-0 sudo[59121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtbeclkpysvriglprtdoanhjstimzoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013420.2735732-305-167545382420539/AnsiballZ_systemd.py'
Nov 24 19:43:41 compute-0 sudo[59121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:41 compute-0 python3.9[59123]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:43:41 compute-0 chronyd[782]: chronyd exiting
Nov 24 19:43:41 compute-0 systemd[1]: Stopping NTP client/server...
Nov 24 19:43:41 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 19:43:41 compute-0 systemd[1]: Stopped NTP client/server.
Nov 24 19:43:41 compute-0 systemd[1]: Starting NTP client/server...
Nov 24 19:43:42 compute-0 chronyd[59132]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 19:43:42 compute-0 chronyd[59132]: Frequency -26.534 +/- 0.106 ppm read from /var/lib/chrony/drift
Nov 24 19:43:42 compute-0 chronyd[59132]: Loaded seccomp filter (level 2)
Nov 24 19:43:42 compute-0 systemd[1]: Started NTP client/server.
Nov 24 19:43:42 compute-0 sudo[59121]: pam_unix(sudo:session): session closed for user root
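
chronyd is enabled and started first, then restarted so the new configuration takes effect; the restart is visible above as chronyd 4.8 coming back up, reading the drift file and reloading its seccomp filter. Reconstructed:

    - name: Enable and start chronyd
      ansible.builtin.systemd:
        name: chronyd
        enabled: true
        state: started

    - name: Restart chronyd to apply the new configuration
      ansible.builtin.systemd:
        name: chronyd
        state: restarted
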
Nov 24 19:43:42 compute-0 sshd-session[54178]: Connection closed by 192.168.122.30 port 33742
Nov 24 19:43:42 compute-0 sshd-session[54175]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:43:42 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 19:43:42 compute-0 systemd[1]: session-12.scope: Consumed 30.857s CPU time.
Nov 24 19:43:42 compute-0 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Nov 24 19:43:42 compute-0 systemd-logind[795]: Removed session 12.
Nov 24 19:43:48 compute-0 sshd-session[59158]: Accepted publickey for zuul from 192.168.122.30 port 52168 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:43:48 compute-0 systemd-logind[795]: New session 13 of user zuul.
Nov 24 19:43:48 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 24 19:43:48 compute-0 sshd-session[59158]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:43:48 compute-0 sudo[59311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmvrrruaouzxorrydpyqyeyrxftwcqql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013428.3540435-22-127890643394268/AnsiballZ_file.py'
Nov 24 19:43:48 compute-0 sudo[59311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:49 compute-0 python3.9[59313]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:49 compute-0 sudo[59311]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:49 compute-0 sudo[59463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feefokhvubehlfezefphkcihypftvtbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013429.31176-34-136591867065311/AnsiballZ_stat.py'
Nov 24 19:43:49 compute-0 sudo[59463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:50 compute-0 python3.9[59465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:43:50 compute-0 sudo[59463]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:50 compute-0 sudo[59586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdtuacstduwvnbknyxrjnlcuooqjlxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013429.31176-34-136591867065311/AnsiballZ_copy.py'
Nov 24 19:43:50 compute-0 sudo[59586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:50 compute-0 python3.9[59588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013429.31176-34-136591867065311/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:50 compute-0 sudo[59586]: pam_unix(sudo:session): session closed for user root
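
This short session seeds the EDPM firewall drop-in directory; ceph-networks.yaml is one rules snippet rendered from firewall.yaml.j2 (its contents are not logged). Roughly:

    - name: Ensure the EDPM firewall snippet directory exists
      ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: root
        group: root
        mode: "0750"

    - name: Install the Ceph networks firewall snippet
      ansible.builtin.template:
        src: firewall.yaml.j2
        dest: /var/lib/edpm-config/firewall/ceph-networks.yaml
        mode: "0644"
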
Nov 24 19:43:51 compute-0 sshd-session[59161]: Connection closed by 192.168.122.30 port 52168
Nov 24 19:43:51 compute-0 sshd-session[59158]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:43:51 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 19:43:51 compute-0 systemd[1]: session-13.scope: Consumed 1.841s CPU time.
Nov 24 19:43:51 compute-0 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Nov 24 19:43:51 compute-0 systemd-logind[795]: Removed session 13.
Nov 24 19:43:56 compute-0 sshd-session[59613]: Accepted publickey for zuul from 192.168.122.30 port 57858 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:43:56 compute-0 systemd-logind[795]: New session 14 of user zuul.
Nov 24 19:43:56 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 24 19:43:56 compute-0 sshd-session[59613]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:43:57 compute-0 python3.9[59766]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:43:58 compute-0 sudo[59920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwdebqesoppvdosbzaycuvcszmmfncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013437.7735622-33-215232645083065/AnsiballZ_file.py'
Nov 24 19:43:58 compute-0 sudo[59920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:58 compute-0 python3.9[59922]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:43:58 compute-0 sudo[59920]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:59 compute-0 sudo[60097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstxydqtbpruevgcwqjmrcslmxozdwje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013438.6507704-41-78792998145549/AnsiballZ_stat.py'
Nov 24 19:43:59 compute-0 sudo[60097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:43:59 compute-0 python3.9[60099]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:43:59 compute-0 sudo[60097]: pam_unix(sudo:session): session closed for user root
Nov 24 19:43:59 compute-0 sshd-session[59947]: Invalid user squid from 27.79.44.141 port 41624
Nov 24 19:43:59 compute-0 sshd-session[59947]: Connection closed by invalid user squid 27.79.44.141 port 41624 [preauth]
Nov 24 19:43:59 compute-0 sudo[60220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsgdkvceirfgkfrqgryevvmrhvyedmgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013438.6507704-41-78792998145549/AnsiballZ_copy.py'
Nov 24 19:43:59 compute-0 sudo[60220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:00 compute-0 python3.9[60222]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764013438.6507704-41-78792998145549/.source.json _original_basename=.lsul3k0d follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:00 compute-0 sudo[60220]: pam_unix(sudo:session): session closed for user root
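
The copied auth.json checksum (bf21a9e8...) matches the sha1 of "{}", so the staged registry auth file appears to be an empty JSON object, i.e. no credentials. A sketch under that assumption:

    - name: Ensure root's containers config directory exists
      ansible.builtin.file:
        path: /root/.config/containers
        state: directory
        recurse: true
        owner: zuul
        group: zuul
        mode: "0770"

    - name: Stage the (empty) registry auth file
      ansible.builtin.copy:
        content: "{}"                    # assumption based on the logged sha1
        dest: /root/.config/containers/auth.json
        owner: zuul
        group: zuul
        mode: "0660"
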
Nov 24 19:44:00 compute-0 sudo[60372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvivuzjopgtphgfyxnafazoscrmgnibw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013440.5150409-64-193390272566092/AnsiballZ_stat.py'
Nov 24 19:44:00 compute-0 sudo[60372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:00 compute-0 python3.9[60374]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:00 compute-0 sudo[60372]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:01 compute-0 sudo[60495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyiqdfkfbeikxmaxtwjsruypmwpkvir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013440.5150409-64-193390272566092/AnsiballZ_copy.py'
Nov 24 19:44:01 compute-0 sudo[60495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:01 compute-0 python3.9[60497]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013440.5150409-64-193390272566092/.source _original_basename=.plwk2opn follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:01 compute-0 sudo[60495]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:02 compute-0 sudo[60647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmxpgcdxfjmundukoorgwzwgmqwkmqjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013441.7094915-80-56765429138351/AnsiballZ_file.py'
Nov 24 19:44:02 compute-0 sudo[60647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:02 compute-0 python3.9[60649]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:44:02 compute-0 sudo[60647]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:02 compute-0 sudo[60799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzglxlnsgpfaooxsddrptjsexdkqzeka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013442.4034495-88-20723523080044/AnsiballZ_stat.py'
Nov 24 19:44:02 compute-0 sudo[60799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:02 compute-0 python3.9[60801]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:02 compute-0 sudo[60799]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:03 compute-0 sudo[60922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexrwejvfttnwsubldlahptmjxarfjfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013442.4034495-88-20723523080044/AnsiballZ_copy.py'
Nov 24 19:44:03 compute-0 sudo[60922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:03 compute-0 python3.9[60924]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764013442.4034495-88-20723523080044/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:44:03 compute-0 sudo[60922]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:03 compute-0 sudo[61075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbibzafopnwghuxghsjzdofpbhafflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013443.6162145-88-27276681803244/AnsiballZ_stat.py'
Nov 24 19:44:03 compute-0 sudo[61075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:04 compute-0 python3.9[61077]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:04 compute-0 sudo[61075]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:04 compute-0 sudo[61198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muatlwkptgbekvojwtpulajjqswtmmhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013443.6162145-88-27276681803244/AnsiballZ_copy.py'
Nov 24 19:44:04 compute-0 sudo[61198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:04 compute-0 python3.9[61200]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764013443.6162145-88-27276681803244/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:44:04 compute-0 sudo[61198]: pam_unix(sudo:session): session closed for user root
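
Two EDPM helper executables land in /var/local/libexec, labeled container_file_t so they remain accessible from container contexts. The loop below is a compaction of the two separate copy invocations in the log:

    - name: Create /var/local/libexec with a container-accessible label
      ansible.builtin.file:
        path: /var/local/libexec
        state: directory
        recurse: true
        setype: container_file_t

    - name: Install the EDPM container helper scripts
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: "/var/local/libexec/{{ item }}"
        owner: root
        group: root
        mode: "0700"
        setype: container_file_t
      loop:
        - edpm-container-shutdown
        - edpm-start-podman-container
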
Nov 24 19:44:05 compute-0 sudo[61350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfudyewqdozdndbkpzobjpllgjhjyhkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013444.9476929-117-84909939570042/AnsiballZ_file.py'
Nov 24 19:44:05 compute-0 sudo[61350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:05 compute-0 python3.9[61352]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:05 compute-0 sudo[61350]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:06 compute-0 sudo[61502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtkvuenucjedfvipcybhxhhmlzthwud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013445.691827-125-121449120608576/AnsiballZ_stat.py'
Nov 24 19:44:06 compute-0 sudo[61502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:06 compute-0 python3.9[61504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:06 compute-0 sudo[61502]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:06 compute-0 sudo[61625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imzpncqylnnqfbonhjdlmsskcomrhxvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013445.691827-125-121449120608576/AnsiballZ_copy.py'
Nov 24 19:44:06 compute-0 sudo[61625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:06 compute-0 python3.9[61627]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013445.691827-125-121449120608576/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:06 compute-0 sudo[61625]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:07 compute-0 sudo[61777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nycnokshrbmppyqsxodskqisgkkftczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013447.0459256-140-263952624341776/AnsiballZ_stat.py'
Nov 24 19:44:07 compute-0 sudo[61777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:07 compute-0 python3.9[61779]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:07 compute-0 sudo[61777]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:08 compute-0 sudo[61900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bejhpcxcjfxemxvjzndpqgxjfwvbxfvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013447.0459256-140-263952624341776/AnsiballZ_copy.py'
Nov 24 19:44:08 compute-0 sudo[61900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:08 compute-0 python3.9[61902]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013447.0459256-140-263952624341776/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:08 compute-0 sudo[61900]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:09 compute-0 sudo[62052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iagnmhcltfhkagupsnjnjgpuupadexiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013448.397487-155-50819531899590/AnsiballZ_systemd.py'
Nov 24 19:44:09 compute-0 sudo[62052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:09 compute-0 python3.9[62054]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:44:09 compute-0 systemd[1]: Reloading.
Nov 24 19:44:09 compute-0 systemd-rc-local-generator[62075]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:44:09 compute-0 systemd-sysv-generator[62080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:44:09 compute-0 systemd[1]: Reloading.
Nov 24 19:44:09 compute-0 systemd-rc-local-generator[62122]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:44:09 compute-0 systemd-sysv-generator[62125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:44:09 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 19:44:09 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 19:44:09 compute-0 sudo[62052]: pam_unix(sudo:session): session closed for user root
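
The mode=420 logged for the preset directory is just YAML octal parsing: a bare 0644 in a playbook is read as the integer 420. A systemd unit plus a 91-*.preset are installed and the service brought up with a daemon reload, which also explains the paired "Reloading." lines above. Approximately:

    - name: Ensure the systemd preset directory exists
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: "0644"                     # logged as the decimal 420

    - name: Install the shutdown unit and its preset
      ansible.builtin.copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: "0644"
      loop:
        - src: edpm-container-shutdown-service
          dest: /etc/systemd/system/edpm-container-shutdown.service
        - src: 91-edpm-container-shutdown-preset
          dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset

    - name: Enable and start edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true
        enabled: true
        state: started
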
Nov 24 19:44:10 compute-0 sudo[62281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mblizucptxbsycwtwqvvuztlxqlfsbrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013450.0467303-163-16028273562445/AnsiballZ_stat.py'
Nov 24 19:44:10 compute-0 sudo[62281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:10 compute-0 python3.9[62283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:10 compute-0 sudo[62281]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:11 compute-0 sudo[62404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkfzdrcjcwmykvoucevfeaivikzmzqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013450.0467303-163-16028273562445/AnsiballZ_copy.py'
Nov 24 19:44:11 compute-0 sudo[62404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:11 compute-0 python3.9[62406]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013450.0467303-163-16028273562445/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:11 compute-0 sudo[62404]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:11 compute-0 sudo[62556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aztjdvjxwmvozpdsgbvqjvmgsgiejsrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013451.4117396-178-274503115073619/AnsiballZ_stat.py'
Nov 24 19:44:11 compute-0 sudo[62556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:11 compute-0 python3.9[62558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:11 compute-0 sudo[62556]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:12 compute-0 sudo[62679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isyanqrrtedlimvnsjnxrjrxlulyoovv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013451.4117396-178-274503115073619/AnsiballZ_copy.py'
Nov 24 19:44:12 compute-0 sudo[62679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:12 compute-0 python3.9[62681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013451.4117396-178-274503115073619/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:12 compute-0 sudo[62679]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:13 compute-0 sudo[62831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhldfibokuvxufegbjycgeylhwazmgfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013452.800009-193-204193682189550/AnsiballZ_systemd.py'
Nov 24 19:44:13 compute-0 sudo[62831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:13 compute-0 python3.9[62833]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:44:13 compute-0 systemd[1]: Reloading.
Nov 24 19:44:13 compute-0 systemd-rc-local-generator[62862]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:44:13 compute-0 systemd-sysv-generator[62865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:44:13 compute-0 systemd[1]: Reloading.
Nov 24 19:44:13 compute-0 systemd-rc-local-generator[62896]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:44:13 compute-0 systemd-sysv-generator[62901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:44:14 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 19:44:14 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 19:44:14 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 19:44:14 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 19:44:14 compute-0 sudo[62831]: pam_unix(sudo:session): session closed for user root
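
netns-placeholder repeats the same unit-plus-preset pattern; it appears to be a oneshot that prepares /run/netns (note the transient run-netns-placeholder.mount deactivating once the unit finishes). Only the activation step differs:

    - name: Enable and run the netns placeholder unit
      ansible.builtin.systemd:
        name: netns-placeholder
        daemon_reload: true
        enabled: true
        state: started
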
Nov 24 19:44:14 compute-0 python3.9[63059]: ansible-ansible.builtin.service_facts Invoked
Nov 24 19:44:14 compute-0 network[63076]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 19:44:14 compute-0 network[63077]: 'network-scripts' will be removed from distribution in near future.
Nov 24 19:44:14 compute-0 network[63078]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 19:44:19 compute-0 sudo[63338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzefujkoetjbtrjgukjdjlibkjuzqepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013458.7432425-209-176380984609751/AnsiballZ_systemd.py'
Nov 24 19:44:19 compute-0 sudo[63338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:19 compute-0 python3.9[63340]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:44:19 compute-0 systemd[1]: Reloading.
Nov 24 19:44:19 compute-0 systemd-sysv-generator[63373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:44:19 compute-0 systemd-rc-local-generator[63365]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:44:19 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 19:44:20 compute-0 iptables.init[63380]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 19:44:20 compute-0 iptables.init[63380]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 19:44:20 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 19:44:20 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 19:44:20 compute-0 sudo[63338]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:20 compute-0 sudo[63574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfwlshbybdzxiqlialfmuayybmveiyyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013460.4173925-209-83858419482117/AnsiballZ_systemd.py'
Nov 24 19:44:20 compute-0 sudo[63574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:21 compute-0 python3.9[63576]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:44:21 compute-0 sudo[63574]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:21 compute-0 sudo[63728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybynccklvfqndasbtsnqciiogwowsbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013461.3980248-225-221237756729898/AnsiballZ_systemd.py'
Nov 24 19:44:21 compute-0 sudo[63728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:22 compute-0 python3.9[63730]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:44:22 compute-0 systemd[1]: Reloading.
Nov 24 19:44:22 compute-0 systemd-rc-local-generator[63757]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:44:22 compute-0 systemd-sysv-generator[63760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:44:22 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 24 19:44:22 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 24 19:44:22 compute-0 sudo[63728]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:23 compute-0 sudo[63920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxvofkgeodmzclaznljlqgxcmfljthh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013462.6189704-233-231929410767420/AnsiballZ_command.py'
Nov 24 19:44:23 compute-0 sudo[63920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:23 compute-0 python3.9[63922]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:44:23 compute-0 sudo[63920]: pam_unix(sudo:session): session closed for user root
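
This block is the firewall backend switchover: the legacy iptables/ip6tables services are stopped and disabled (the iptables.init lines show chains reset to ACCEPT and rules flushed), nftables is enabled, and the kernel ruleset is flushed so EDPM can load its own tables from scratch. Equivalent tasks, with the loop as a compaction:

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable nftables
      ansible.builtin.systemd:
        name: nftables
        enabled: true
        state: started

    - name: Start from an empty ruleset
      ansible.builtin.command: nft flush ruleset
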
Nov 24 19:44:24 compute-0 sudo[64073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfjgxsnlvunvqmvdkicnntbvtjaihibd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013463.6954074-247-9863060745669/AnsiballZ_stat.py'
Nov 24 19:44:24 compute-0 sudo[64073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:24 compute-0 python3.9[64075]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:24 compute-0 sudo[64073]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:24 compute-0 sudo[64198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrdjoynizqlfrqhwjejyikirxinlacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013463.6954074-247-9863060745669/AnsiballZ_copy.py'
Nov 24 19:44:24 compute-0 sudo[64198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:24 compute-0 python3.9[64200]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013463.6954074-247-9863060745669/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:24 compute-0 sudo[64198]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:25 compute-0 sudo[64351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyuljfefbocrpvzudoqmikxgziivwlxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013465.0831025-262-136640939783016/AnsiballZ_systemd.py'
Nov 24 19:44:25 compute-0 sudo[64351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:25 compute-0 python3.9[64353]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:44:25 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 19:44:25 compute-0 sshd[1004]: Received SIGHUP; restarting.
Nov 24 19:44:25 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 19:44:25 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Nov 24 19:44:25 compute-0 sshd[1004]: Server listening on :: port 22.
Nov 24 19:44:25 compute-0 sudo[64351]: pam_unix(sudo:session): session closed for user root
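
The sshd_config replacement uses the validate hook, so a config that fails sshd -T never reaches /etc/ssh/sshd_config; the subsequent reload delivers SIGHUP, which sshd logs above as "Received SIGHUP; restarting." Sketched as a template task, inferred from the sshd_config_block.j2 basename:

    - name: Install sshd_config, validating before replacement
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload sshd
      ansible.builtin.systemd:
        name: sshd
        state: reloaded
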
Nov 24 19:44:26 compute-0 sudo[64507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-satzhgmsdtchufddesrrvwsyvfjsnmba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013466.0279675-270-152884074514070/AnsiballZ_file.py'
Nov 24 19:44:26 compute-0 sudo[64507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:26 compute-0 python3.9[64509]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:26 compute-0 sudo[64507]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:27 compute-0 sudo[64659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkkbkucrbzzldjoboxxwvtqsyyrlzsrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013466.6488936-278-190619036031747/AnsiballZ_stat.py'
Nov 24 19:44:27 compute-0 sudo[64659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:27 compute-0 python3.9[64661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:27 compute-0 sudo[64659]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:27 compute-0 sudo[64782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctsvyhaezcdeenqylvjecsbvwcwewxjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013466.6488936-278-190619036031747/AnsiballZ_copy.py'
Nov 24 19:44:27 compute-0 sudo[64782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:27 compute-0 python3.9[64784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013466.6488936-278-190619036031747/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:27 compute-0 sudo[64782]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:28 compute-0 sudo[64934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwufqwhisvcumqdqndirbkdqmxvvizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013468.0027835-296-243889341389346/AnsiballZ_timezone.py'
Nov 24 19:44:28 compute-0 sudo[64934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:28 compute-0 python3.9[64936]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 19:44:28 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 19:44:28 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 19:44:28 compute-0 sudo[64934]: pam_unix(sudo:session): session closed for user root
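
Setting the timezone starts systemd-timedated on demand, which is all the "Time & Date Service" lines are. The task itself is a single parameter:

    - name: Set the system timezone to UTC
      community.general.timezone:
        name: UTC
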
Nov 24 19:44:29 compute-0 sudo[65090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipypkgqmlvrsvchnhdtycrlvhyctdvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013469.1293988-305-151341324928851/AnsiballZ_file.py'
Nov 24 19:44:29 compute-0 sudo[65090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:29 compute-0 python3.9[65092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:29 compute-0 sudo[65090]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:30 compute-0 sudo[65242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegsxeefknyedtavlopkdkvysxfqgshq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013469.8077016-313-66014644567411/AnsiballZ_stat.py'
Nov 24 19:44:30 compute-0 sudo[65242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:30 compute-0 python3.9[65244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:30 compute-0 sudo[65242]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:30 compute-0 sudo[65365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihvxuzyqldkiiabitnvludalkiezvcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013469.8077016-313-66014644567411/AnsiballZ_copy.py'
Nov 24 19:44:30 compute-0 sudo[65365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:30 compute-0 python3.9[65367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013469.8077016-313-66014644567411/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:30 compute-0 sudo[65365]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:31 compute-0 sudo[65517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbpndabouvizbqvoasaayxswjyyebcrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013470.9837935-328-171634848950811/AnsiballZ_stat.py'
Nov 24 19:44:31 compute-0 sudo[65517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:31 compute-0 python3.9[65519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:31 compute-0 sudo[65517]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:31 compute-0 sudo[65640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjyhodwfriqalivxfbgrugebnmpighjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013470.9837935-328-171634848950811/AnsiballZ_copy.py'
Nov 24 19:44:31 compute-0 sudo[65640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:32 compute-0 python3.9[65642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764013470.9837935-328-171634848950811/.source.yaml _original_basename=.vryxzibr follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:32 compute-0 sudo[65640]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:32 compute-0 sudo[65792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntbalbdfllnwbczzbhhkydcotaorzdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013472.3175664-343-179273699118046/AnsiballZ_stat.py'
Nov 24 19:44:32 compute-0 sudo[65792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:32 compute-0 python3.9[65794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:32 compute-0 sudo[65792]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:33 compute-0 sudo[65915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxaxbfzvsbubyiabychhrdrcznnitsyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013472.3175664-343-179273699118046/AnsiballZ_copy.py'
Nov 24 19:44:33 compute-0 sudo[65915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:33 compute-0 python3.9[65917]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013472.3175664-343-179273699118046/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:33 compute-0 sudo[65915]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:34 compute-0 sudo[66069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yonvpqsudyqvnjejtfkdyzmxipsxwrcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013473.6858337-358-12228126512071/AnsiballZ_command.py'
Nov 24 19:44:34 compute-0 sudo[66069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:34 compute-0 python3.9[66071]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:44:34 compute-0 sudo[66069]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:34 compute-0 sudo[66222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcbkkruxkdrpifbquuemhihrjuwbwgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013474.609675-366-24755339945864/AnsiballZ_command.py'
Nov 24 19:44:34 compute-0 sudo[66222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:35 compute-0 python3.9[66224]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:44:35 compute-0 sudo[66222]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:35 compute-0 sudo[66375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypswfofowkqmjfkwohfwmuwtqkloovzi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764013475.4379854-374-259517299144811/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 19:44:35 compute-0 sudo[66375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:36 compute-0 python3[66377]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 19:44:36 compute-0 sudo[66375]: pam_unix(sudo:session): session closed for user root
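
The nftables sequence so far: load the iptables-compatibility ruleset from /etc/nftables/iptables.nft, snapshot the live ruleset as JSON, then hand the /var/lib/edpm-config/firewall snippets to edpm_nftables_from_files, apparently a custom module shipped with the EDPM Ansible plugins (note it runs under plain python3 rather than python3.9). Reconstructed; the register name is an assumption:

    - name: Load the iptables compatibility ruleset
      ansible.builtin.command: nft -f /etc/nftables/iptables.nft

    - name: Snapshot the live ruleset as JSON
      ansible.builtin.command: nft -j list ruleset
      register: ruleset_json             # hypothetical name
      changed_when: false

    - name: Aggregate the EDPM firewall snippets
      edpm_nftables_from_files:          # custom EDPM module, per the log
        src: /var/lib/edpm-config/firewall
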
Nov 24 19:44:37 compute-0 sudo[66527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvaigsmqusfhptfqszkqexoarbymlimf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013476.6813815-382-70308941732846/AnsiballZ_stat.py'
Nov 24 19:44:37 compute-0 sudo[66527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:37 compute-0 python3.9[66529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:37 compute-0 sudo[66527]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:37 compute-0 sshd-session[65963]: Invalid user ubnt from 27.79.44.141 port 51396
Nov 24 19:44:37 compute-0 sudo[66650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drhnvbhwoekzpsltqkbcdzhdoosmryif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013476.6813815-382-70308941732846/AnsiballZ_copy.py'
Nov 24 19:44:37 compute-0 sudo[66650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:37 compute-0 sshd-session[65963]: Connection closed by invalid user ubnt 27.79.44.141 port 51396 [preauth]
Nov 24 19:44:37 compute-0 python3.9[66652]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013476.6813815-382-70308941732846/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:37 compute-0 sudo[66650]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:38 compute-0 sudo[66802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqvjjjldoyrxcorvtcxqtcgozbaeilab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013478.1279507-397-67639363210480/AnsiballZ_stat.py'
Nov 24 19:44:38 compute-0 sudo[66802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:38 compute-0 python3.9[66804]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:38 compute-0 sudo[66802]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:39 compute-0 sudo[66925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcoxhmnppexekinmphfjcsvtgxyjuzve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013478.1279507-397-67639363210480/AnsiballZ_copy.py'
Nov 24 19:44:39 compute-0 sudo[66925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:39 compute-0 python3.9[66927]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013478.1279507-397-67639363210480/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:39 compute-0 sudo[66925]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:40 compute-0 sudo[67077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqfowzivnwmsonveobladrppvmdecjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013479.7899978-412-231954695312575/AnsiballZ_stat.py'
Nov 24 19:44:40 compute-0 sudo[67077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:40 compute-0 python3.9[67079]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:40 compute-0 sudo[67077]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:40 compute-0 sudo[67200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjmjscvsuifycjeojpdocznwdvpbsii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013479.7899978-412-231954695312575/AnsiballZ_copy.py'
Nov 24 19:44:40 compute-0 sudo[67200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:41 compute-0 python3.9[67202]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013479.7899978-412-231954695312575/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:41 compute-0 sudo[67200]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:41 compute-0 sudo[67352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqirbevjhzkxlxiabdtaifwbszltucms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013481.3341806-427-235585984798462/AnsiballZ_stat.py'
Nov 24 19:44:41 compute-0 sudo[67352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:41 compute-0 python3.9[67354]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:41 compute-0 sudo[67352]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:42 compute-0 sudo[67475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aykgrjgjpylakhcvumrzjnxdklxarwbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013481.3341806-427-235585984798462/AnsiballZ_copy.py'
Nov 24 19:44:42 compute-0 sudo[67475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:42 compute-0 python3.9[67477]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013481.3341806-427-235585984798462/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:42 compute-0 sudo[67475]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:43 compute-0 sudo[67627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aluykqtciuxrjspzlymsdwxisljgocym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013482.6945543-442-102396795510261/AnsiballZ_stat.py'
Nov 24 19:44:43 compute-0 sudo[67627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:43 compute-0 python3.9[67629]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:44:43 compute-0 sudo[67627]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:43 compute-0 sudo[67750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkxhvbsyqglzebcsiklpxnuzdyddpzqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013482.6945543-442-102396795510261/AnsiballZ_copy.py'
Nov 24 19:44:43 compute-0 sudo[67750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:44 compute-0 python3.9[67752]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764013482.6945543-442-102396795510261/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:44 compute-0 sudo[67750]: pam_unix(sudo:session): session closed for user root
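[editor's note] Each of the five stat/copy pairs above is the remote half of an ansible.builtin.template task: the action plugin stats the destination, renders the Jinja2 source on the controller, and copies the result only when the checksum differs. The _original_basename values identify the templates, so a loop like the following would generate the same sequence (a sketch; the real role may use separate tasks):

    - name: Render the EDPM nftables files
      ansible.builtin.template:
        src: "{{ item.src }}"
        dest: "/etc/nftables/{{ item.dest }}"
        owner: root
        group: root
        mode: "0600"
      loop:
        - { src: jump-chain.j2,  dest: edpm-jumps.nft }
        - { src: jump-chain.j2,  dest: edpm-update-jumps.nft }
        - { src: flush-chain.j2, dest: edpm-flushes.nft }
        - { src: chains.j2,      dest: edpm-chains.nft }
        - { src: ruleset.j2,     dest: edpm-rules.nft }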
Nov 24 19:44:44 compute-0 sudo[67902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcsmlroiekwornkfitnksmiykamwtsjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013484.2881494-457-222681758531429/AnsiballZ_file.py'
Nov 24 19:44:44 compute-0 sudo[67902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:44 compute-0 python3.9[67904]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:44 compute-0 sudo[67902]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:45 compute-0 sudo[68054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swcpckdiftcgnyatdjbwpqhwfdyofirs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013485.1327074-465-277902223643814/AnsiballZ_command.py'
Nov 24 19:44:45 compute-0 sudo[68054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:45 compute-0 python3.9[68056]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:44:45 compute-0 sudo[68054]: pam_unix(sudo:session): session closed for user root
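[editor's note] The check above concatenates the rendered files in load order and pipes them through `nft -c -f -`, which parses and validates the combined ruleset without touching the live tables. Expressed as a task (name hypothetical; the command is verbatim from the log):

    - name: Validate the assembled ruleset without applying it
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-chains.nft \
            /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft \
            /etc/nftables/edpm-jumps.nft | nft -c -f -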
Nov 24 19:44:46 compute-0 sudo[68213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kflgfqqujaxmdjftqqdoyddvfgkhttws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013486.002025-473-166796656434457/AnsiballZ_blockinfile.py'
Nov 24 19:44:46 compute-0 sudo[68213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:46 compute-0 python3.9[68215]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:46 compute-0 sudo[68213]: pam_unix(sudo:session): session closed for user root
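[editor's note] The blockinfile task makes the configuration persistent: it drops include directives into /etc/sysconfig/nftables.conf, which nftables.service reads at boot, and validates the edited file with `nft -c -f %s` before moving it into place. Reconstructed from the logged parameters (task name hypothetical):

    - name: Persist the EDPM includes in the nftables boot config
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"

Note that edpm-flushes.nft and edpm-update-jumps.nft are deliberately absent from the persisted block: they are only piped in during live reloads, as seen later in this log.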
Nov 24 19:44:47 compute-0 sudo[68366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgxwjuedeomhfteibkuuhxwmzbjowqed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013487.1783175-482-149201868069467/AnsiballZ_file.py'
Nov 24 19:44:47 compute-0 sudo[68366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:47 compute-0 python3.9[68368]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:47 compute-0 sudo[68366]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:48 compute-0 sudo[68518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sufvxhehmmwlwzafmzubjgngehntrrar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013487.94971-482-154651831014650/AnsiballZ_file.py'
Nov 24 19:44:48 compute-0 sudo[68518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:48 compute-0 python3.9[68520]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:48 compute-0 sudo[68518]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:49 compute-0 sudo[68670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzmkksqjeldbgundnxqirgdbowcpzqqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013488.7705977-497-46817010333250/AnsiballZ_mount.py'
Nov 24 19:44:49 compute-0 sudo[68670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:49 compute-0 python3.9[68672]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 19:44:49 compute-0 sudo[68670]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:50 compute-0 sudo[68823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeqyfxkndqgkqxqijdwbxmbgyrehvclr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013489.790554-497-209786729126062/AnsiballZ_mount.py'
Nov 24 19:44:50 compute-0 sudo[68823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:50 compute-0 python3.9[68825]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 19:44:50 compute-0 sudo[68823]: pam_unix(sudo:session): session closed for user root
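[editor's note] The four tasks above create two mount points owned by zuul:hugetlbfs and mount hugetlbfs on each with an explicit pagesize, making 1 GiB and 2 MiB hugepages available side by side; state=mounted also ensures a matching /etc/fstab entry. A sketch with the logged parameters folded into a loop (task names hypothetical):

    - name: Create hugepage mount points
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: directory
        owner: zuul
        group: hugetlbfs
        mode: "0775"
      loop:
        - { path: /dev/hugepages1G, pagesize: 1G }
        - { path: /dev/hugepages2M, pagesize: 2M }

    - name: Mount hugetlbfs with an explicit page size
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: none
        fstype: hugetlbfs
        opts: "pagesize={{ item.pagesize }}"
        state: mounted
      loop:
        - { path: /dev/hugepages1G, pagesize: 1G }
        - { path: /dev/hugepages2M, pagesize: 2M }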
Nov 24 19:44:50 compute-0 sshd-session[59616]: Connection closed by 192.168.122.30 port 57858
Nov 24 19:44:50 compute-0 sshd-session[59613]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:44:50 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 19:44:50 compute-0 systemd[1]: session-14.scope: Consumed 41.192s CPU time.
Nov 24 19:44:50 compute-0 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Nov 24 19:44:50 compute-0 systemd-logind[795]: Removed session 14.
Nov 24 19:44:55 compute-0 sshd-session[68851]: Accepted publickey for zuul from 192.168.122.30 port 53064 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:44:55 compute-0 systemd-logind[795]: New session 15 of user zuul.
Nov 24 19:44:55 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 24 19:44:55 compute-0 sshd-session[68851]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:44:56 compute-0 sudo[69004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcvxiazupczhghchowrgstbyxcdkvatx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013495.875942-16-108468580169141/AnsiballZ_tempfile.py'
Nov 24 19:44:56 compute-0 sudo[69004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:56 compute-0 python3.9[69006]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 19:44:56 compute-0 sudo[69004]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:57 compute-0 sudo[69156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aixfsxqnaocsrirnmrmkphnchzxnzyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013496.8471987-28-261124480727360/AnsiballZ_stat.py'
Nov 24 19:44:57 compute-0 sudo[69156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:57 compute-0 python3.9[69158]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:44:57 compute-0 sudo[69156]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:58 compute-0 sudo[69308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qldjoshfxbajfcepgprmniatkdpxswjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013497.7521336-38-12134664481230/AnsiballZ_setup.py'
Nov 24 19:44:58 compute-0 sudo[69308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:58 compute-0 python3.9[69310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:44:58 compute-0 sudo[69308]: pam_unix(sudo:session): session closed for user root
Nov 24 19:44:58 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 19:44:59 compute-0 sudo[69463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwlrytxlapfuyiikkousuqainknlgvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013498.998052-47-92958470358243/AnsiballZ_blockinfile.py'
Nov 24 19:44:59 compute-0 sudo[69463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:44:59 compute-0 python3.9[69465]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDz08cIhIJvEgXDwwGqJcUcccV13vZKm79Alj6fJP3mPS8+SwiNI2qAVhh0gh5ljYJD+o/0TOs+oZqmsC5hBhAO2ePN3HXhd28IAsAKLACa/ITk0kE++96j+0UiC4lw+9hb+48H8lKqPpNrF4uYg1DJ28srFtzLeR0FNjuaAz5045n1dGd+mMz75P/cAKwMKTlAklCc8V/Kug6mBm12mItgO4kd9XjLa6tSbZ5n9KuTW094j2RJFwUCXAoVEDXBI7CUAUMuKR8M3TriPeAeRsm38Do1qBf66tdb+5RzcVeOpDvLPe6oe6ys1AbYx1xOxF33s+YojUw3r94r7LUGviON0qiGkWmLBXAzWeE/KL/QI+tx7hSicZ1AnRFsCo4GAyLRAeyYhcStsMfKyEZkGLIqRoUaCvjUyOnIk8B1lLcUWnw11MeV2gBW9oSLHHN9vSQKePdKKvWvKyoHNrBECkye93MoYc8g9QPF+9a+gChshN/8DHBpQFG1PXhb3KMYM38=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRjViR+rENMWsp0rfw0jkB6UrpO4igMTnHnreNvRXh6
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGmuOeSvYdXKNZKhBs8YqKEpqCpD8Nk8aZY8F++/S1nbmdyIEMuIhp/lyVvyV1J7c6T45oEtqKedTy9KkwaDKNA=
                                             create=True mode=0644 path=/tmp/ansible.oxt99kyb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:44:59 compute-0 sudo[69463]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:00 compute-0 sudo[69615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpivzyzwhbcfebnwuyeklqnszhqqiisq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013499.8926797-55-52203077355952/AnsiballZ_command.py'
Nov 24 19:45:00 compute-0 sudo[69615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:00 compute-0 python3.9[69617]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.oxt99kyb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:00 compute-0 sudo[69615]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:01 compute-0 sudo[69769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iurlfzfdocfllvgmeamxnwmcrnuifird ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013500.719969-63-27886423027351/AnsiballZ_file.py'
Nov 24 19:45:01 compute-0 sudo[69769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:01 compute-0 python3.9[69771]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.oxt99kyb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:45:01 compute-0 sudo[69769]: pam_unix(sudo:session): session closed for user root
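[editor's note] This block distributes SSH host keys: a temp file is created, the node's public host keys (gathered by the setup call above) are written into it with blockinfile, the result is copied over /etc/ssh/ssh_known_hosts with a shell redirect, and the temp file is removed. A sketch of the final two steps, assuming the tempfile result was registered as known_hosts_tmp (a hypothetical name; the log only shows the resolved path /tmp/ansible.oxt99kyb):

    - name: Install the assembled known_hosts file
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the temporary file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent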
Nov 24 19:45:01 compute-0 sshd-session[68854]: Connection closed by 192.168.122.30 port 53064
Nov 24 19:45:01 compute-0 sshd-session[68851]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:45:01 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 19:45:01 compute-0 systemd[1]: session-15.scope: Consumed 3.854s CPU time.
Nov 24 19:45:01 compute-0 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Nov 24 19:45:01 compute-0 systemd-logind[795]: Removed session 15.
Nov 24 19:45:06 compute-0 sshd-session[69796]: Accepted publickey for zuul from 192.168.122.30 port 39984 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:45:06 compute-0 systemd-logind[795]: New session 16 of user zuul.
Nov 24 19:45:06 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 24 19:45:06 compute-0 sshd-session[69796]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:45:08 compute-0 python3.9[69949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:45:09 compute-0 sudo[70103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkldkzsbwmihvwecwvggbkdcerldlyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013508.4527776-32-122321284115458/AnsiballZ_systemd.py'
Nov 24 19:45:09 compute-0 sudo[70103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:09 compute-0 python3.9[70105]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 19:45:09 compute-0 sudo[70103]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:10 compute-0 sudo[70257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gksrxfgswigasrvqknlwnysrpmvhgthc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013509.7185729-40-217911185524387/AnsiballZ_systemd.py'
Nov 24 19:45:10 compute-0 sudo[70257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:10 compute-0 python3.9[70259]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:45:10 compute-0 sudo[70257]: pam_unix(sudo:session): session closed for user root
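[editor's note] Two systemd tasks then ensure sshd is enabled at boot and currently running; the module is invoked once per property, which is why two separate entries appear. An equivalent single task (sketch):

    - name: Ensure sshd is enabled and running
      ansible.builtin.systemd:
        name: sshd
        enabled: true
        state: started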
Nov 24 19:45:11 compute-0 sudo[70410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvgdipgzupxfqfxmwiroifvreeutrgwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013510.748444-49-171099674974482/AnsiballZ_command.py'
Nov 24 19:45:11 compute-0 sudo[70410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:11 compute-0 python3.9[70412]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:11 compute-0 sudo[70410]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:12 compute-0 sudo[70563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxcuybiwhybswzjefvredhbcysmptox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013511.6939435-57-233205381876007/AnsiballZ_stat.py'
Nov 24 19:45:12 compute-0 sudo[70563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:12 compute-0 python3.9[70565]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:45:12 compute-0 sudo[70563]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:12 compute-0 sudo[70717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbphmehauyhlicvarirutfydiuqgpzoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013512.6443846-65-48334233183325/AnsiballZ_command.py'
Nov 24 19:45:12 compute-0 sudo[70717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:13 compute-0 python3.9[70719]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:13 compute-0 sudo[70717]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:13 compute-0 sudo[70872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetxxaoyskchlgexpwsomrytujsjqhpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013513.3646367-73-118799513201570/AnsiballZ_file.py'
Nov 24 19:45:13 compute-0 sudo[70872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:14 compute-0 python3.9[70874]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:45:14 compute-0 sudo[70872]: pam_unix(sudo:session): session closed for user root
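[editor's note] This reload sequence is gated on the marker file touched earlier at 19:44:44 (/etc/nftables/edpm-rules.nft.changed): the chains file is loaded unconditionally, the stat checks whether the rules were re-rendered, only then are the flush/rules/update-jump files piped into `nft -f -`, and finally the marker is deleted. A sketch of the gating, with a hypothetical register name (the `when:` condition is inferred from the ordering, not visible in the log):

    - name: Check whether the rendered rules changed
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed

    - name: Reload the volatile part of the ruleset
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Clear the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent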
Nov 24 19:45:14 compute-0 sshd-session[69799]: Connection closed by 192.168.122.30 port 39984
Nov 24 19:45:14 compute-0 sshd-session[69796]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:45:14 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 19:45:14 compute-0 systemd[1]: session-16.scope: Consumed 5.147s CPU time.
Nov 24 19:45:14 compute-0 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Nov 24 19:45:14 compute-0 systemd-logind[795]: Removed session 16.
Nov 24 19:45:19 compute-0 sshd-session[70899]: Accepted publickey for zuul from 192.168.122.30 port 58506 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:45:19 compute-0 systemd-logind[795]: New session 17 of user zuul.
Nov 24 19:45:19 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 24 19:45:19 compute-0 sshd-session[70899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:45:20 compute-0 python3.9[71052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:45:21 compute-0 sudo[71206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbkmdvyaclwmohgtxqlgyyzkrdxtvfht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013521.4108307-34-280689424366937/AnsiballZ_setup.py'
Nov 24 19:45:21 compute-0 sudo[71206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:21 compute-0 python3.9[71208]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:45:22 compute-0 sudo[71206]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:22 compute-0 sudo[71290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djvclhbotndmmujeqoolxsiwwvvnlfmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013521.4108307-34-280689424366937/AnsiballZ_dnf.py'
Nov 24 19:45:22 compute-0 sudo[71290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:23 compute-0 python3.9[71292]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 19:45:24 compute-0 sudo[71290]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:25 compute-0 python3.9[71443]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
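[editor's note] yum-utils is installed first because it provides needs-restarting; `needs-restarting -r` reports via its exit code whether the running kernel or core services predate updated packages (0 = no reboot needed, 1 = reboot required), so a playbook typically tolerates both codes. A sketch with a hypothetical register name:

    - name: Check whether the node needs a reboot
      ansible.builtin.command: needs-restarting -r
      register: reboot_check
      changed_when: false
      failed_when: reboot_check.rc not in [0, 1]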
Nov 24 19:45:26 compute-0 python3.9[71594]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 19:45:27 compute-0 python3.9[71744]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:45:27 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 19:45:28 compute-0 python3.9[71895]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:45:28 compute-0 sshd-session[70902]: Connection closed by 192.168.122.30 port 58506
Nov 24 19:45:28 compute-0 sshd-session[70899]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:45:28 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 19:45:28 compute-0 systemd[1]: session-17.scope: Consumed 6.786s CPU time.
Nov 24 19:45:28 compute-0 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Nov 24 19:45:28 compute-0 systemd-logind[795]: Removed session 17.
Nov 24 19:45:36 compute-0 sshd-session[71920]: Accepted publickey for zuul from 38.102.83.75 port 47154 ssh2: RSA SHA256:7SvGaq0vO1tX0FCwphjOH0o+Hv96ctrv4u16VrRbmZ0
Nov 24 19:45:36 compute-0 systemd-logind[795]: New session 18 of user zuul.
Nov 24 19:45:36 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 24 19:45:36 compute-0 sshd-session[71920]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:45:37 compute-0 sudo[71996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjmnwsnnmgtgmamhqhobiotiyeubbfjl ; /usr/bin/python3'
Nov 24 19:45:37 compute-0 sudo[71996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:37 compute-0 useradd[72000]: new group: name=ceph-admin, GID=42478
Nov 24 19:45:37 compute-0 useradd[72000]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 24 19:45:37 compute-0 sudo[71996]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:37 compute-0 sudo[72082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxqytixghzjpbkknrybfmeldnjibsvct ; /usr/bin/python3'
Nov 24 19:45:37 compute-0 sudo[72082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:37 compute-0 sudo[72082]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:38 compute-0 sudo[72155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzjnsobnnedkccrrwvzwkqcmtyzhbix ; /usr/bin/python3'
Nov 24 19:45:38 compute-0 sudo[72155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:38 compute-0 sudo[72155]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:38 compute-0 sudo[72205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinjttkvmzwxwwkdfniljeywgzsmvcns ; /usr/bin/python3'
Nov 24 19:45:38 compute-0 sudo[72205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:39 compute-0 sudo[72205]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:39 compute-0 sudo[72231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yquzbrvesbwepbkobewemgccrdztqkdv ; /usr/bin/python3'
Nov 24 19:45:39 compute-0 sudo[72231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:39 compute-0 sudo[72231]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:39 compute-0 sudo[72257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hipuvvdthinqbrhqrzsmcmjjvfgymtns ; /usr/bin/python3'
Nov 24 19:45:39 compute-0 sudo[72257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:39 compute-0 sudo[72257]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:40 compute-0 sudo[72283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmovxafnjxxwmlwhyjhpscduewwnkmfs ; /usr/bin/python3'
Nov 24 19:45:40 compute-0 sudo[72283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:40 compute-0 sudo[72283]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:40 compute-0 sudo[72361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrxsrlerspchcdbtwfpeeqcxahwwmwg ; /usr/bin/python3'
Nov 24 19:45:40 compute-0 sudo[72361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:40 compute-0 sudo[72361]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:41 compute-0 sudo[72434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvyqzsktewfeosnvwdgoopwdeiuemycq ; /usr/bin/python3'
Nov 24 19:45:41 compute-0 sudo[72434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:41 compute-0 sudo[72434]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:41 compute-0 sudo[72536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udszoyvhnrpgurlvqfafbxtvadnutnbk ; /usr/bin/python3'
Nov 24 19:45:41 compute-0 sudo[72536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:41 compute-0 sudo[72536]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:42 compute-0 sudo[72609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvonvcrvmxbnwqcoxdsloiysikjdlrqw ; /usr/bin/python3'
Nov 24 19:45:42 compute-0 sudo[72609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:42 compute-0 sudo[72609]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:42 compute-0 sudo[72659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlinugxhkypyfwxvjjhaiqklexwcqupi ; /usr/bin/python3'
Nov 24 19:45:42 compute-0 sudo[72659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:42 compute-0 python3[72661]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:45:44 compute-0 sudo[72659]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:44 compute-0 sudo[72754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eagovvokzmlogevdlgxpfdyybcywnlyi ; /usr/bin/python3'
Nov 24 19:45:44 compute-0 sudo[72754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:44 compute-0 python3[72756]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 19:45:46 compute-0 sudo[72754]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:46 compute-0 sudo[72781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smdpcpsxopayutyfcmyiwtelboenbmey ; /usr/bin/python3'
Nov 24 19:45:46 compute-0 sudo[72781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:46 compute-0 python3[72783]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:45:46 compute-0 sudo[72781]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:46 compute-0 sudo[72807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isbacxvgichkodrynmbqseyaanvdkkje ; /usr/bin/python3'
Nov 24 19:45:46 compute-0 sudo[72807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:46 compute-0 python3[72809]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:46 compute-0 kernel: loop: module loaded
Nov 24 19:45:46 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 24 19:45:46 compute-0 sudo[72807]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:47 compute-0 sudo[72842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfanosehonutjmzseisqfgohlqddlids ; /usr/bin/python3'
Nov 24 19:45:47 compute-0 sudo[72842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:47 compute-0 python3[72844]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:47 compute-0 lvm[72847]: PV /dev/loop3 not used.
Nov 24 19:45:47 compute-0 lvm[72856]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 19:45:47 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 24 19:45:47 compute-0 sudo[72842]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:47 compute-0 lvm[72858]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 24 19:45:47 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
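[editor's note] Each Ceph OSD gets a loop-device-backed LVM volume: `dd ... bs=1 count=0 seek=20G` allocates a sparse 20 GiB file without writing any data, losetup attaches it, and pvcreate/vgcreate/lvcreate carve the whole device into a single logical volume (the kernel and lvm lines above show LVM autoactivation firing as a result). The commands are verbatim from the log; the same pattern repeats below for loop4/ceph_vg1 and loop5/ceph_vg2:

    - name: Create a sparse 20G backing file and attach it to /dev/loop3
      ansible.builtin.shell: |
        dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
        losetup /dev/loop3 /var/lib/ceph-osd-0.img
        lsblk

    - name: Build an LVM volume on the loop device
      ansible.builtin.shell: |
        pvcreate /dev/loop3
        vgcreate ceph_vg0 /dev/loop3
        lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
        lvs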
Nov 24 19:45:48 compute-0 sudo[72934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwggzzvlemphtcozmtchxsjhqhhsqrsq ; /usr/bin/python3'
Nov 24 19:45:48 compute-0 sudo[72934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:48 compute-0 python3[72936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:45:48 compute-0 sudo[72934]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:48 compute-0 sudo[73007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzszjzbdzixvqcejfttxvccrgrvkepn ; /usr/bin/python3'
Nov 24 19:45:48 compute-0 sudo[73007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:48 compute-0 python3[73009]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013547.8165221-37362-84999838960491/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:45:48 compute-0 sudo[73007]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:49 compute-0 sudo[73057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccjlcihzofiwlactigwxsmhcbddlbvcf ; /usr/bin/python3'
Nov 24 19:45:49 compute-0 sudo[73057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:49 compute-0 python3[73059]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:45:49 compute-0 systemd[1]: Reloading.
Nov 24 19:45:49 compute-0 systemd-rc-local-generator[73085]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:45:49 compute-0 systemd-sysv-generator[73091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:45:49 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 19:45:49 compute-0 bash[73099]: /dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)
Nov 24 19:45:49 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 19:45:49 compute-0 lvm[73100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 19:45:49 compute-0 lvm[73100]: VG ceph_vg0 finished
Nov 24 19:45:49 compute-0 sudo[73057]: pam_unix(sudo:session): session closed for user root
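[editor's note] Loop attachments do not survive a reboot, so a per-OSD systemd unit (ceph-osd-losetup-0.service, rendered from ceph-osd-losetup.service.j2) is installed and enabled to re-attach the device at boot; the bash output above appears to be the unit reporting the already-attached loop device. The unit's contents are not in the log, so only the enable/start step can be reconstructed:

    - name: Enable the loop re-attach unit at boot
      ansible.builtin.systemd:
        name: ceph-osd-losetup-0.service
        enabled: true
        state: started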
Nov 24 19:45:49 compute-0 sudo[73124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnrhjsmpoupgmuisltxudardfpvrput ; /usr/bin/python3'
Nov 24 19:45:49 compute-0 sudo[73124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:50 compute-0 python3[73126]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 19:45:51 compute-0 chronyd[59132]: Selected source 162.159.200.123 (pool.ntp.org)
Nov 24 19:45:51 compute-0 sudo[73124]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:51 compute-0 sudo[73151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usniwazxmfmihbajgolvlxbkvzidnvyx ; /usr/bin/python3'
Nov 24 19:45:51 compute-0 sudo[73151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:51 compute-0 python3[73153]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:45:51 compute-0 sudo[73151]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:51 compute-0 sudo[73177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myqvqmiydryjoxsxjwiuxnjiaadbazns ; /usr/bin/python3'
Nov 24 19:45:51 compute-0 sudo[73177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:52 compute-0 python3[73179]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:52 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 24 19:45:52 compute-0 sudo[73177]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:52 compute-0 sudo[73209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyujgrdjuzmcuanazotaeivyitfgaqrd ; /usr/bin/python3'
Nov 24 19:45:52 compute-0 sudo[73209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:52 compute-0 python3[73211]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:52 compute-0 lvm[73214]: PV /dev/loop4 not used.
Nov 24 19:45:52 compute-0 lvm[73223]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 19:45:52 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 24 19:45:52 compute-0 sudo[73209]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:52 compute-0 lvm[73225]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 24 19:45:52 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 24 19:45:53 compute-0 sudo[73301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udeyqkonsduyeoljhrbdcwlcmntmnsgv ; /usr/bin/python3'
Nov 24 19:45:53 compute-0 sudo[73301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:53 compute-0 python3[73303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:45:53 compute-0 sudo[73301]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:53 compute-0 sudo[73374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdzgokidpklminqwolfiwkgdguxoitp ; /usr/bin/python3'
Nov 24 19:45:53 compute-0 sudo[73374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:53 compute-0 python3[73376]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013552.9538488-37389-169932590864946/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:45:53 compute-0 sudo[73374]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:53 compute-0 sudo[73424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dihwdzzdgapgubhvgdvqcosssgsoxpqt ; /usr/bin/python3'
Nov 24 19:45:53 compute-0 sudo[73424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:54 compute-0 python3[73426]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:45:54 compute-0 systemd[1]: Reloading.
Nov 24 19:45:54 compute-0 systemd-rc-local-generator[73456]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:45:54 compute-0 systemd-sysv-generator[73459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:45:54 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 19:45:54 compute-0 bash[73466]: /dev/loop4: [64513]:4328005 (/var/lib/ceph-osd-1.img)
Nov 24 19:45:54 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 19:45:54 compute-0 lvm[73467]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 19:45:54 compute-0 lvm[73467]: VG ceph_vg1 finished
Nov 24 19:45:54 compute-0 sudo[73424]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:54 compute-0 sudo[73491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vysgqxcamhobzafpdvjjczkykmynyzsk ; /usr/bin/python3'
Nov 24 19:45:54 compute-0 sudo[73491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:55 compute-0 python3[73493]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 19:45:56 compute-0 sudo[73491]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:56 compute-0 sudo[73518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdbqhfozkckgofzzqpwzzioduzukstbm ; /usr/bin/python3'
Nov 24 19:45:56 compute-0 sudo[73518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:56 compute-0 python3[73520]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:45:56 compute-0 sudo[73518]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:56 compute-0 sudo[73544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcqisgmvkfxbgjapjirmaqlgzwrafyzi ; /usr/bin/python3'
Nov 24 19:45:56 compute-0 sudo[73544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:57 compute-0 python3[73546]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:57 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 24 19:45:57 compute-0 sudo[73544]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:57 compute-0 sudo[73576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyixleofteaymaldaqkstbevndifehzp ; /usr/bin/python3'
Nov 24 19:45:57 compute-0 sudo[73576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:57 compute-0 python3[73578]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:45:57 compute-0 lvm[73581]: PV /dev/loop5 not used.
Nov 24 19:45:57 compute-0 lvm[73583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 19:45:57 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 24 19:45:57 compute-0 lvm[73593]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 24 19:45:57 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 24 19:45:57 compute-0 sudo[73576]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:58 compute-0 sudo[73669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzqkuyzcniszwgazzynavlnzaonahzcg ; /usr/bin/python3'
Nov 24 19:45:58 compute-0 sudo[73669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:58 compute-0 python3[73671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:45:58 compute-0 sudo[73669]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:58 compute-0 sudo[73742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqlbentgistsgbfjcpcvzltyhdsuhcxc ; /usr/bin/python3'
Nov 24 19:45:58 compute-0 sudo[73742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:58 compute-0 python3[73744]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013557.9965408-37416-230508680151454/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:45:58 compute-0 sudo[73742]: pam_unix(sudo:session): session closed for user root
Nov 24 19:45:59 compute-0 sudo[73792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khnryzcxwtrqinafrcoafuiejqviccim ; /usr/bin/python3'
Nov 24 19:45:59 compute-0 sudo[73792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:45:59 compute-0 python3[73794]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:45:59 compute-0 systemd[1]: Reloading.
Nov 24 19:45:59 compute-0 systemd-rc-local-generator[73823]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:45:59 compute-0 systemd-sysv-generator[73828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:45:59 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 19:45:59 compute-0 bash[73834]: /dev/loop5: [64513]:4328036 (/var/lib/ceph-osd-2.img)
Nov 24 19:45:59 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 19:45:59 compute-0 lvm[73835]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 19:45:59 compute-0 lvm[73835]: VG ceph_vg2 finished
Nov 24 19:45:59 compute-0 sudo[73792]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:01 compute-0 python3[73861]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:46:02 compute-0 sshd-session[73836]: Invalid user config from 27.79.44.141 port 39012
Nov 24 19:46:04 compute-0 sudo[73952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqeleumokrjmqfcjkaaoecwjuixnhhha ; /usr/bin/python3'
Nov 24 19:46:04 compute-0 sudo[73952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:04 compute-0 python3[73954]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 19:46:04 compute-0 sshd-session[73836]: Connection closed by invalid user config 27.79.44.141 port 39012 [preauth]
Nov 24 19:46:05 compute-0 groupadd[73962]: group added to /etc/group: name=cephadm, GID=992
Nov 24 19:46:05 compute-0 groupadd[73962]: group added to /etc/gshadow: name=cephadm
Nov 24 19:46:05 compute-0 groupadd[73962]: new group: name=cephadm, GID=992
Nov 24 19:46:05 compute-0 useradd[73969]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 24 19:46:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 19:46:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 19:46:06 compute-0 sshd-session[73956]: Invalid user support from 27.79.44.141 port 39026
Nov 24 19:46:06 compute-0 sudo[73952]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 19:46:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 19:46:06 compute-0 systemd[1]: run-r3f4eaa66044d42feaa227c74840d6dfa.service: Deactivated successfully.
Nov 24 19:46:06 compute-0 sudo[74065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrzxrdodhhdxxtijjpldeatvfgdvabj ; /usr/bin/python3'
Nov 24 19:46:06 compute-0 sudo[74065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:06 compute-0 sshd-session[73956]: Connection closed by invalid user support 27.79.44.141 port 39026 [preauth]
Nov 24 19:46:06 compute-0 python3[74067]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:46:06 compute-0 sudo[74065]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:06 compute-0 sudo[74093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qswrcoqjzwbcurtjvrwyuvsnojtrghyr ; /usr/bin/python3'
Nov 24 19:46:06 compute-0 sudo[74093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:07 compute-0 python3[74095]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:07 compute-0 sudo[74093]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:07 compute-0 sudo[74158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtrfgrixqmtxylftcguwhjggpsohwhh ; /usr/bin/python3'
Nov 24 19:46:07 compute-0 sudo[74158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:08 compute-0 python3[74160]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:46:08 compute-0 sudo[74158]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:08 compute-0 sudo[74184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajsuotlvixbyalmgbqvxromaygbbimzz ; /usr/bin/python3'
Nov 24 19:46:08 compute-0 sudo[74184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:08 compute-0 python3[74186]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:46:08 compute-0 sudo[74184]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:08 compute-0 sudo[74262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypbxneulqwhlmkgfopowxkruxdydhpsa ; /usr/bin/python3'
Nov 24 19:46:08 compute-0 sudo[74262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:09 compute-0 python3[74264]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:46:09 compute-0 sudo[74262]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:09 compute-0 sudo[74335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iecfoivatyvznamepllcslwzsihqwhjg ; /usr/bin/python3'
Nov 24 19:46:09 compute-0 sudo[74335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:09 compute-0 python3[74337]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013568.749663-37563-82552936902424/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:46:09 compute-0 sudo[74335]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:10 compute-0 sudo[74437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohabtwoissblmktwyutffihmchkvurxr ; /usr/bin/python3'
Nov 24 19:46:10 compute-0 sudo[74437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:10 compute-0 python3[74439]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:46:10 compute-0 sudo[74437]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:10 compute-0 sudo[74510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqxqdoxniavqcrduphylbegyuygsbuyi ; /usr/bin/python3'
Nov 24 19:46:10 compute-0 sudo[74510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:10 compute-0 python3[74512]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013570.0462265-37581-170436963364255/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:46:10 compute-0 sudo[74510]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:11 compute-0 sudo[74560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpekcsjfnitqxbxazcbgqcwirmhttcww ; /usr/bin/python3'
Nov 24 19:46:11 compute-0 sudo[74560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:11 compute-0 python3[74562]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:46:11 compute-0 sudo[74560]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:11 compute-0 sudo[74588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pycrwbztvibczoxxjvriitekokpgqwod ; /usr/bin/python3'
Nov 24 19:46:11 compute-0 sudo[74588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:11 compute-0 python3[74590]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:46:11 compute-0 sudo[74588]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:12 compute-0 sudo[74616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdurakyuecrykviazzbstdjxqbqlcgey ; /usr/bin/python3'
Nov 24 19:46:12 compute-0 sudo[74616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:12 compute-0 python3[74618]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:46:12 compute-0 sudo[74616]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:12 compute-0 sudo[74644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yekhdtirzgerqnyrfymmjdqosacirwtr ; /usr/bin/python3'
Nov 24 19:46:12 compute-0 sudo[74644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:46:12 compute-0 python3[74646]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:12 compute-0 sshd-session[74662]: Accepted publickey for ceph-admin from 192.168.122.100 port 43286 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:46:12 compute-0 systemd-logind[795]: New session 19 of user ceph-admin.
Nov 24 19:46:12 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 19:46:12 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 19:46:13 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 19:46:13 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 19:46:13 compute-0 systemd[74666]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:46:13 compute-0 systemd[74666]: Queued start job for default target Main User Target.
Nov 24 19:46:13 compute-0 systemd[74666]: Created slice User Application Slice.
Nov 24 19:46:13 compute-0 systemd[74666]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 19:46:13 compute-0 systemd[74666]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 19:46:13 compute-0 systemd[74666]: Reached target Paths.
Nov 24 19:46:13 compute-0 systemd[74666]: Reached target Timers.
Nov 24 19:46:13 compute-0 systemd[74666]: Starting D-Bus User Message Bus Socket...
Nov 24 19:46:13 compute-0 systemd[74666]: Starting Create User's Volatile Files and Directories...
Nov 24 19:46:13 compute-0 systemd[74666]: Listening on D-Bus User Message Bus Socket.
Nov 24 19:46:13 compute-0 systemd[74666]: Reached target Sockets.
Nov 24 19:46:13 compute-0 systemd[74666]: Finished Create User's Volatile Files and Directories.
Nov 24 19:46:13 compute-0 systemd[74666]: Reached target Basic System.
Nov 24 19:46:13 compute-0 systemd[74666]: Reached target Main User Target.
Nov 24 19:46:13 compute-0 systemd[74666]: Startup finished in 170ms.
Nov 24 19:46:13 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 24 19:46:13 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 24 19:46:13 compute-0 sshd-session[74662]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:46:13 compute-0 sudo[74684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 24 19:46:13 compute-0 sudo[74684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:46:13 compute-0 sudo[74684]: pam_unix(sudo:session): session closed for user root
Nov 24 19:46:13 compute-0 sshd-session[74683]: Received disconnect from 192.168.122.100 port 43286:11: disconnected by user
Nov 24 19:46:13 compute-0 sshd-session[74683]: Disconnected from user ceph-admin 192.168.122.100 port 43286
Nov 24 19:46:13 compute-0 sshd-session[74662]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 19:46:13 compute-0 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Nov 24 19:46:13 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 19:46:13 compute-0 systemd-logind[795]: Removed session 19.
Nov 24 19:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2297777212-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 19:46:23 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 19:46:23 compute-0 systemd[74666]: Activating special unit Exit the Session...
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped target Main User Target.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped target Basic System.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped target Paths.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped target Sockets.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped target Timers.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 19:46:23 compute-0 systemd[74666]: Closed D-Bus User Message Bus Socket.
Nov 24 19:46:23 compute-0 systemd[74666]: Stopped Create User's Volatile Files and Directories.
Nov 24 19:46:23 compute-0 systemd[74666]: Removed slice User Application Slice.
Nov 24 19:46:23 compute-0 systemd[74666]: Reached target Shutdown.
Nov 24 19:46:23 compute-0 systemd[74666]: Finished Exit the Session.
Nov 24 19:46:23 compute-0 systemd[74666]: Reached target Exit the Session.
Nov 24 19:46:23 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 19:46:23 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 19:46:23 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 19:46:23 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 19:46:23 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 19:46:23 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 19:46:23 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 19:46:26 compute-0 sshd-session[74781]: Connection closed by authenticating user root 27.79.44.141 port 57566 [preauth]
Nov 24 19:46:32 compute-0 podman[74721]: 2025-11-24 19:46:32.035841403 +0000 UTC m=+18.503549368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:32 compute-0 podman[74797]: 2025-11-24 19:46:32.104323675 +0000 UTC m=+0.030569093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:32 compute-0 podman[74797]: 2025-11-24 19:46:32.360572225 +0000 UTC m=+0.286817653 container create 551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c (image=quay.io/ceph/ceph:v18, name=epic_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:46:32 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 19:46:32 compute-0 systemd[1]: Started libpod-conmon-551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c.scope.
Nov 24 19:46:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:32 compute-0 podman[74797]: 2025-11-24 19:46:32.594870326 +0000 UTC m=+0.521115834 container init 551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c (image=quay.io/ceph/ceph:v18, name=epic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:32 compute-0 podman[74797]: 2025-11-24 19:46:32.610145446 +0000 UTC m=+0.536390874 container start 551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c (image=quay.io/ceph/ceph:v18, name=epic_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 19:46:32 compute-0 podman[74797]: 2025-11-24 19:46:32.614833853 +0000 UTC m=+0.541079331 container attach 551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c (image=quay.io/ceph/ceph:v18, name=epic_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:32 compute-0 epic_banach[74814]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 24 19:46:32 compute-0 systemd[1]: libpod-551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c.scope: Deactivated successfully.
Nov 24 19:46:32 compute-0 podman[74797]: 2025-11-24 19:46:32.945809722 +0000 UTC m=+0.872055130 container died 551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c (image=quay.io/ceph/ceph:v18, name=epic_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 19:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ab0a0c0764a2464997fd598a28fc45c343c3755b4ce038b104a53e816ee4b33-merged.mount: Deactivated successfully.
Nov 24 19:46:34 compute-0 podman[74797]: 2025-11-24 19:46:34.612563301 +0000 UTC m=+2.538808719 container remove 551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c (image=quay.io/ceph/ceph:v18, name=epic_banach, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:34 compute-0 systemd[1]: libpod-conmon-551fc24a61ea81995dacfb38fe3176ef8039f325ef28d0e6ee635ee24503844c.scope: Deactivated successfully.
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.726445094 +0000 UTC m=+0.076678444 container create ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583 (image=quay.io/ceph/ceph:v18, name=eloquent_black, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:34 compute-0 systemd[1]: Started libpod-conmon-ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583.scope.
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.695583654 +0000 UTC m=+0.045817064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.823319618 +0000 UTC m=+0.173553028 container init ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583 (image=quay.io/ceph/ceph:v18, name=eloquent_black, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.833305127 +0000 UTC m=+0.183538477 container start ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583 (image=quay.io/ceph/ceph:v18, name=eloquent_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.838173368 +0000 UTC m=+0.188406728 container attach ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583 (image=quay.io/ceph/ceph:v18, name=eloquent_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:34 compute-0 eloquent_black[74847]: 167 167
Nov 24 19:46:34 compute-0 systemd[1]: libpod-ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583.scope: Deactivated successfully.
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.841892188 +0000 UTC m=+0.192125568 container died ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583 (image=quay.io/ceph/ceph:v18, name=eloquent_black, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:46:34 compute-0 podman[74830]: 2025-11-24 19:46:34.890853874 +0000 UTC m=+0.241087224 container remove ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583 (image=quay.io/ceph/ceph:v18, name=eloquent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 24 19:46:34 compute-0 systemd[1]: libpod-conmon-ec22fa51a4eaac9cf7ecb2413caeaf68ff24f952520d46b78d51aaa63a787583.scope: Deactivated successfully.
Nov 24 19:46:34 compute-0 podman[74863]: 2025-11-24 19:46:34.980796483 +0000 UTC m=+0.056008608 container create 2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05 (image=quay.io/ceph/ceph:v18, name=gifted_hermann, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:35 compute-0 systemd[1]: Started libpod-conmon-2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05.scope.
Nov 24 19:46:35 compute-0 podman[74863]: 2025-11-24 19:46:34.960485377 +0000 UTC m=+0.035697442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:35 compute-0 podman[74863]: 2025-11-24 19:46:35.076877977 +0000 UTC m=+0.152090072 container init 2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05 (image=quay.io/ceph/ceph:v18, name=gifted_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:46:35 compute-0 podman[74863]: 2025-11-24 19:46:35.087123682 +0000 UTC m=+0.162335767 container start 2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05 (image=quay.io/ceph/ceph:v18, name=gifted_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:46:35 compute-0 podman[74863]: 2025-11-24 19:46:35.093287968 +0000 UTC m=+0.168500123 container attach 2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05 (image=quay.io/ceph/ceph:v18, name=gifted_hermann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:35 compute-0 gifted_hermann[74880]: AQAbtiRpV+RFBxAA8ffMjzglk+lGhJVJnD617g==
Nov 24 19:46:35 compute-0 systemd[1]: libpod-2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05.scope: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74863]: 2025-11-24 19:46:35.127418596 +0000 UTC m=+0.202630701 container died 2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05 (image=quay.io/ceph/ceph:v18, name=gifted_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 19:46:35 compute-0 podman[74863]: 2025-11-24 19:46:35.182703843 +0000 UTC m=+0.257915938 container remove 2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05 (image=quay.io/ceph/ceph:v18, name=gifted_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:46:35 compute-0 systemd[1]: libpod-conmon-2878073a5d575b6c9e183aaa0a7a70d6f902c5d2c0c347f4091361f3307d1e05.scope: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.267114162 +0000 UTC m=+0.057842826 container create 5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4 (image=quay.io/ceph/ceph:v18, name=nice_cartwright, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:35 compute-0 systemd[1]: Started libpod-conmon-5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4.scope.
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.237212018 +0000 UTC m=+0.027940732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.365094057 +0000 UTC m=+0.155822761 container init 5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4 (image=quay.io/ceph/ceph:v18, name=nice_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.375033604 +0000 UTC m=+0.165762268 container start 5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4 (image=quay.io/ceph/ceph:v18, name=nice_cartwright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.37934865 +0000 UTC m=+0.170077364 container attach 5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4 (image=quay.io/ceph/ceph:v18, name=nice_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:35 compute-0 nice_cartwright[74916]: AQAbtiRpnj6MGBAAlu82yiQVSm6pc1gVUpCTAA==
Nov 24 19:46:35 compute-0 systemd[1]: libpod-5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4.scope: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.41767572 +0000 UTC m=+0.208404444 container died 5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4 (image=quay.io/ceph/ceph:v18, name=nice_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:35 compute-0 podman[74900]: 2025-11-24 19:46:35.465481886 +0000 UTC m=+0.256210550 container remove 5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4 (image=quay.io/ceph/ceph:v18, name=nice_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:46:35 compute-0 systemd[1]: libpod-conmon-5d32c1070a33484d2b797f78ccce2379ba1662a2728a11914867ce6abeabdfb4.scope: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.564504339 +0000 UTC m=+0.058188746 container create f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda (image=quay.io/ceph/ceph:v18, name=compassionate_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:35 compute-0 systemd[1]: Started libpod-conmon-f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda.scope.
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.536559918 +0000 UTC m=+0.030244395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.659121344 +0000 UTC m=+0.152805741 container init f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda (image=quay.io/ceph/ceph:v18, name=compassionate_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.668845985 +0000 UTC m=+0.162530392 container start f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda (image=quay.io/ceph/ceph:v18, name=compassionate_chebyshev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.67499426 +0000 UTC m=+0.168678727 container attach f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda (image=quay.io/ceph/ceph:v18, name=compassionate_chebyshev, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:46:35 compute-0 compassionate_chebyshev[74952]: AQAbtiRpMrbuKRAAo9K15cCmTHndem98qgvTJw==
Nov 24 19:46:35 compute-0 systemd[1]: libpod-f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda.scope: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.708750177 +0000 UTC m=+0.202434594 container died f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda (image=quay.io/ceph/ceph:v18, name=compassionate_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-77156f0dab3723a13c3d53f6fd678a126697a7c57de24ee9a84b19313b44aef4-merged.mount: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74935]: 2025-11-24 19:46:35.763998533 +0000 UTC m=+0.257682950 container remove f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda (image=quay.io/ceph/ceph:v18, name=compassionate_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:35 compute-0 systemd[1]: libpod-conmon-f722edfb544cc2e99da86992dc4a7d6eca4309c719155a7532063e2097c0ffda.scope: Deactivated successfully.
Nov 24 19:46:35 compute-0 podman[74971]: 2025-11-24 19:46:35.836465712 +0000 UTC m=+0.050642603 container create b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1 (image=quay.io/ceph/ceph:v18, name=distracted_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:35 compute-0 systemd[1]: Started libpod-conmon-b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1.scope.
Nov 24 19:46:35 compute-0 podman[74971]: 2025-11-24 19:46:35.812737133 +0000 UTC m=+0.026914024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635fa434a68208681e8c330fba3b587aa33bf0f6a7de05ee52ab1960aa479eea/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:35 compute-0 podman[74971]: 2025-11-24 19:46:35.947827417 +0000 UTC m=+0.162004288 container init b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1 (image=quay.io/ceph/ceph:v18, name=distracted_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:35 compute-0 podman[74971]: 2025-11-24 19:46:35.957811735 +0000 UTC m=+0.171988646 container start b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1 (image=quay.io/ceph/ceph:v18, name=distracted_driscoll, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:46:35 compute-0 podman[74971]: 2025-11-24 19:46:35.962288465 +0000 UTC m=+0.176465356 container attach b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1 (image=quay.io/ceph/ceph:v18, name=distracted_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:36 compute-0 distracted_driscoll[74987]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 24 19:46:36 compute-0 distracted_driscoll[74987]: setting min_mon_release = pacific
Nov 24 19:46:36 compute-0 distracted_driscoll[74987]: /usr/bin/monmaptool: set fsid to 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:36 compute-0 distracted_driscoll[74987]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 24 19:46:36 compute-0 systemd[1]: libpod-b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1.scope: Deactivated successfully.
Nov 24 19:46:36 compute-0 podman[74971]: 2025-11-24 19:46:36.016721529 +0000 UTC m=+0.230898440 container died b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1 (image=quay.io/ceph/ceph:v18, name=distracted_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:46:36 compute-0 podman[74971]: 2025-11-24 19:46:36.077812972 +0000 UTC m=+0.291989883 container remove b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1 (image=quay.io/ceph/ceph:v18, name=distracted_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 19:46:36 compute-0 systemd[1]: libpod-conmon-b76e0b2de20b78f67fed8ddc7cee8868d19466c7c5f1a7fe890798a8b4237ab1.scope: Deactivated successfully.
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.175725545 +0000 UTC m=+0.067310231 container create 86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3 (image=quay.io/ceph/ceph:v18, name=laughing_sinoussi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 19:46:36 compute-0 systemd[1]: Started libpod-conmon-86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3.scope.
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.148695487 +0000 UTC m=+0.040280233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f728730e3077c2194bfaa684d8a18bc38e7aa8403b88d0f4a3578525dd3b9687/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f728730e3077c2194bfaa684d8a18bc38e7aa8403b88d0f4a3578525dd3b9687/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f728730e3077c2194bfaa684d8a18bc38e7aa8403b88d0f4a3578525dd3b9687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f728730e3077c2194bfaa684d8a18bc38e7aa8403b88d0f4a3578525dd3b9687/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.289776392 +0000 UTC m=+0.181361088 container init 86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3 (image=quay.io/ceph/ceph:v18, name=laughing_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.299119303 +0000 UTC m=+0.190703979 container start 86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3 (image=quay.io/ceph/ceph:v18, name=laughing_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.302991446 +0000 UTC m=+0.194576132 container attach 86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3 (image=quay.io/ceph/ceph:v18, name=laughing_sinoussi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 19:46:36 compute-0 systemd[1]: libpod-86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3.scope: Deactivated successfully.
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.396683046 +0000 UTC m=+0.288267732 container died 86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3 (image=quay.io/ceph/ceph:v18, name=laughing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:36 compute-0 podman[75006]: 2025-11-24 19:46:36.451939292 +0000 UTC m=+0.343523978 container remove 86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3 (image=quay.io/ceph/ceph:v18, name=laughing_sinoussi, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:46:36 compute-0 systemd[1]: libpod-conmon-86a730d86020ed545197e7e92332adce0f19f3db9a4447a362a6a4ec53c5ccb3.scope: Deactivated successfully.
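distracted_driscoll and laughing_sinoussi are cephadm's randomly named one-shot helper containers: each runs a single tool from quay.io/ceph/ceph:v18 and is removed as soon as it exits, which is why every helper produces the same create/start/attach/died/remove event sequence within a second. Judging from the bind mounts logged by the kernel (/tmp/monmap, /tmp/keyring, and the mon data directory), the second helper most likely ran the mon mkfs step against the freshly generated monmap. The same lifecycle can be watched live from the host:

    podman events --since 10m --filter image=quay.io/ceph/ceph:v18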
Nov 24 19:46:36 compute-0 systemd[1]: Reloading.
Nov 24 19:46:36 compute-0 systemd-rc-local-generator[75090]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:36 compute-0 systemd-sysv-generator[75093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:36 compute-0 systemd[1]: Reloading.
Nov 24 19:46:36 compute-0 systemd-rc-local-generator[75126]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:36 compute-0 systemd-sysv-generator[75129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:36 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 24 19:46:37 compute-0 systemd[1]: Reloading.
Nov 24 19:46:37 compute-0 systemd-rc-local-generator[75162]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:37 compute-0 systemd-sysv-generator[75168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:37 compute-0 systemd[1]: Reached target Ceph cluster 05e060a3-406b-57f0-89d2-ec35f5b09305.
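"All Ceph clusters and services" and "Ceph cluster <fsid>" are the ceph.target and ceph-<fsid>.target grouping units that cephadm installs so all daemons of a cluster can be started and stopped together; each "Reloading." line is a systemd daemon-reload issued after another unit file is written. Assuming the standard cephadm layout:

    systemctl status ceph.target
    systemctl status ceph-05e060a3-406b-57f0-89d2-ec35f5b09305.target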
Nov 24 19:46:37 compute-0 systemd[1]: Reloading.
Nov 24 19:46:37 compute-0 systemd-rc-local-generator[75201]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:37 compute-0 systemd-sysv-generator[75205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:37 compute-0 systemd[1]: Reloading.
Nov 24 19:46:37 compute-0 systemd-rc-local-generator[75242]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:37 compute-0 systemd-sysv-generator[75246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:37 compute-0 systemd[1]: Created slice Slice /system/ceph-05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:46:37 compute-0 systemd[1]: Reached target System Time Set.
Nov 24 19:46:37 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 24 19:46:37 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:38 compute-0 podman[75300]: 2025-11-24 19:46:38.152184752 +0000 UTC m=+0.057743594 container create c693d92b6e8b2f075d2b428624be8892c1c15311228f0b201931c8877aa8a455 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:46:38 compute-0 podman[75300]: 2025-11-24 19:46:38.122403571 +0000 UTC m=+0.027962453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eaf1019441a873a755c39bfa5633b533f3a48a4186517a26691c14024b8c95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eaf1019441a873a755c39bfa5633b533f3a48a4186517a26691c14024b8c95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eaf1019441a873a755c39bfa5633b533f3a48a4186517a26691c14024b8c95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eaf1019441a873a755c39bfa5633b533f3a48a4186517a26691c14024b8c95/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 podman[75300]: 2025-11-24 19:46:38.255949802 +0000 UTC m=+0.161508624 container init c693d92b6e8b2f075d2b428624be8892c1c15311228f0b201931c8877aa8a455 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 19:46:38 compute-0 podman[75300]: 2025-11-24 19:46:38.267946514 +0000 UTC m=+0.173505356 container start c693d92b6e8b2f075d2b428624be8892c1c15311228f0b201931c8877aa8a455 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:38 compute-0 bash[75300]: c693d92b6e8b2f075d2b428624be8892c1c15311228f0b201931c8877aa8a455
Nov 24 19:46:38 compute-0 systemd[1]: Started Ceph mon.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
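The unit started here follows cephadm's ceph-<fsid>@<daemon>.service naming, with the daemon's state kept under /var/lib/ceph/<fsid>/mon.compute-0/ on the host. To check it after bootstrap (unit name assumed from that convention):

    systemctl status ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mon.compute-0.service
    cephadm ls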
Nov 24 19:46:38 compute-0 ceph-mon[75320]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: pidfile_write: ignore empty --pid-file
Nov 24 19:46:38 compute-0 ceph-mon[75320]: load: jerasure load: lrc 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Git sha 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: DB SUMMARY
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: DB Session ID:  MS89I5HEJQ1R25WRLYH0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                                     Options.env: 0x557261509c40
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                                Options.info_log: 0x557261faee80
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                                 Options.wal_dir: 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                    Options.write_buffer_manager: 0x557261fbeb40
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                               Options.row_cache: None
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                              Options.wal_filter: None
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.wal_compression: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.max_background_jobs: 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Compression algorithms supported:
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kZSTD supported: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:           Options.merge_operator: 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:        Options.compaction_filter: None
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557261faea80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557261fa71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.compression: NoCompression
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.num_levels: 7
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7034352b-6130-4856-a956-9f7f793f6e65
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013598334101, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013598336351, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "MS89I5HEJQ1R25WRLYH0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013598336485, "job": 1, "event": "recovery_finished"}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557261fd0e00
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: DB pointer 0x5572620da000
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:46:38 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557261fa71f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
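The RocksDB banner above is printed each time the monitor opens its store at /var/lib/ceph/mon/ceph-compute-0/store.db; the non-default values visible in it (write_buffer_size=33554432, NoCompression, level_compaction_dynamic_level_bytes=1, the CompactOnDeletionCollector) come from Ceph's mon_rocksdb_options setting rather than from RocksDB itself. A sketch of inspecting and, with care, overriding that setting on a reachable cluster; a changed value only takes effect on the next mon restart:

    ceph config get mon mon_rocksdb_options
    ceph config set mon mon_rocksdb_options 'write_buffer_size=33554432,compression=kNoCompression'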
Nov 24 19:46:38 compute-0 ceph-mon[75320]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
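The mon binds both the msgr2 port (3300) and the legacy msgr1 port (6789) on 192.168.122.100. Once it is up, the two listeners should be visible from the host, e.g.:

    ss -tlnp '( sport = :3300 or sport = :6789 )'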
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@-1(???) e0 preinit fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 24 19:46:38 compute-0 ceph-mon[75320]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
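The "no unique device id for vda" warnings are expected on this KVM guest: the virtio disk exposes neither model nor serial, so the mon's device-metadata collection has nothing stable to key on (the same reason device_ids= is empty in the mgrc metadata below). They are harmless; the udev view can be confirmed with:

    udevadm info --query=property /dev/vda | grep -i -e ID_SERIAL -e ID_MODEL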
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 19:46:38 compute-0 ceph-mon[75320]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-24T19:46:36.342513Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).mds e1 new map
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
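The mdsmap print shows a brand-new cluster: no CephFS filesystems exist yet, which is why the fsmap log line that follows is empty. The matching client-side check:

    ceph fs ls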
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mkfs 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
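With osdmap e1 (0 OSDs) and no active mgr, the mon has finished its first-boot mkfs and holds a standalone quorum of one, as the election lines above show. Quorum and the monmap can be confirmed from any host with the admin keyring:

    ceph quorum_status --format json-pretty
    ceph mon dump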
Nov 24 19:46:38 compute-0 podman[75321]: 2025-11-24 19:46:38.397405255 +0000 UTC m=+0.073857457 container create 774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5 (image=quay.io/ceph/ceph:v18, name=wizardly_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 19:46:38 compute-0 systemd[1]: Started libpod-conmon-774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5.scope.
Nov 24 19:46:38 compute-0 podman[75321]: 2025-11-24 19:46:38.374335645 +0000 UTC m=+0.050787817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ea71fc27abed37f140f314abadb52ce9441b25f2f2cc495eb14422e7e56ed5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ea71fc27abed37f140f314abadb52ce9441b25f2f2cc495eb14422e7e56ed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ea71fc27abed37f140f314abadb52ce9441b25f2f2cc495eb14422e7e56ed5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:38 compute-0 podman[75321]: 2025-11-24 19:46:38.504093884 +0000 UTC m=+0.180546096 container init 774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5 (image=quay.io/ceph/ceph:v18, name=wizardly_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 19:46:38 compute-0 podman[75321]: 2025-11-24 19:46:38.515617554 +0000 UTC m=+0.192069746 container start 774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5 (image=quay.io/ceph/ceph:v18, name=wizardly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 19:46:38 compute-0 podman[75321]: 2025-11-24 19:46:38.520126435 +0000 UTC m=+0.196578637 container attach 774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5 (image=quay.io/ceph/ceph:v18, name=wizardly_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:46:38 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 19:46:38 compute-0 ceph-mon[75320]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258250704' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:   cluster:
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     id:     05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     health: HEALTH_OK
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:  
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:   services:
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     mon: 1 daemons, quorum compute-0 (age 0.547203s)
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     mgr: no daemons active
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     osd: 0 osds: 0 up, 0 in
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:  
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:   data:
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     pools:   0 pools, 0 pgs
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     objects: 0 objects, 0 B
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     usage:   0 B used, 0 B / 0 B avail
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:     pgs:     
Nov 24 19:46:38 compute-0 wizardly_sinoussi[75375]:  
Nov 24 19:46:38 compute-0 systemd[1]: libpod-774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5.scope: Deactivated successfully.
Nov 24 19:46:38 compute-0 podman[75321]: 2025-11-24 19:46:38.938946957 +0000 UTC m=+0.615399149 container died 774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5 (image=quay.io/ceph/ceph:v18, name=wizardly_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ea71fc27abed37f140f314abadb52ce9441b25f2f2cc495eb14422e7e56ed5-merged.mount: Deactivated successfully.
Nov 24 19:46:39 compute-0 podman[75321]: 2025-11-24 19:46:39.017093538 +0000 UTC m=+0.693545740 container remove 774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5 (image=quay.io/ceph/ceph:v18, name=wizardly_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:39 compute-0 systemd[1]: libpod-conmon-774a33506fb77798d586d94eca7f8a261fdaf2968702615e742a92e2da34ead5.scope: Deactivated successfully.
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.116285496 +0000 UTC m=+0.065186094 container create 668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729 (image=quay.io/ceph/ceph:v18, name=naughty_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:46:39 compute-0 systemd[1]: Started libpod-conmon-668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729.scope.
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.085750925 +0000 UTC m=+0.034651533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39595cceff7da3d3c0e9a1bcd7578a5d7ddad1914447b0f167a7983aaf1f07eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39595cceff7da3d3c0e9a1bcd7578a5d7ddad1914447b0f167a7983aaf1f07eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39595cceff7da3d3c0e9a1bcd7578a5d7ddad1914447b0f167a7983aaf1f07eb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39595cceff7da3d3c0e9a1bcd7578a5d7ddad1914447b0f167a7983aaf1f07eb/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.233844837 +0000 UTC m=+0.182745425 container init 668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729 (image=quay.io/ceph/ceph:v18, name=naughty_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.244149314 +0000 UTC m=+0.193049902 container start 668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729 (image=quay.io/ceph/ceph:v18, name=naughty_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.263406652 +0000 UTC m=+0.212307240 container attach 668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729 (image=quay.io/ceph/ceph:v18, name=naughty_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 19:46:39 compute-0 ceph-mon[75320]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 19:46:39 compute-0 ceph-mon[75320]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 19:46:39 compute-0 ceph-mon[75320]: fsmap 
Nov 24 19:46:39 compute-0 ceph-mon[75320]: osdmap e1: 0 total, 0 up, 0 in
Nov 24 19:46:39 compute-0 ceph-mon[75320]: mgrmap e1: no daemons active
Nov 24 19:46:39 compute-0 ceph-mon[75320]: from='client.? 192.168.122.100:0/3258250704' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 19:46:39 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 19:46:39 compute-0 ceph-mon[75320]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3458500420' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 19:46:39 compute-0 ceph-mon[75320]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3458500420' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 19:46:39 compute-0 naughty_rhodes[75431]: 
Nov 24 19:46:39 compute-0 naughty_rhodes[75431]: [global]
Nov 24 19:46:39 compute-0 naughty_rhodes[75431]:         fsid = 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:39 compute-0 naughty_rhodes[75431]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 24 19:46:39 compute-0 naughty_rhodes[75431]:         osd_crush_chooseleaf_type = 0
Nov 24 19:46:39 compute-0 systemd[1]: libpod-668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729.scope: Deactivated successfully.
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.667975331 +0000 UTC m=+0.616875899 container died 668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729 (image=quay.io/ceph/ceph:v18, name=naughty_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-39595cceff7da3d3c0e9a1bcd7578a5d7ddad1914447b0f167a7983aaf1f07eb-merged.mount: Deactivated successfully.
Nov 24 19:46:39 compute-0 podman[75415]: 2025-11-24 19:46:39.724555472 +0000 UTC m=+0.673456060 container remove 668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729 (image=quay.io/ceph/ceph:v18, name=naughty_rhodes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:39 compute-0 systemd[1]: libpod-conmon-668aeb58ed5446f1b25e08d46d189ab4a14e8e6cd70983cdb073e4b3b49ae729.scope: Deactivated successfully.
Nov 24 19:46:39 compute-0 podman[75469]: 2025-11-24 19:46:39.829218996 +0000 UTC m=+0.068298137 container create 018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554 (image=quay.io/ceph/ceph:v18, name=infallible_murdock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:39 compute-0 systemd[1]: Started libpod-conmon-018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554.scope.
Nov 24 19:46:39 compute-0 podman[75469]: 2025-11-24 19:46:39.798267204 +0000 UTC m=+0.037346405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9265697e33fded406eaa0b4ab2b0e7b743d6c86c2726ee6ef52fb127dec1e073/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9265697e33fded406eaa0b4ab2b0e7b743d6c86c2726ee6ef52fb127dec1e073/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9265697e33fded406eaa0b4ab2b0e7b743d6c86c2726ee6ef52fb127dec1e073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9265697e33fded406eaa0b4ab2b0e7b743d6c86c2726ee6ef52fb127dec1e073/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:39 compute-0 podman[75469]: 2025-11-24 19:46:39.928859736 +0000 UTC m=+0.167938927 container init 018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554 (image=quay.io/ceph/ceph:v18, name=infallible_murdock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:46:39 compute-0 podman[75469]: 2025-11-24 19:46:39.942666437 +0000 UTC m=+0.181745578 container start 018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554 (image=quay.io/ceph/ceph:v18, name=infallible_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:39 compute-0 podman[75469]: 2025-11-24 19:46:39.946290115 +0000 UTC m=+0.185369266 container attach 018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554 (image=quay.io/ceph/ceph:v18, name=infallible_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 19:46:40 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:46:40 compute-0 ceph-mon[75320]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677594328' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:46:40 compute-0 systemd[1]: libpod-018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554.scope: Deactivated successfully.
Nov 24 19:46:40 compute-0 podman[75469]: 2025-11-24 19:46:40.354853711 +0000 UTC m=+0.593932852 container died 018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554 (image=quay.io/ceph/ceph:v18, name=infallible_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 19:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9265697e33fded406eaa0b4ab2b0e7b743d6c86c2726ee6ef52fb127dec1e073-merged.mount: Deactivated successfully.
Nov 24 19:46:40 compute-0 ceph-mon[75320]: from='client.? 192.168.122.100:0/3458500420' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 19:46:40 compute-0 ceph-mon[75320]: from='client.? 192.168.122.100:0/3458500420' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 19:46:40 compute-0 ceph-mon[75320]: from='client.? 192.168.122.100:0/677594328' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:46:40 compute-0 podman[75469]: 2025-11-24 19:46:40.415516112 +0000 UTC m=+0.654595243 container remove 018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554 (image=quay.io/ceph/ceph:v18, name=infallible_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:40 compute-0 systemd[1]: libpod-conmon-018fb4b9c2aa05b89a104eada51f3821e020063b349b957e2bb0a1e270872554.scope: Deactivated successfully.
Nov 24 19:46:40 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:46:40 compute-0 ceph-mon[75320]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 19:46:40 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 19:46:40 compute-0 ceph-mon[75320]: mon.compute-0@0(leader) e1 shutdown
Nov 24 19:46:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75316]: 2025-11-24T19:46:40.740+0000 7f94f72ef640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 19:46:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75316]: 2025-11-24T19:46:40.740+0000 7f94f72ef640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 19:46:40 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 19:46:40 compute-0 ceph-mon[75320]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 19:46:40 compute-0 podman[75553]: 2025-11-24 19:46:40.863885339 +0000 UTC m=+0.188969893 container died c693d92b6e8b2f075d2b428624be8892c1c15311228f0b201931c8877aa8a455 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-94eaf1019441a873a755c39bfa5633b533f3a48a4186517a26691c14024b8c95-merged.mount: Deactivated successfully.
Nov 24 19:46:40 compute-0 podman[75553]: 2025-11-24 19:46:40.914892071 +0000 UTC m=+0.239976625 container remove c693d92b6e8b2f075d2b428624be8892c1c15311228f0b201931c8877aa8a455 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:40 compute-0 bash[75553]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0
Nov 24 19:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:41 compute-0 systemd[1]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mon.compute-0.service: Deactivated successfully.
Nov 24 19:46:41 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:46:41 compute-0 systemd[1]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mon.compute-0.service: Consumed 1.374s CPU time.
Nov 24 19:46:41 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 19:46:41 compute-0 podman[75657]: 2025-11-24 19:46:41.509401487 +0000 UTC m=+0.090478994 container create ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723c1f5a49b91f06debbc8449ee44a9ce14b5c1cfc8f78df4e82f83aba6b94f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723c1f5a49b91f06debbc8449ee44a9ce14b5c1cfc8f78df4e82f83aba6b94f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723c1f5a49b91f06debbc8449ee44a9ce14b5c1cfc8f78df4e82f83aba6b94f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723c1f5a49b91f06debbc8449ee44a9ce14b5c1cfc8f78df4e82f83aba6b94f1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 podman[75657]: 2025-11-24 19:46:41.582696477 +0000 UTC m=+0.163773984 container init ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:41 compute-0 podman[75657]: 2025-11-24 19:46:41.490312773 +0000 UTC m=+0.071390250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:41 compute-0 podman[75657]: 2025-11-24 19:46:41.593085567 +0000 UTC m=+0.174163074 container start ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:41 compute-0 bash[75657]: ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e
Nov 24 19:46:41 compute-0 systemd[1]: Started Ceph mon.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:46:41 compute-0 ceph-mon[75677]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: pidfile_write: ignore empty --pid-file
Nov 24 19:46:41 compute-0 ceph-mon[75677]: load: jerasure load: lrc 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Git sha 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: DB SUMMARY
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: DB Session ID:  5CV8W25MMEGW3WBPB1SJ
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                                     Options.env: 0x55b73d6ffc40
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                                Options.info_log: 0x55b73e0a7040
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                                 Options.wal_dir: 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                    Options.write_buffer_manager: 0x55b73e0b6b40
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                               Options.row_cache: None
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                              Options.wal_filter: None
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.wal_compression: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.max_background_jobs: 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Compression algorithms supported:
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kZSTD supported: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:           Options.merge_operator: 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:        Options.compaction_filter: None
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b73e0a6c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b73e09f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.compression: NoCompression
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.num_levels: 7
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7034352b-6130-4856-a956-9f7f793f6e65
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013601666747, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013601669561, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013601, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013601669716, "job": 1, "event": "recovery_finished"}
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b73e0c8e00
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: DB pointer 0x55b73e152000
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.73 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.73 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
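
Annotation: the mon's embedded RocksDB emits machine-readable EVENT_LOG_v1 records (the recovery_started, table_file_creation, and recovery_finished lines above) as JSON after a fixed prefix, so the WAL-recovery story can be reconstructed mechanically from a captured journal. A minimal sketch in Python, assuming a journalctl-style text dump like this one; the capture file name and the fields printed are illustrative, not part of any Ceph tooling:

    import json
    import re

    # Matches the JSON payload rocksdb appends after "EVENT_LOG_v1 ".
    EVENT_RE = re.compile(r"rocksdb: EVENT_LOG_v1 (\{.*\})$")

    def rocksdb_events(path):
        """Yield parsed EVENT_LOG_v1 dicts from a journalctl text dump."""
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    # Example: summarize the table files written during this recovery.
    for ev in rocksdb_events("compute-0-journal.txt"):  # hypothetical capture file
        if ev.get("event") == "table_file_creation":
            props = ev["table_properties"]
            print(ev["file_number"], ev["file_size"], props["num_entries"])
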
Nov 24 19:46:41 compute-0 ceph-mon[75677]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???) e1 preinit fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).mds e1 new map
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 19:46:41 compute-0 ceph-mon[75677]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 19:46:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 19:46:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 19:46:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
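
Annotation: win_standalone_election is the expected path for a one-mon bootstrap; with a single rank there is no peer to probe, so the mon elects itself and immediately logs quorum (ranks 0). A hedged sketch of how a wrapper script might confirm that state using the real `ceph quorum_status` command; running it assumes the default admin keyring and ceph.conf are in place, as they are on this host:

    import json
    import subprocess

    def quorum_names():
        """Return the list of mon names currently in quorum."""
        out = subprocess.run(
            ["ceph", "quorum_status", "--format", "json"],
            capture_output=True, check=True, text=True,
        ).stdout
        return json.loads(out)["quorum_names"]

    # Matches the election logged above: a single-mon quorum of compute-0.
    assert quorum_names() == ["compute-0"]
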
Nov 24 19:46:41 compute-0 podman[75678]: 2025-11-24 19:46:41.728238791 +0000 UTC m=+0.075734347 container create fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 19:46:41 compute-0 ceph-mon[75677]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 19:46:41 compute-0 ceph-mon[75677]: fsmap 
Nov 24 19:46:41 compute-0 ceph-mon[75677]: osdmap e1: 0 total, 0 up, 0 in
Nov 24 19:46:41 compute-0 ceph-mon[75677]: mgrmap e1: no daemons active
Nov 24 19:46:41 compute-0 systemd[1]: Started libpod-conmon-fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d.scope.
Nov 24 19:46:41 compute-0 podman[75678]: 2025-11-24 19:46:41.700111715 +0000 UTC m=+0.047607311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76d2514ea8f3d8dd3d767549a99b608048ceceb56e0edd4cf2d6511fdbfe98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76d2514ea8f3d8dd3d767549a99b608048ceceb56e0edd4cf2d6511fdbfe98/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76d2514ea8f3d8dd3d767549a99b608048ceceb56e0edd4cf2d6511fdbfe98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:41 compute-0 podman[75678]: 2025-11-24 19:46:41.852692907 +0000 UTC m=+0.200188493 container init fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 19:46:41 compute-0 podman[75678]: 2025-11-24 19:46:41.863691833 +0000 UTC m=+0.211187379 container start fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:46:41 compute-0 podman[75678]: 2025-11-24 19:46:41.867841835 +0000 UTC m=+0.215337381 container attach fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:46:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 24 19:46:42 compute-0 systemd[1]: libpod-fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d.scope: Deactivated successfully.
Nov 24 19:46:42 compute-0 podman[75678]: 2025-11-24 19:46:42.299178104 +0000 UTC m=+0.646673650 container died fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:46:42 compute-0 podman[75678]: 2025-11-24 19:46:42.358475158 +0000 UTC m=+0.705970674 container remove fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:42 compute-0 systemd[1]: libpod-conmon-fb59184d83809f5f4cfc3d98645350d7993751952e82de47a10aa51ca8122f7d.scope: Deactivated successfully.
Nov 24 19:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab76d2514ea8f3d8dd3d767549a99b608048ceceb56e0edd4cf2d6511fdbfe98-merged.mount: Deactivated successfully.
Nov 24 19:46:42 compute-0 podman[75771]: 2025-11-24 19:46:42.463400699 +0000 UTC m=+0.071115183 container create 61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938 (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 19:46:42 compute-0 systemd[1]: Started libpod-conmon-61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938.scope.
Nov 24 19:46:42 compute-0 podman[75771]: 2025-11-24 19:46:42.432944851 +0000 UTC m=+0.040659385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7e446105705a1db7df1df6d66da3cf5fd485bd7626975fd74e1ba2f0c0371/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7e446105705a1db7df1df6d66da3cf5fd485bd7626975fd74e1ba2f0c0371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7e446105705a1db7df1df6d66da3cf5fd485bd7626975fd74e1ba2f0c0371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:42 compute-0 podman[75771]: 2025-11-24 19:46:42.567517049 +0000 UTC m=+0.175231573 container init 61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938 (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 19:46:42 compute-0 podman[75771]: 2025-11-24 19:46:42.578435543 +0000 UTC m=+0.186150017 container start 61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938 (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:46:42 compute-0 podman[75771]: 2025-11-24 19:46:42.583020426 +0000 UTC m=+0.190734960 container attach 61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938 (image=quay.io/ceph/ceph:v18, name=quirky_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 19:46:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 24 19:46:42 compute-0 systemd[1]: libpod-61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938.scope: Deactivated successfully.
Nov 24 19:46:42 compute-0 podman[75771]: 2025-11-24 19:46:42.995846267 +0000 UTC m=+0.603560751 container died 61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938 (image=quay.io/ceph/ceph:v18, name=quirky_shockley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 19:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a7e446105705a1db7df1df6d66da3cf5fd485bd7626975fd74e1ba2f0c0371-merged.mount: Deactivated successfully.
Nov 24 19:46:43 compute-0 podman[75771]: 2025-11-24 19:46:43.056113897 +0000 UTC m=+0.663828381 container remove 61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938 (image=quay.io/ceph/ceph:v18, name=quirky_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:43 compute-0 systemd[1]: libpod-conmon-61fd8c22bf31f140a33b14af24e079712497af9cbed3871fd582a8399d770938.scope: Deactivated successfully.
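
Annotation: the two short-lived containers above (pedantic_bartik, quirky_shockley) each ran a single mon_command — `config set` for public_network and cluster_network — which is how cephadm seeds network options right after the first mon comes up. The values and the target config section are not echoed in the log, so both are placeholders below; a sketch of issuing the equivalent pair of commands from Python:

    import subprocess

    NETWORK = "192.168.122.0/24"  # placeholder; the real value is not shown in the log

    for opt in ("public_network", "cluster_network"):
        # Mirrors the mon_command([{prefix=config set, name=...}]) entries above;
        # the "global" section is an assumption, not read from the log.
        subprocess.run(["ceph", "config", "set", "global", opt, NETWORK], check=True)
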
Nov 24 19:46:43 compute-0 systemd[1]: Reloading.
Nov 24 19:46:43 compute-0 systemd-rc-local-generator[75853]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:43 compute-0 systemd-sysv-generator[75856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:43 compute-0 systemd[1]: Reloading.
Nov 24 19:46:43 compute-0 systemd-rc-local-generator[75897]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:46:43 compute-0 systemd-sysv-generator[75901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:46:43 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ofslrn for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:46:44 compute-0 podman[75955]: 2025-11-24 19:46:44.142330936 +0000 UTC m=+0.084639177 container create 68bd91a1ba694bdc34c40ad98c0145e3ffbf68f0de80447cec9aa8681028fe51 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:44 compute-0 podman[75955]: 2025-11-24 19:46:44.109232136 +0000 UTC m=+0.051540427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42463d69db195b9597f6085d071fb02d6d8f39109e13d23d8c286f9386d9d7f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42463d69db195b9597f6085d071fb02d6d8f39109e13d23d8c286f9386d9d7f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42463d69db195b9597f6085d071fb02d6d8f39109e13d23d8c286f9386d9d7f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42463d69db195b9597f6085d071fb02d6d8f39109e13d23d8c286f9386d9d7f0/merged/var/lib/ceph/mgr/ceph-compute-0.ofslrn supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 podman[75955]: 2025-11-24 19:46:44.226120399 +0000 UTC m=+0.168428690 container init 68bd91a1ba694bdc34c40ad98c0145e3ffbf68f0de80447cec9aa8681028fe51 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:44 compute-0 podman[75955]: 2025-11-24 19:46:44.242800828 +0000 UTC m=+0.185109059 container start 68bd91a1ba694bdc34c40ad98c0145e3ffbf68f0de80447cec9aa8681028fe51 (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:46:44 compute-0 bash[75955]: 68bd91a1ba694bdc34c40ad98c0145e3ffbf68f0de80447cec9aa8681028fe51
Nov 24 19:46:44 compute-0 systemd[1]: Started Ceph mgr.compute-0.ofslrn for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:46:44 compute-0 ceph-mgr[75975]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:46:44 compute-0 ceph-mgr[75975]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 19:46:44 compute-0 ceph-mgr[75975]: pidfile_write: ignore empty --pid-file
Nov 24 19:46:44 compute-0 podman[75976]: 2025-11-24 19:46:44.388744162 +0000 UTC m=+0.085445159 container create 04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184 (image=quay.io/ceph/ceph:v18, name=gallant_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:44 compute-0 systemd[1]: Started libpod-conmon-04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184.scope.
Nov 24 19:46:44 compute-0 podman[75976]: 2025-11-24 19:46:44.352203169 +0000 UTC m=+0.048904216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:44 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'alerts'
Nov 24 19:46:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8cd2141ccf6be8d959da121279081bf5a1134dc18c4f2068fbc4d371f0fdefc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8cd2141ccf6be8d959da121279081bf5a1134dc18c4f2068fbc4d371f0fdefc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8cd2141ccf6be8d959da121279081bf5a1134dc18c4f2068fbc4d371f0fdefc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:44 compute-0 podman[75976]: 2025-11-24 19:46:44.506764255 +0000 UTC m=+0.203465272 container init 04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184 (image=quay.io/ceph/ceph:v18, name=gallant_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 19:46:44 compute-0 podman[75976]: 2025-11-24 19:46:44.520532135 +0000 UTC m=+0.217233132 container start 04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184 (image=quay.io/ceph/ceph:v18, name=gallant_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:44 compute-0 podman[75976]: 2025-11-24 19:46:44.526112255 +0000 UTC m=+0.222813282 container attach 04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184 (image=quay.io/ceph/ceph:v18, name=gallant_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:46:44 compute-0 ceph-mgr[75975]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 19:46:44 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'balancer'
Nov 24 19:46:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:44.761+0000 7f3f27be9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
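
Annotation: the repeated "Module X has missing NOTIFY_TYPES member" lines are warnings, not failures. Newer mgr releases expect each Python module to declare which notification types it consumes and log a -1 line when the attribute is absent, as seen for alerts and balancer here. A hedged sketch of the declaration, assuming the reef-era mgr_module API; the module body is illustrative:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the "missing NOTIFY_TYPES member"
        # warning and tells the mgr which notify() events to deliver.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification", notify_type)
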
Nov 24 19:46:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:46:44 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460213341' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]: 
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]: {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "health": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "status": "HEALTH_OK",
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "checks": {},
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "mutes": []
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "election_epoch": 5,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "quorum": [
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         0
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     ],
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "quorum_names": [
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "compute-0"
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     ],
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "quorum_age": 3,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "monmap": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "epoch": 1,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "min_mon_release_name": "reef",
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_mons": 1
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "osdmap": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "epoch": 1,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_osds": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_up_osds": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "osd_up_since": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_in_osds": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "osd_in_since": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_remapped_pgs": 0
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "pgmap": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "pgs_by_state": [],
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_pgs": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_pools": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_objects": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "data_bytes": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "bytes_used": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "bytes_avail": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "bytes_total": 0
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "fsmap": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "epoch": 1,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "by_rank": [],
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "up:standby": 0
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "mgrmap": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "available": false,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "num_standbys": 0,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "modules": [
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:             "iostat",
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:             "nfs",
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:             "restful"
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         ],
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "services": {}
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "servicemap": {
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "epoch": 1,
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:         "services": {}
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     },
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]:     "progress_events": {}
Nov 24 19:46:44 compute-0 gallant_chatelet[76014]: }
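
Annotation: this status dump, and the near-identical ones at 19:46:47 and 19:46:50 below, are the bootstrap polling `ceph status --format json-pretty` while the mgr is still loading modules ("available": false, "num_standbys": 0); only quorum_age advances between them. A minimal sketch of that wait loop; the timeout and poll interval are arbitrary choices for the sketch, not cephadm's:

    import json
    import subprocess
    import time

    def mgr_available():
        """Read mgrmap.available from `ceph status`, as in the dumps above."""
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            capture_output=True, check=True, text=True,
        ).stdout
        return json.loads(out)["mgrmap"]["available"]

    deadline = time.time() + 120  # arbitrary timeout for the sketch
    while not mgr_available():
        if time.time() > deadline:
            raise TimeoutError("mgr never became available")
        time.sleep(3)  # the bootstrap re-checks every few seconds here
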
Nov 24 19:46:44 compute-0 systemd[1]: libpod-04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184.scope: Deactivated successfully.
Nov 24 19:46:44 compute-0 podman[75976]: 2025-11-24 19:46:44.956173129 +0000 UTC m=+0.652874166 container died 04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184 (image=quay.io/ceph/ceph:v18, name=gallant_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:44 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/460213341' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8cd2141ccf6be8d959da121279081bf5a1134dc18c4f2068fbc4d371f0fdefc-merged.mount: Deactivated successfully.
Nov 24 19:46:45 compute-0 podman[75976]: 2025-11-24 19:46:45.013919212 +0000 UTC m=+0.710620189 container remove 04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184 (image=quay.io/ceph/ceph:v18, name=gallant_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:46:45 compute-0 systemd[1]: libpod-conmon-04cd626103b7877c372540b24a5fb81fa640d84ba44cba510699258a35244184.scope: Deactivated successfully.
Nov 24 19:46:45 compute-0 ceph-mgr[75975]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 19:46:45 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'cephadm'
Nov 24 19:46:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:45.044+0000 7f3f27be9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 19:46:46 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'crash'
Nov 24 19:46:47 compute-0 podman[76066]: 2025-11-24 19:46:47.127612519 +0000 UTC m=+0.072209922 container create c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:46:47 compute-0 systemd[1]: Started libpod-conmon-c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de.scope.
Nov 24 19:46:47 compute-0 podman[76066]: 2025-11-24 19:46:47.099367569 +0000 UTC m=+0.043965012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:47 compute-0 ceph-mgr[75975]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 19:46:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:47.194+0000 7f3f27be9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 19:46:47 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'dashboard'
Nov 24 19:46:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac32a04e5bd4a57b201d4994d6d0289f31470102633f6d75c5defebc6278a76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac32a04e5bd4a57b201d4994d6d0289f31470102633f6d75c5defebc6278a76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac32a04e5bd4a57b201d4994d6d0289f31470102633f6d75c5defebc6278a76/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:47 compute-0 podman[76066]: 2025-11-24 19:46:47.224533735 +0000 UTC m=+0.169131108 container init c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de (image=quay.io/ceph/ceph:v18, name=relaxed_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 19:46:47 compute-0 podman[76066]: 2025-11-24 19:46:47.234297768 +0000 UTC m=+0.178895161 container start c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de (image=quay.io/ceph/ceph:v18, name=relaxed_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 19:46:47 compute-0 podman[76066]: 2025-11-24 19:46:47.238710766 +0000 UTC m=+0.183308149 container attach c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de (image=quay.io/ceph/ceph:v18, name=relaxed_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:46:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348680885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:47 compute-0 relaxed_bell[76083]: 
Nov 24 19:46:47 compute-0 relaxed_bell[76083]: {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "health": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "status": "HEALTH_OK",
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "checks": {},
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "mutes": []
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "election_epoch": 5,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "quorum": [
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         0
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     ],
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "quorum_names": [
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "compute-0"
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     ],
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "quorum_age": 5,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "monmap": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "epoch": 1,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "min_mon_release_name": "reef",
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_mons": 1
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "osdmap": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "epoch": 1,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_osds": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_up_osds": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "osd_up_since": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_in_osds": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "osd_in_since": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_remapped_pgs": 0
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "pgmap": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "pgs_by_state": [],
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_pgs": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_pools": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_objects": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "data_bytes": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "bytes_used": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "bytes_avail": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "bytes_total": 0
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "fsmap": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "epoch": 1,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "by_rank": [],
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "up:standby": 0
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "mgrmap": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "available": false,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "num_standbys": 0,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "modules": [
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:             "iostat",
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:             "nfs",
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:             "restful"
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         ],
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "services": {}
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "servicemap": {
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "epoch": 1,
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:         "services": {}
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     },
Nov 24 19:46:47 compute-0 relaxed_bell[76083]:     "progress_events": {}
Nov 24 19:46:47 compute-0 relaxed_bell[76083]: }
Nov 24 19:46:47 compute-0 systemd[1]: libpod-c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de.scope: Deactivated successfully.
Nov 24 19:46:47 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2348680885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:47 compute-0 podman[76109]: 2025-11-24 19:46:47.696834486 +0000 UTC m=+0.026705380 container died c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de (image=quay.io/ceph/ceph:v18, name=relaxed_bell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac32a04e5bd4a57b201d4994d6d0289f31470102633f6d75c5defebc6278a76-merged.mount: Deactivated successfully.
Nov 24 19:46:47 compute-0 podman[76109]: 2025-11-24 19:46:47.746272314 +0000 UTC m=+0.076143208 container remove c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de (image=quay.io/ceph/ceph:v18, name=relaxed_bell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:46:47 compute-0 systemd[1]: libpod-conmon-c042daca4806db78d612cfa1bd9b6e2d1e1c667fdfaf9d0aeaaedcb4d9b872de.scope: Deactivated successfully.
Nov 24 19:46:48 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'devicehealth'
Nov 24 19:46:48 compute-0 ceph-mgr[75975]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 19:46:48 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 19:46:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:48.710+0000 7f3f27be9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 19:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 19:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]:   from numpy import show_config as show_numpy_config
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'influx'
Nov 24 19:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:49.194+0000 7f3f27be9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'insights'
Nov 24 19:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:49.431+0000 7f3f27be9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'iostat'
Nov 24 19:46:49 compute-0 podman[76124]: 2025-11-24 19:46:49.863116296 +0000 UTC m=+0.071406541 container create 2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543 (image=quay.io/ceph/ceph:v18, name=lucid_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'k8sevents'
Nov 24 19:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:49.869+0000 7f3f27be9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 19:46:49 compute-0 systemd[1]: Started libpod-conmon-2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543.scope.
Nov 24 19:46:49 compute-0 podman[76124]: 2025-11-24 19:46:49.833569101 +0000 UTC m=+0.041859366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb24e6c5c3e703413fa2d4b50baab494fa9c8ba92dfa6f1a9fc398f053b4a8b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb24e6c5c3e703413fa2d4b50baab494fa9c8ba92dfa6f1a9fc398f053b4a8b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb24e6c5c3e703413fa2d4b50baab494fa9c8ba92dfa6f1a9fc398f053b4a8b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:49 compute-0 podman[76124]: 2025-11-24 19:46:49.957237398 +0000 UTC m=+0.165527643 container init 2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543 (image=quay.io/ceph/ceph:v18, name=lucid_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:46:49 compute-0 podman[76124]: 2025-11-24 19:46:49.969415115 +0000 UTC m=+0.177705350 container start 2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543 (image=quay.io/ceph/ceph:v18, name=lucid_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:49 compute-0 podman[76124]: 2025-11-24 19:46:49.973677049 +0000 UTC m=+0.181967284 container attach 2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543 (image=quay.io/ceph/ceph:v18, name=lucid_saha, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:46:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:46:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543782749' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:50 compute-0 lucid_saha[76141]: 
Nov 24 19:46:50 compute-0 lucid_saha[76141]: {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "health": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "status": "HEALTH_OK",
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "checks": {},
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "mutes": []
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "election_epoch": 5,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "quorum": [
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         0
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     ],
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "quorum_names": [
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "compute-0"
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     ],
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "quorum_age": 8,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "monmap": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "epoch": 1,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "min_mon_release_name": "reef",
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_mons": 1
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "osdmap": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "epoch": 1,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_osds": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_up_osds": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "osd_up_since": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_in_osds": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "osd_in_since": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_remapped_pgs": 0
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "pgmap": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "pgs_by_state": [],
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_pgs": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_pools": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_objects": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "data_bytes": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "bytes_used": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "bytes_avail": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "bytes_total": 0
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "fsmap": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "epoch": 1,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "by_rank": [],
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "up:standby": 0
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "mgrmap": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "available": false,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "num_standbys": 0,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "modules": [
Nov 24 19:46:50 compute-0 lucid_saha[76141]:             "iostat",
Nov 24 19:46:50 compute-0 lucid_saha[76141]:             "nfs",
Nov 24 19:46:50 compute-0 lucid_saha[76141]:             "restful"
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         ],
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "services": {}
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "servicemap": {
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "epoch": 1,
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:46:50 compute-0 lucid_saha[76141]:         "services": {}
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     },
Nov 24 19:46:50 compute-0 lucid_saha[76141]:     "progress_events": {}
Nov 24 19:46:50 compute-0 lucid_saha[76141]: }
Nov 24 19:46:50 compute-0 systemd[1]: libpod-2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543.scope: Deactivated successfully.
Nov 24 19:46:50 compute-0 podman[76124]: 2025-11-24 19:46:50.396578161 +0000 UTC m=+0.604868396 container died 2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543 (image=quay.io/ceph/ceph:v18, name=lucid_saha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb24e6c5c3e703413fa2d4b50baab494fa9c8ba92dfa6f1a9fc398f053b4a8b3-merged.mount: Deactivated successfully.
Nov 24 19:46:50 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1543782749' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:50 compute-0 podman[76124]: 2025-11-24 19:46:50.451787566 +0000 UTC m=+0.660077811 container remove 2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543 (image=quay.io/ceph/ceph:v18, name=lucid_saha, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:50 compute-0 systemd[1]: libpod-conmon-2a4ab174f34312b3169f8764748aa1be07b36d9588e43984e4efea3f46d5c543.scope: Deactivated successfully.
Nov 24 19:46:51 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'localpool'
Nov 24 19:46:51 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 19:46:52 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'mirroring'
Nov 24 19:46:52 compute-0 podman[76180]: 2025-11-24 19:46:52.541018745 +0000 UTC m=+0.061322270 container create a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064 (image=quay.io/ceph/ceph:v18, name=crazy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 19:46:52 compute-0 systemd[1]: Started libpod-conmon-a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064.scope.
Nov 24 19:46:52 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'nfs'
Nov 24 19:46:52 compute-0 podman[76180]: 2025-11-24 19:46:52.506884427 +0000 UTC m=+0.027187992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f01c04bbf5f975a9fe018177e6968cfd50e55fb877b5ff15ea94c7c1781266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f01c04bbf5f975a9fe018177e6968cfd50e55fb877b5ff15ea94c7c1781266/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f01c04bbf5f975a9fe018177e6968cfd50e55fb877b5ff15ea94c7c1781266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:52 compute-0 podman[76180]: 2025-11-24 19:46:52.639520594 +0000 UTC m=+0.159824159 container init a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064 (image=quay.io/ceph/ceph:v18, name=crazy_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:52 compute-0 podman[76180]: 2025-11-24 19:46:52.648296419 +0000 UTC m=+0.168599944 container start a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064 (image=quay.io/ceph/ceph:v18, name=crazy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 19:46:52 compute-0 podman[76180]: 2025-11-24 19:46:52.651811344 +0000 UTC m=+0.172114869 container attach a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064 (image=quay.io/ceph/ceph:v18, name=crazy_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:46:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543689427' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:53 compute-0 crazy_chaum[76195]: 
Nov 24 19:46:53 compute-0 crazy_chaum[76195]: {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "health": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "status": "HEALTH_OK",
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "checks": {},
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "mutes": []
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "election_epoch": 5,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "quorum": [
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         0
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     ],
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "quorum_names": [
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "compute-0"
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     ],
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "quorum_age": 11,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "monmap": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "epoch": 1,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "min_mon_release_name": "reef",
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_mons": 1
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "osdmap": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "epoch": 1,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_osds": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_up_osds": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "osd_up_since": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_in_osds": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "osd_in_since": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_remapped_pgs": 0
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "pgmap": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "pgs_by_state": [],
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_pgs": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_pools": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_objects": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "data_bytes": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "bytes_used": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "bytes_avail": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "bytes_total": 0
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "fsmap": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "epoch": 1,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "by_rank": [],
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "up:standby": 0
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "mgrmap": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "available": false,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "num_standbys": 0,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "modules": [
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:             "iostat",
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:             "nfs",
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:             "restful"
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         ],
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "services": {}
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "servicemap": {
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "epoch": 1,
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:         "services": {}
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     },
Nov 24 19:46:53 compute-0 crazy_chaum[76195]:     "progress_events": {}
Nov 24 19:46:53 compute-0 crazy_chaum[76195]: }
Nov 24 19:46:53 compute-0 systemd[1]: libpod-a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064.scope: Deactivated successfully.
Nov 24 19:46:53 compute-0 podman[76180]: 2025-11-24 19:46:53.065996121 +0000 UTC m=+0.586299636 container died a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064 (image=quay.io/ceph/ceph:v18, name=crazy_chaum, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:53 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1543689427' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f01c04bbf5f975a9fe018177e6968cfd50e55fb877b5ff15ea94c7c1781266-merged.mount: Deactivated successfully.
Nov 24 19:46:53 compute-0 podman[76180]: 2025-11-24 19:46:53.129177491 +0000 UTC m=+0.649481016 container remove a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064 (image=quay.io/ceph/ceph:v18, name=crazy_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 19:46:53 compute-0 systemd[1]: libpod-conmon-a2ed166074738e12e7ead94e977f6ebe12eae402eabe316221a80483e13c4064.scope: Deactivated successfully.
Nov 24 19:46:53 compute-0 ceph-mgr[75975]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 19:46:53 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'orchestrator'
Nov 24 19:46:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:53.217+0000 7f3f27be9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 19:46:53 compute-0 ceph-mgr[75975]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 19:46:53 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 19:46:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:53.812+0000 7f3f27be9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'osd_support'
Nov 24 19:46:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:54.053+0000 7f3f27be9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 19:46:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:54.265+0000 7f3f27be9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:54.509+0000 7f3f27be9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'progress'
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 19:46:54 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'prometheus'
Nov 24 19:46:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:54.724+0000 7f3f27be9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.229506288 +0000 UTC m=+0.064749383 container create 209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d (image=quay.io/ceph/ceph:v18, name=flamboyant_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:55 compute-0 systemd[1]: Started libpod-conmon-209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d.scope.
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.203023646 +0000 UTC m=+0.038266781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725088c55ece5a3ce5afb7097b7f145c4b2d460109c8e71867d1dc8450b16e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725088c55ece5a3ce5afb7097b7f145c4b2d460109c8e71867d1dc8450b16e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/725088c55ece5a3ce5afb7097b7f145c4b2d460109c8e71867d1dc8450b16e79/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.321724997 +0000 UTC m=+0.156968112 container init 209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d (image=quay.io/ceph/ceph:v18, name=flamboyant_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.331945922 +0000 UTC m=+0.167188987 container start 209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d (image=quay.io/ceph/ceph:v18, name=flamboyant_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.33558124 +0000 UTC m=+0.170824305 container attach 209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d (image=quay.io/ceph/ceph:v18, name=flamboyant_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:46:55 compute-0 ceph-mgr[75975]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 19:46:55 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'rbd_support'
Nov 24 19:46:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:55.619+0000 7f3f27be9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 19:46:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:46:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112313455' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]: 
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]: {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "health": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "status": "HEALTH_OK",
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "checks": {},
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "mutes": []
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "election_epoch": 5,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "quorum": [
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         0
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     ],
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "quorum_names": [
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "compute-0"
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     ],
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "quorum_age": 14,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "monmap": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "epoch": 1,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "min_mon_release_name": "reef",
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_mons": 1
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "osdmap": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "epoch": 1,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_osds": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_up_osds": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "osd_up_since": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_in_osds": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "osd_in_since": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_remapped_pgs": 0
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "pgmap": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "pgs_by_state": [],
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_pgs": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_pools": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_objects": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "data_bytes": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "bytes_used": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "bytes_avail": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "bytes_total": 0
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "fsmap": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "epoch": 1,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "by_rank": [],
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "up:standby": 0
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "mgrmap": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "available": false,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "num_standbys": 0,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "modules": [
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:             "iostat",
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:             "nfs",
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:             "restful"
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         ],
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "services": {}
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "servicemap": {
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "epoch": 1,
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:         "services": {}
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     },
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]:     "progress_events": {}
Nov 24 19:46:55 compute-0 flamboyant_wright[76253]: }
Nov 24 19:46:55 compute-0 systemd[1]: libpod-209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d.scope: Deactivated successfully.
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.753136358 +0000 UTC m=+0.588379453 container died 209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d (image=quay.io/ceph/ceph:v18, name=flamboyant_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 19:46:55 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3112313455' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-725088c55ece5a3ce5afb7097b7f145c4b2d460109c8e71867d1dc8450b16e79-merged.mount: Deactivated successfully.
Nov 24 19:46:55 compute-0 podman[76236]: 2025-11-24 19:46:55.818823595 +0000 UTC m=+0.654066690 container remove 209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d (image=quay.io/ceph/ceph:v18, name=flamboyant_wright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:46:55 compute-0 systemd[1]: libpod-conmon-209c3045ee6fec67a89b0f4de826503d2e5d224fc31fa265234d4696b35b508d.scope: Deactivated successfully.
Nov 24 19:46:55 compute-0 ceph-mgr[75975]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 19:46:55 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'restful'
Nov 24 19:46:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:55.888+0000 7f3f27be9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 19:46:56 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'rgw'
Nov 24 19:46:57 compute-0 ceph-mgr[75975]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 19:46:57 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'rook'
Nov 24 19:46:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:57.165+0000 7f3f27be9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 19:46:57 compute-0 podman[76293]: 2025-11-24 19:46:57.898235264 +0000 UTC m=+0.045221728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:46:58 compute-0 podman[76293]: 2025-11-24 19:46:58.06715731 +0000 UTC m=+0.214143764 container create 803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90 (image=quay.io/ceph/ceph:v18, name=stupefied_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:46:58 compute-0 systemd[1]: Started libpod-conmon-803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90.scope.
Nov 24 19:46:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1bcb5a71406058b9d99499fc2dd9f8a42202380b4d073a422d61471ee5e7f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1bcb5a71406058b9d99499fc2dd9f8a42202380b4d073a422d61471ee5e7f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1bcb5a71406058b9d99499fc2dd9f8a42202380b4d073a422d61471ee5e7f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:46:58 compute-0 podman[76293]: 2025-11-24 19:46:58.273772335 +0000 UTC m=+0.420758849 container init 803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90 (image=quay.io/ceph/ceph:v18, name=stupefied_sutherland, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:46:58 compute-0 podman[76293]: 2025-11-24 19:46:58.284040975 +0000 UTC m=+0.431027429 container start 803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90 (image=quay.io/ceph/ceph:v18, name=stupefied_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:46:58 compute-0 podman[76293]: 2025-11-24 19:46:58.342308875 +0000 UTC m=+0.489295319 container attach 803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90 (image=quay.io/ceph/ceph:v18, name=stupefied_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:46:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:46:58 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894732667' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]: 
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]: {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "health": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "status": "HEALTH_OK",
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "checks": {},
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "mutes": []
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "election_epoch": 5,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "quorum": [
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         0
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     ],
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "quorum_names": [
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "compute-0"
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     ],
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "quorum_age": 16,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "monmap": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "epoch": 1,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "min_mon_release_name": "reef",
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_mons": 1
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "osdmap": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "epoch": 1,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_osds": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_up_osds": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "osd_up_since": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_in_osds": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "osd_in_since": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_remapped_pgs": 0
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "pgmap": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "pgs_by_state": [],
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_pgs": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_pools": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_objects": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "data_bytes": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "bytes_used": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "bytes_avail": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "bytes_total": 0
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "fsmap": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "epoch": 1,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "by_rank": [],
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "up:standby": 0
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "mgrmap": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "available": false,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "num_standbys": 0,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "modules": [
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:             "iostat",
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:             "nfs",
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:             "restful"
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         ],
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "services": {}
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "servicemap": {
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "epoch": 1,
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:         "services": {}
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     },
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]:     "progress_events": {}
Nov 24 19:46:58 compute-0 stupefied_sutherland[76309]: }
Nov 24 19:46:58 compute-0 systemd[1]: libpod-803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90.scope: Deactivated successfully.
Nov 24 19:46:58 compute-0 podman[76335]: 2025-11-24 19:46:58.744298449 +0000 UTC m=+0.043617815 container died 803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90 (image=quay.io/ceph/ceph:v18, name=stupefied_sutherland, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 19:46:58 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1894732667' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd1bcb5a71406058b9d99499fc2dd9f8a42202380b4d073a422d61471ee5e7f1-merged.mount: Deactivated successfully.
Nov 24 19:46:58 compute-0 podman[76335]: 2025-11-24 19:46:58.899401473 +0000 UTC m=+0.198720819 container remove 803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90 (image=quay.io/ceph/ceph:v18, name=stupefied_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:46:58 compute-0 systemd[1]: libpod-conmon-803930a2ccd855b24610f254a6dc43c99490a2b3c53bfecb7b99cc88c5264d90.scope: Deactivated successfully.
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'selftest'
Nov 24 19:46:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:59.095+0000 7f3f27be9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'snap_schedule'
Nov 24 19:46:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:59.316+0000 7f3f27be9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'stats'
Nov 24 19:46:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:46:59.544+0000 7f3f27be9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 19:46:59 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'status'
Nov 24 19:47:00 compute-0 ceph-mgr[75975]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 19:47:00 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'telegraf'
Nov 24 19:47:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:00.007+0000 7f3f27be9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 19:47:00 compute-0 ceph-mgr[75975]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 19:47:00 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'telemetry'
Nov 24 19:47:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:00.223+0000 7f3f27be9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 19:47:00 compute-0 ceph-mgr[75975]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 19:47:00 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 19:47:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:00.769+0000 7f3f27be9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:01.020323092 +0000 UTC m=+0.076602283 container create 07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e (image=quay.io/ceph/ceph:v18, name=reverent_mendel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 19:47:01 compute-0 systemd[1]: Started libpod-conmon-07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e.scope.
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:00.983711951 +0000 UTC m=+0.039991192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc45a29dba9faa2f74f7435a9d4926088ee247593e7dfaeb0656d74bf37fe099/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc45a29dba9faa2f74f7435a9d4926088ee247593e7dfaeb0656d74bf37fe099/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc45a29dba9faa2f74f7435a9d4926088ee247593e7dfaeb0656d74bf37fe099/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:01.120750909 +0000 UTC m=+0.177030090 container init 07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e (image=quay.io/ceph/ceph:v18, name=reverent_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:01.130751471 +0000 UTC m=+0.187030662 container start 07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e (image=quay.io/ceph/ceph:v18, name=reverent_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:01.139706536 +0000 UTC m=+0.195985767 container attach 07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e (image=quay.io/ceph/ceph:v18, name=reverent_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:47:01 compute-0 ceph-mgr[75975]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 19:47:01 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'volumes'
Nov 24 19:47:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:01.378+0000 7f3f27be9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 19:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182112816' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:47:01 compute-0 reverent_mendel[76367]: 
Nov 24 19:47:01 compute-0 reverent_mendel[76367]: {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "health": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "status": "HEALTH_OK",
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "checks": {},
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "mutes": []
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "election_epoch": 5,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "quorum": [
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         0
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     ],
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "quorum_names": [
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "compute-0"
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     ],
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "quorum_age": 19,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "monmap": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "epoch": 1,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "min_mon_release_name": "reef",
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_mons": 1
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "osdmap": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "epoch": 1,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_osds": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_up_osds": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "osd_up_since": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_in_osds": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "osd_in_since": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_remapped_pgs": 0
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "pgmap": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "pgs_by_state": [],
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_pgs": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_pools": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_objects": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "data_bytes": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "bytes_used": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "bytes_avail": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "bytes_total": 0
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "fsmap": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "epoch": 1,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "by_rank": [],
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "up:standby": 0
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "mgrmap": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "available": false,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "num_standbys": 0,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "modules": [
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:             "iostat",
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:             "nfs",
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:             "restful"
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         ],
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "services": {}
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "servicemap": {
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "epoch": 1,
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:         "services": {}
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     },
Nov 24 19:47:01 compute-0 reverent_mendel[76367]:     "progress_events": {}
Nov 24 19:47:01 compute-0 reverent_mendel[76367]: }
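Note: reverent_mendel is a throwaway cephadm shell container whose only job was to run `ceph status --format json-pretty`; the mon_command dispatch above (client.admin from 192.168.122.100) and the JSON dump are the request and its response. The same call can be made programmatically with the python-rados bindings; a sketch, assuming /etc/ceph/ceph.conf and the admin keyring are readable on the host:

    import json
    import rados

    # Connect as client.admin using the local conf/keyring (assumed to exist).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Same request the container issued: {"prefix": "status", "format": "json"}
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "status", "format": "json"}), b'')
        status = json.loads(outbuf)
        print(status["health"]["status"], status["mgrmap"]["available"])
    finally:
        cluster.shutdown()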
Nov 24 19:47:01 compute-0 systemd[1]: libpod-07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e.scope: Deactivated successfully.
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:01.534151643 +0000 UTC m=+0.590430824 container died 07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e (image=quay.io/ceph/ceph:v18, name=reverent_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 19:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc45a29dba9faa2f74f7435a9d4926088ee247593e7dfaeb0656d74bf37fe099-merged.mount: Deactivated successfully.
Nov 24 19:47:01 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3182112816' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:47:01 compute-0 podman[76350]: 2025-11-24 19:47:01.593722597 +0000 UTC m=+0.650001778 container remove 07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e (image=quay.io/ceph/ceph:v18, name=reverent_mendel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:01 compute-0 systemd[1]: libpod-conmon-07c46b63805fc1373c39994f52e5a88179030f5ebd92c1470bd767b1a20f622e.scope: Deactivated successfully.
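Note: the create → init → start → attach → died → remove sequence, together with the libpod/conmon scope churn in systemd, is the normal footprint of a single one-shot `podman run --rm`: cephadm launches one container per CLI call and lets podman clean it up afterwards. A reduced reproduction (image tag taken from the log; the command run inside it is illustrative):

    import subprocess

    # One-shot container: podman emits the same create/start/attach/died/remove
    # journal events seen above, then removes the container because of --rm.
    result = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "ceph", "--version"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())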
Nov 24 19:47:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:02.028+0000 7f3f27be9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'zabbix'
Nov 24 19:47:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:02.248+0000 7f3f27be9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: ms_deliver_dispatch: unhandled message 0x564ef7d351e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ofslrn
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr handle_mgr_map Activating!
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr handle_mgr_map I am now activating
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ofslrn(active, starting, since 0.0132121s)
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ofslrn", "id": "compute-0.ofslrn"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ofslrn", "id": "compute-0.ofslrn"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: balancer
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [balancer INFO root] Starting
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: crash
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:47:02
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [balancer INFO root] No pools available
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Manager daemon compute-0.ofslrn is now available
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: devicehealth
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Starting
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: iostat
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: nfs
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: orchestrator
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: pg_autoscaler
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: progress
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [progress INFO root] Loading...
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [progress INFO root] No stored events to load
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [progress INFO root] Loaded [] historic events
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] recovery thread starting
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] starting setup
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: rbd_support
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: restful
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [restful WARNING root] server not running: no certificate configured
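Note: the restful module loads but refuses to bind port 8003 until a TLS certificate is stored, which is expected on a fresh bootstrap. The module documents a self-signed shortcut; roughly, and only if the REST API is actually wanted:

    import subprocess

    # Generate and store a self-signed certificate for the restful module
    # ("ceph restful create-self-signed-cert" is the documented helper).
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)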
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: status
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/mirror_snapshot_schedule"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/mirror_snapshot_schedule"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: telemetry
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] PerfHandler: starting
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TaskHandler: starting
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/trash_purge_schedule"} v 0) v1
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/trash_purge_schedule"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: [rbd_support INFO root] setup complete
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 24 19:47:02 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: volumes
Nov 24 19:47:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:02 compute-0 ceph-mon[75677]: Activating manager daemon compute-0.ofslrn
Nov 24 19:47:02 compute-0 ceph-mon[75677]: mgrmap e2: compute-0.ofslrn(active, starting, since 0.0132121s)
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ofslrn", "id": "compute-0.ofslrn"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: Manager daemon compute-0.ofslrn is now available
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/mirror_snapshot_schedule"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/trash_purge_schedule"}]: dispatch
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:02 compute-0 ceph-mon[75677]: from='mgr.14102 192.168.122.100:0/3570416740' entity='mgr.compute-0.ofslrn' 
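Note: two quirks above are intentional. Each cluster/audit entry appears twice because the mon both logs it locally (the log_channel(...) lines) and forwards it to the journal. And the three `config-key set` dispatches (telemetry report_id, salt, collection) end at entity=... with no cmd= payload because the mon appears to treat config-key values as sensitive and redacts them from the audit channel. Setting such a key programmatically uses the same mon_command interface as above, with the value carried in the command JSON (key and value here are hypothetical):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # The "val" field is accepted here but stripped from the audit log,
        # which is why the audit entries above show no cmd= payload.
        ret, outbuf, outs = cluster.mon_command(json.dumps({
            "prefix": "config-key set",
            "key": "mgr/telemetry/example_key",   # hypothetical key
            "val": "example-value",
        }), b'')
        assert ret == 0, outs
    finally:
        cluster.shutdown()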
Nov 24 19:47:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ofslrn(active, since 1.02923s)
Nov 24 19:47:03 compute-0 podman[76484]: 2025-11-24 19:47:03.69340147 +0000 UTC m=+0.066836726 container create c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147 (image=quay.io/ceph/ceph:v18, name=frosty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:47:03 compute-0 systemd[1]: Started libpod-conmon-c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147.scope.
Nov 24 19:47:03 compute-0 podman[76484]: 2025-11-24 19:47:03.667919981 +0000 UTC m=+0.041355277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec8ad73d15ca7ba9b2c267e2d90598e057d3c7f567465677fc1b111c19219f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec8ad73d15ca7ba9b2c267e2d90598e057d3c7f567465677fc1b111c19219f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec8ad73d15ca7ba9b2c267e2d90598e057d3c7f567465677fc1b111c19219f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
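Note: the kernel's "supports timestamps until 2038" lines fire each time podman bind-mounts a path from an XFS filesystem created without the bigtime feature; they are warnings, not errors. Whether bigtime is enabled can be checked from userspace, roughly as below (the mount point is an example; enabling bigtime with xfs_admin would require the filesystem to be unmounted):

    import subprocess

    # xfs_info prints a "bigtime=0|1" field in its meta-data section.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime enabled" if "bigtime=1" in info else "timestamps limited to 2038")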
Nov 24 19:47:03 compute-0 podman[76484]: 2025-11-24 19:47:03.795412868 +0000 UTC m=+0.168848474 container init c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147 (image=quay.io/ceph/ceph:v18, name=frosty_gould, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 19:47:03 compute-0 podman[76484]: 2025-11-24 19:47:03.804643571 +0000 UTC m=+0.178078817 container start c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147 (image=quay.io/ceph/ceph:v18, name=frosty_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 19:47:03 compute-0 podman[76484]: 2025-11-24 19:47:03.808824841 +0000 UTC m=+0.182260147 container attach c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147 (image=quay.io/ceph/ceph:v18, name=frosty_gould, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 19:47:04 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:04 compute-0 ceph-mon[75677]: mgrmap e3: compute-0.ofslrn(active, since 1.02923s)
Nov 24 19:47:04 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ofslrn(active, since 2s)
Nov 24 19:47:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 19:47:04 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949020472' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:47:04 compute-0 frosty_gould[76500]: 
Nov 24 19:47:04 compute-0 frosty_gould[76500]: {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "health": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "status": "HEALTH_OK",
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "checks": {},
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "mutes": []
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "election_epoch": 5,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "quorum": [
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         0
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     ],
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "quorum_names": [
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "compute-0"
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     ],
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "quorum_age": 22,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "monmap": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "epoch": 1,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "min_mon_release_name": "reef",
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_mons": 1
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "osdmap": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "epoch": 1,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_osds": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_up_osds": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "osd_up_since": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_in_osds": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "osd_in_since": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_remapped_pgs": 0
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "pgmap": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "pgs_by_state": [],
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_pgs": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_pools": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_objects": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "data_bytes": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "bytes_used": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "bytes_avail": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "bytes_total": 0
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "fsmap": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "epoch": 1,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "by_rank": [],
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "up:standby": 0
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "mgrmap": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "available": true,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "num_standbys": 0,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "modules": [
Nov 24 19:47:04 compute-0 frosty_gould[76500]:             "iostat",
Nov 24 19:47:04 compute-0 frosty_gould[76500]:             "nfs",
Nov 24 19:47:04 compute-0 frosty_gould[76500]:             "restful"
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         ],
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "services": {}
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "servicemap": {
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "epoch": 1,
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "modified": "2025-11-24T19:46:38.375320+0000",
Nov 24 19:47:04 compute-0 frosty_gould[76500]:         "services": {}
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     },
Nov 24 19:47:04 compute-0 frosty_gould[76500]:     "progress_events": {}
Nov 24 19:47:04 compute-0 frosty_gould[76500]: }
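Note: between this status dump and the earlier one the operative change is "mgrmap.available": false → true; the bootstrap simply polls `ceph status` from fresh containers until the first mgr reports in. A wait loop in the same spirit (timeout and poll interval are arbitrary):

    import json
    import subprocess
    import time

    def wait_for_mgr(timeout=60):
        """Poll `ceph status` until mgrmap.available goes true."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(["ceph", "status", "--format", "json"],
                                 capture_output=True, text=True, check=True).stdout
            if json.loads(out)["mgrmap"]["available"]:
                return True
            time.sleep(2)
        return False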
Nov 24 19:47:04 compute-0 systemd[1]: libpod-c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147.scope: Deactivated successfully.
Nov 24 19:47:04 compute-0 podman[76484]: 2025-11-24 19:47:04.453282062 +0000 UTC m=+0.826717318 container died c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147 (image=quay.io/ceph/ceph:v18, name=frosty_gould, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:47:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ec8ad73d15ca7ba9b2c267e2d90598e057d3c7f567465677fc1b111c19219f0-merged.mount: Deactivated successfully.
Nov 24 19:47:04 compute-0 podman[76484]: 2025-11-24 19:47:04.51337035 +0000 UTC m=+0.886805576 container remove c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147 (image=quay.io/ceph/ceph:v18, name=frosty_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 19:47:04 compute-0 systemd[1]: libpod-conmon-c6216de58b568152aa37d135199aef7604777becd2ed57965831686092008147.scope: Deactivated successfully.
Nov 24 19:47:04 compute-0 podman[76538]: 2025-11-24 19:47:04.608698753 +0000 UTC m=+0.061426094 container create 062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333 (image=quay.io/ceph/ceph:v18, name=eager_curie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 19:47:04 compute-0 systemd[1]: Started libpod-conmon-062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333.scope.
Nov 24 19:47:04 compute-0 podman[76538]: 2025-11-24 19:47:04.584252501 +0000 UTC m=+0.036979852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/144040b286427fccfa81366bd635ca8c43e6afec68127a308dffa750886fcc4a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/144040b286427fccfa81366bd635ca8c43e6afec68127a308dffa750886fcc4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/144040b286427fccfa81366bd635ca8c43e6afec68127a308dffa750886fcc4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/144040b286427fccfa81366bd635ca8c43e6afec68127a308dffa750886fcc4a/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:04 compute-0 podman[76538]: 2025-11-24 19:47:04.712299044 +0000 UTC m=+0.165026415 container init 062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333 (image=quay.io/ceph/ceph:v18, name=eager_curie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:47:04 compute-0 podman[76538]: 2025-11-24 19:47:04.725971622 +0000 UTC m=+0.178698963 container start 062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333 (image=quay.io/ceph/ceph:v18, name=eager_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:47:04 compute-0 podman[76538]: 2025-11-24 19:47:04.730355767 +0000 UTC m=+0.183083118 container attach 062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333 (image=quay.io/ceph/ceph:v18, name=eager_curie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 19:47:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 19:47:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3291976322' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
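Note: `config assimilate-conf` reads an ini-style ceph.conf supplied by the client and moves every option it can represent into the mon's central config database, so the bootstrap can afterwards keep only a minimal conf file. An equivalent invocation (the input path is an example):

    import subprocess

    # Feed a legacy conf file to the mons; options they can represent move into
    # the config db, and anything left over is echoed back on stdout.
    out = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    print(out)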
Nov 24 19:47:05 compute-0 systemd[1]: libpod-062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333.scope: Deactivated successfully.
Nov 24 19:47:05 compute-0 podman[76538]: 2025-11-24 19:47:05.240058961 +0000 UTC m=+0.692786312 container died 062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333 (image=quay.io/ceph/ceph:v18, name=eager_curie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-144040b286427fccfa81366bd635ca8c43e6afec68127a308dffa750886fcc4a-merged.mount: Deactivated successfully.
Nov 24 19:47:05 compute-0 podman[76538]: 2025-11-24 19:47:05.293200206 +0000 UTC m=+0.745927537 container remove 062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333 (image=quay.io/ceph/ceph:v18, name=eager_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:05 compute-0 ceph-mon[75677]: mgrmap e4: compute-0.ofslrn(active, since 2s)
Nov 24 19:47:05 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3949020472' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 19:47:05 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3291976322' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 19:47:05 compute-0 systemd[1]: libpod-conmon-062c99fd2efc7e116b5d53c354d2a8e96dbd401ff53ad174675dc0d3fc106333.scope: Deactivated successfully.
Nov 24 19:47:05 compute-0 podman[76593]: 2025-11-24 19:47:05.381563247 +0000 UTC m=+0.054717968 container create b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7 (image=quay.io/ceph/ceph:v18, name=interesting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:47:05 compute-0 systemd[1]: Started libpod-conmon-b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7.scope.
Nov 24 19:47:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bed51665bcdd4df2822ab5ea8076fe2b20eeb8a0ea13095b56d863aa0bd770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bed51665bcdd4df2822ab5ea8076fe2b20eeb8a0ea13095b56d863aa0bd770/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bed51665bcdd4df2822ab5ea8076fe2b20eeb8a0ea13095b56d863aa0bd770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:05 compute-0 podman[76593]: 2025-11-24 19:47:05.363388949 +0000 UTC m=+0.036543700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:05 compute-0 podman[76593]: 2025-11-24 19:47:05.487338474 +0000 UTC m=+0.160493245 container init b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7 (image=quay.io/ceph/ceph:v18, name=interesting_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:05 compute-0 podman[76593]: 2025-11-24 19:47:05.496037122 +0000 UTC m=+0.169191863 container start b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7 (image=quay.io/ceph/ceph:v18, name=interesting_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:47:05 compute-0 podman[76593]: 2025-11-24 19:47:05.500041308 +0000 UTC m=+0.173196059 container attach b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7 (image=quay.io/ceph/ceph:v18, name=interesting_bell, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 19:47:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 24 19:47:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4094735542' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4094735542' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 19:47:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4094735542' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 19:47:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ofslrn(active, since 4s)
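Note: enabling cephadm changes the enabled-module set in the mgrmap, and the active mgr reacts by re-exec'ing itself ("respawning because set of enabled modules changed") so the new set loads in a fresh process; the module-loading messages that follow are the respawned daemon starting over. A sketch of enable-and-verify (poll interval arbitrary):

    import json
    import subprocess
    import time

    subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)

    # The mgr respawns after the mgrmap change, so poll until the module shows up.
    while True:
        mods = json.loads(subprocess.run(
            ["ceph", "mgr", "module", "ls", "--format", "json"],
            capture_output=True, text=True, check=True).stdout)
        if "cephadm" in mods.get("enabled_modules", []):
            break
        time.sleep(2)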
Nov 24 19:47:06 compute-0 systemd[1]: libpod-b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7.scope: Deactivated successfully.
Nov 24 19:47:06 compute-0 podman[76593]: 2025-11-24 19:47:06.348354402 +0000 UTC m=+1.021509153 container died b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7 (image=quay.io/ceph/ceph:v18, name=interesting_bell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0bed51665bcdd4df2822ab5ea8076fe2b20eeb8a0ea13095b56d863aa0bd770-merged.mount: Deactivated successfully.
Nov 24 19:47:06 compute-0 podman[76593]: 2025-11-24 19:47:06.404918897 +0000 UTC m=+1.078073648 container remove b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7 (image=quay.io/ceph/ceph:v18, name=interesting_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 19:47:06 compute-0 systemd[1]: libpod-conmon-b077e65442f387a54b06b6411e1c0146c01820e31d9d1d44af8be272fc4074c7.scope: Deactivated successfully.
Nov 24 19:47:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: ignoring --setuser ceph since I am not root
Nov 24 19:47:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: ignoring --setgroup ceph since I am not root
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: pidfile_write: ignore empty --pid-file
Nov 24 19:47:06 compute-0 podman[76649]: 2025-11-24 19:47:06.510266593 +0000 UTC m=+0.072956547 container create af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43 (image=quay.io/ceph/ceph:v18, name=ecstatic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'alerts'
Nov 24 19:47:06 compute-0 systemd[1]: Started libpod-conmon-af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43.scope.
Nov 24 19:47:06 compute-0 podman[76649]: 2025-11-24 19:47:06.481527318 +0000 UTC m=+0.044217312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a1403c69c53fb8dd7c2b8613cdd39deeba31a4e842009d7b30971de3b67986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a1403c69c53fb8dd7c2b8613cdd39deeba31a4e842009d7b30971de3b67986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a1403c69c53fb8dd7c2b8613cdd39deeba31a4e842009d7b30971de3b67986/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:06 compute-0 podman[76649]: 2025-11-24 19:47:06.618283209 +0000 UTC m=+0.180973163 container init af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43 (image=quay.io/ceph/ceph:v18, name=ecstatic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 19:47:06 compute-0 podman[76649]: 2025-11-24 19:47:06.630534111 +0000 UTC m=+0.193224065 container start af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43 (image=quay.io/ceph/ceph:v18, name=ecstatic_rosalind, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 19:47:06 compute-0 podman[76649]: 2025-11-24 19:47:06.634829434 +0000 UTC m=+0.197519428 container attach af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43 (image=quay.io/ceph/ceph:v18, name=ecstatic_rosalind, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:06.865+0000 7f8637150140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 19:47:06 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'balancer'
Nov 24 19:47:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:07.095+0000 7f8637150140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 19:47:07 compute-0 ceph-mgr[75975]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 19:47:07 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'cephadm'
Nov 24 19:47:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 19:47:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751151749' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 19:47:07 compute-0 ecstatic_rosalind[76690]: {
Nov 24 19:47:07 compute-0 ecstatic_rosalind[76690]:     "epoch": 5,
Nov 24 19:47:07 compute-0 ecstatic_rosalind[76690]:     "available": true,
Nov 24 19:47:07 compute-0 ecstatic_rosalind[76690]:     "active_name": "compute-0.ofslrn",
Nov 24 19:47:07 compute-0 ecstatic_rosalind[76690]:     "num_standby": 0
Nov 24 19:47:07 compute-0 ecstatic_rosalind[76690]: }
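Note: `ceph mgr stat`, run here by the ecstatic_rosalind one-shot container, is the cheap probe for the same information: mgrmap epoch, whether an active mgr exists, its name, and the standby count. Parsed programmatically:

    import json
    import subprocess

    stat = json.loads(subprocess.run(
        ["ceph", "mgr", "stat", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    # e.g. {"epoch": 5, "available": true, "active_name": "compute-0.ofslrn", ...}
    print(stat["active_name"], stat["available"], stat["num_standby"])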
Nov 24 19:47:07 compute-0 systemd[1]: libpod-af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43.scope: Deactivated successfully.
Nov 24 19:47:07 compute-0 podman[76649]: 2025-11-24 19:47:07.234731576 +0000 UTC m=+0.797421530 container died af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43 (image=quay.io/ceph/ceph:v18, name=ecstatic_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-03a1403c69c53fb8dd7c2b8613cdd39deeba31a4e842009d7b30971de3b67986-merged.mount: Deactivated successfully.
Nov 24 19:47:07 compute-0 podman[76649]: 2025-11-24 19:47:07.277791756 +0000 UTC m=+0.840481680 container remove af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43 (image=quay.io/ceph/ceph:v18, name=ecstatic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 19:47:07 compute-0 systemd[1]: libpod-conmon-af58ca85a1792dcd5aa2e2f7e6c128dd4d286d0a2935809353f541e275711c43.scope: Deactivated successfully.
Nov 24 19:47:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4094735542' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 19:47:07 compute-0 ceph-mon[75677]: mgrmap e5: compute-0.ofslrn(active, since 4s)
Nov 24 19:47:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1751151749' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 19:47:07 compute-0 podman[76727]: 2025-11-24 19:47:07.372437461 +0000 UTC m=+0.064780291 container create 13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8 (image=quay.io/ceph/ceph:v18, name=condescending_curie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 19:47:07 compute-0 systemd[1]: Started libpod-conmon-13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8.scope.
Nov 24 19:47:07 compute-0 podman[76727]: 2025-11-24 19:47:07.345449313 +0000 UTC m=+0.037792153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca9814bbb5f327104b33255de579e59daa16940c12d3be5c292cbaaf31b55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca9814bbb5f327104b33255de579e59daa16940c12d3be5c292cbaaf31b55c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ca9814bbb5f327104b33255de579e59daa16940c12d3be5c292cbaaf31b55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:07 compute-0 podman[76727]: 2025-11-24 19:47:07.474036559 +0000 UTC m=+0.166379399 container init 13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8 (image=quay.io/ceph/ceph:v18, name=condescending_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:07 compute-0 podman[76727]: 2025-11-24 19:47:07.484055472 +0000 UTC m=+0.176398302 container start 13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8 (image=quay.io/ceph/ceph:v18, name=condescending_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:47:07 compute-0 podman[76727]: 2025-11-24 19:47:07.488653643 +0000 UTC m=+0.180996473 container attach 13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8 (image=quay.io/ceph/ceph:v18, name=condescending_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:08 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'crash'
Nov 24 19:47:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:09.221+0000 7f8637150140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 19:47:09 compute-0 ceph-mgr[75975]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 19:47:09 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'dashboard'
Nov 24 19:47:10 compute-0 sshd-session[76768]: Invalid user admin from 27.79.44.141 port 43020
Nov 24 19:47:10 compute-0 sshd-session[76768]: Connection closed by invalid user admin 27.79.44.141 port 43020 [preauth]
Nov 24 19:47:10 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'devicehealth'
Nov 24 19:47:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:10.720+0000 7f8637150140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 19:47:10 compute-0 ceph-mgr[75975]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 19:47:10 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 19:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 19:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 19:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]:   from numpy import show_config as show_numpy_config
Nov 24 19:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:11.205+0000 7f8637150140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'influx'
Nov 24 19:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:11.421+0000 7f8637150140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'insights'
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'iostat'
Nov 24 19:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:11.853+0000 7f8637150140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 19:47:11 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'k8sevents'
Nov 24 19:47:13 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'localpool'
Nov 24 19:47:13 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 19:47:14 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'mirroring'
Nov 24 19:47:14 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'nfs'
Nov 24 19:47:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:15.140+0000 7f8637150140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 19:47:15 compute-0 ceph-mgr[75975]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 19:47:15 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'orchestrator'
Nov 24 19:47:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:15.760+0000 7f8637150140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 19:47:15 compute-0 ceph-mgr[75975]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 19:47:15 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 19:47:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:16.002+0000 7f8637150140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'osd_support'
Nov 24 19:47:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:16.216+0000 7f8637150140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 19:47:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:16.462+0000 7f8637150140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'progress'
Nov 24 19:47:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:16.680+0000 7f8637150140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 19:47:16 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'prometheus'
Nov 24 19:47:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:17.588+0000 7f8637150140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 19:47:17 compute-0 ceph-mgr[75975]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 19:47:17 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'rbd_support'
Nov 24 19:47:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:17.864+0000 7f8637150140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 19:47:17 compute-0 ceph-mgr[75975]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 19:47:17 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'restful'
Nov 24 19:47:18 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'rgw'
Nov 24 19:47:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:19.202+0000 7f8637150140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 19:47:19 compute-0 ceph-mgr[75975]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 19:47:19 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'rook'
Nov 24 19:47:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:21.089+0000 7f8637150140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'selftest'
Nov 24 19:47:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:21.311+0000 7f8637150140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'snap_schedule'
Nov 24 19:47:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:21.538+0000 7f8637150140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'stats'
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'status'
Nov 24 19:47:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:21.999+0000 7f8637150140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 19:47:21 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'telegraf'
Nov 24 19:47:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:22.215+0000 7f8637150140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 19:47:22 compute-0 ceph-mgr[75975]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 19:47:22 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'telemetry'
Nov 24 19:47:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:22.772+0000 7f8637150140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 19:47:22 compute-0 ceph-mgr[75975]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 19:47:22 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 19:47:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:23.384+0000 7f8637150140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 19:47:23 compute-0 ceph-mgr[75975]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 19:47:23 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'volumes'
Nov 24 19:47:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:24.019+0000 7f8637150140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr[py] Loading python module 'zabbix'
Nov 24 19:47:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T19:47:24.234+0000 7f8637150140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ofslrn restarted
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: ms_deliver_dispatch: unhandled message 0x558056f011e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ofslrn
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr handle_mgr_map Activating!
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr handle_mgr_map I am now activating
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ofslrn(active, starting, since 0.0204813s)
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ofslrn", "id": "compute-0.ofslrn"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ofslrn", "id": "compute-0.ofslrn"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: balancer
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Manager daemon compute-0.ofslrn is now available
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Starting
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:47:24
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] No pools available
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: Active manager daemon compute-0.ofslrn restarted
Nov 24 19:47:24 compute-0 ceph-mon[75677]: Activating manager daemon compute-0.ofslrn
Nov 24 19:47:24 compute-0 ceph-mon[75677]: osdmap e2: 0 total, 0 up, 0 in
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mgrmap e6: compute-0.ofslrn(active, starting, since 0.0204813s)
Nov 24 19:47:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ofslrn", "id": "compute-0.ofslrn"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: Manager daemon compute-0.ofslrn is now available
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: cephadm
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: crash
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: devicehealth
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Starting
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: iostat
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: nfs
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: orchestrator
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: pg_autoscaler
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: progress
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/mirror_snapshot_schedule"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/mirror_snapshot_schedule"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/trash_purge_schedule"} v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/trash_purge_schedule"}]: dispatch
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [progress INFO root] Loading...
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [progress INFO root] No stored events to load
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [progress INFO root] Loaded [] historic events
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] recovery thread starting
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] starting setup
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: rbd_support
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: restful
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: status
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: telemetry
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [restful WARNING root] server not running: no certificate configured
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] PerfHandler: starting
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TaskHandler: starting
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] setup complete
Nov 24 19:47:24 compute-0 ceph-mgr[75975]: mgr load Constructed class from module: volumes
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 24 19:47:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 19:47:25 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ofslrn(active, since 1.03151s)
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 19:47:25 compute-0 condescending_curie[76744]: {
Nov 24 19:47:25 compute-0 condescending_curie[76744]:     "mgrmap_epoch": 7,
Nov 24 19:47:25 compute-0 condescending_curie[76744]:     "initialized": true
Nov 24 19:47:25 compute-0 condescending_curie[76744]: }
Nov 24 19:47:25 compute-0 systemd[1]: libpod-13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8.scope: Deactivated successfully.
Nov 24 19:47:25 compute-0 podman[76727]: 2025-11-24 19:47:25.309239544 +0000 UTC m=+18.001582334 container died 13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8 (image=quay.io/ceph/ceph:v18, name=condescending_curie, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:25 compute-0 ceph-mon[75677]: Found migration_current of "None". Setting to last migration.
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/mirror_snapshot_schedule"}]: dispatch
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ofslrn/trash_purge_schedule"}]: dispatch
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:25 compute-0 ceph-mon[75677]: mgrmap e7: compute-0.ofslrn(active, since 1.03151s)
Nov 24 19:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-15ca9814bbb5f327104b33255de579e59daa16940c12d3be5c292cbaaf31b55c-merged.mount: Deactivated successfully.
Nov 24 19:47:25 compute-0 podman[76727]: 2025-11-24 19:47:25.368942441 +0000 UTC m=+18.061285271 container remove 13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8 (image=quay.io/ceph/ceph:v18, name=condescending_curie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 19:47:25 compute-0 systemd[1]: libpod-conmon-13b9c106224fe83d74342781ebd41ef64826146b2be7b82e8f2eac5ce4089df8.scope: Deactivated successfully.
Nov 24 19:47:25 compute-0 podman[76907]: 2025-11-24 19:47:25.448342947 +0000 UTC m=+0.056075344 container create 80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d (image=quay.io/ceph/ceph:v18, name=pensive_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:25 compute-0 systemd[1]: Started libpod-conmon-80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d.scope.
Nov 24 19:47:25 compute-0 podman[76907]: 2025-11-24 19:47:25.420764432 +0000 UTC m=+0.028496889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa74500039f3a5389704370cde310f666d466b37b7f68781a753e8beb04d92f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa74500039f3a5389704370cde310f666d466b37b7f68781a753e8beb04d92f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa74500039f3a5389704370cde310f666d466b37b7f68781a753e8beb04d92f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:25 compute-0 podman[76907]: 2025-11-24 19:47:25.542845057 +0000 UTC m=+0.150577454 container init 80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d (image=quay.io/ceph/ceph:v18, name=pensive_nash, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:25 compute-0 podman[76907]: 2025-11-24 19:47:25.553513017 +0000 UTC m=+0.161245424 container start 80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d (image=quay.io/ceph/ceph:v18, name=pensive_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:47:25 compute-0 podman[76907]: 2025-11-24 19:47:25.557489272 +0000 UTC m=+0.165221669 container attach 80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d (image=quay.io/ceph/ceph:v18, name=pensive_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: [cephadm INFO cherrypy.error] [24/Nov/2025:19:47:25] ENGINE Bus STARTING
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : [24/Nov/2025:19:47:25] ENGINE Bus STARTING
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: [cephadm INFO cherrypy.error] [24/Nov/2025:19:47:25] ENGINE Serving on https://192.168.122.100:7150
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : [24/Nov/2025:19:47:25] ENGINE Serving on https://192.168.122.100:7150
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: [cephadm INFO cherrypy.error] [24/Nov/2025:19:47:25] ENGINE Client ('192.168.122.100', 42634) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : [24/Nov/2025:19:47:25] ENGINE Client ('192.168.122.100', 42634) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: [cephadm INFO cherrypy.error] [24/Nov/2025:19:47:25] ENGINE Serving on http://192.168.122.100:8765
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : [24/Nov/2025:19:47:25] ENGINE Serving on http://192.168.122.100:8765
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: [cephadm INFO cherrypy.error] [24/Nov/2025:19:47:25] ENGINE Bus STARTED
Nov 24 19:47:25 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : [24/Nov/2025:19:47:25] ENGINE Bus STARTED
Nov 24 19:47:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 19:47:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 24 19:47:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 19:47:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:26 compute-0 systemd[1]: libpod-80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d.scope: Deactivated successfully.
Nov 24 19:47:26 compute-0 podman[76907]: 2025-11-24 19:47:26.104009323 +0000 UTC m=+0.711741720 container died 80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d (image=quay.io/ceph/ceph:v18, name=pensive_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa74500039f3a5389704370cde310f666d466b37b7f68781a753e8beb04d92f8-merged.mount: Deactivated successfully.
Nov 24 19:47:26 compute-0 podman[76907]: 2025-11-24 19:47:26.155385301 +0000 UTC m=+0.763117668 container remove 80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d (image=quay.io/ceph/ceph:v18, name=pensive_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 19:47:26 compute-0 systemd[1]: libpod-conmon-80f5379004a1260e6370ee15947f8457fe0413fe878f63d742f8ab6d3ee4ae4d.scope: Deactivated successfully.
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.226475187 +0000 UTC m=+0.052041386 container create 5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a (image=quay.io/ceph/ceph:v18, name=mystifying_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 19:47:26 compute-0 systemd[1]: Started libpod-conmon-5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a.scope.
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.200343502 +0000 UTC m=+0.025909751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3dc0afc3616b85630e2789924ba47b57baa3113f21ce9e2f802ae76703486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3dc0afc3616b85630e2789924ba47b57baa3113f21ce9e2f802ae76703486/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b3dc0afc3616b85630e2789924ba47b57baa3113f21ce9e2f802ae76703486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.329668227 +0000 UTC m=+0.155234486 container init 5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a (image=quay.io/ceph/ceph:v18, name=mystifying_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:47:26 compute-0 ceph-mon[75677]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 19:47:26 compute-0 ceph-mon[75677]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 19:47:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.339866615 +0000 UTC m=+0.165432814 container start 5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a (image=quay.io/ceph/ceph:v18, name=mystifying_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.343710316 +0000 UTC m=+0.169276515 container attach 5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a (image=quay.io/ceph/ceph:v18, name=mystifying_diffie, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 19:47:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919504 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 24 19:47:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: [cephadm INFO root] Set ssh ssh_user
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 24 19:47:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 24 19:47:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: [cephadm INFO root] Set ssh ssh_config
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 24 19:47:26 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 24 19:47:26 compute-0 mystifying_diffie[77000]: ssh user set to ceph-admin. sudo will be used
Nov 24 19:47:26 compute-0 systemd[1]: libpod-5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a.scope: Deactivated successfully.
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.939418678 +0000 UTC m=+0.764984897 container died 5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a (image=quay.io/ceph/ceph:v18, name=mystifying_diffie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b3dc0afc3616b85630e2789924ba47b57baa3113f21ce9e2f802ae76703486-merged.mount: Deactivated successfully.
Nov 24 19:47:26 compute-0 podman[76984]: 2025-11-24 19:47:26.993677262 +0000 UTC m=+0.819243461 container remove 5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a (image=quay.io/ceph/ceph:v18, name=mystifying_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 19:47:27 compute-0 systemd[1]: libpod-conmon-5e394cb9a3e1dbf6147963d23da517eee8542c1f025bc8921a37bd373833517a.scope: Deactivated successfully.
Nov 24 19:47:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ofslrn(active, since 2s)
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.10362633 +0000 UTC m=+0.076218033 container create 18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf (image=quay.io/ceph/ceph:v18, name=strange_haibt, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:47:27 compute-0 systemd[1]: Started libpod-conmon-18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf.scope.
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.070948132 +0000 UTC m=+0.043539885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43bef8eeeac6fb00419becb2a9491af6a2d203a17efd6e625ef8e5f853017c8/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43bef8eeeac6fb00419becb2a9491af6a2d203a17efd6e625ef8e5f853017c8/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43bef8eeeac6fb00419becb2a9491af6a2d203a17efd6e625ef8e5f853017c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43bef8eeeac6fb00419becb2a9491af6a2d203a17efd6e625ef8e5f853017c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43bef8eeeac6fb00419becb2a9491af6a2d203a17efd6e625ef8e5f853017c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.199423575 +0000 UTC m=+0.172015278 container init 18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf (image=quay.io/ceph/ceph:v18, name=strange_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.209645693 +0000 UTC m=+0.182237396 container start 18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf (image=quay.io/ceph/ceph:v18, name=strange_haibt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.214420429 +0000 UTC m=+0.187012142 container attach 18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf (image=quay.io/ceph/ceph:v18, name=strange_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:47:27 compute-0 ceph-mon[75677]: [24/Nov/2025:19:47:25] ENGINE Bus STARTING
Nov 24 19:47:27 compute-0 ceph-mon[75677]: [24/Nov/2025:19:47:25] ENGINE Serving on https://192.168.122.100:7150
Nov 24 19:47:27 compute-0 ceph-mon[75677]: [24/Nov/2025:19:47:25] ENGINE Client ('192.168.122.100', 42634) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 19:47:27 compute-0 ceph-mon[75677]: [24/Nov/2025:19:47:25] ENGINE Serving on http://192.168.122.100:8765
Nov 24 19:47:27 compute-0 ceph-mon[75677]: [24/Nov/2025:19:47:25] ENGINE Bus STARTED
Nov 24 19:47:27 compute-0 ceph-mon[75677]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:27 compute-0 ceph-mon[75677]: mgrmap e8: compute-0.ofslrn(active, since 2s)
Nov 24 19:47:27 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 24 19:47:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:27 compute-0 ceph-mgr[75975]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 24 19:47:27 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 24 19:47:27 compute-0 ceph-mgr[75975]: [cephadm INFO root] Set ssh private key
Nov 24 19:47:27 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 24 19:47:27 compute-0 systemd[1]: libpod-18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf.scope: Deactivated successfully.
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.749731784 +0000 UTC m=+0.722323497 container died 18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf (image=quay.io/ceph/ceph:v18, name=strange_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c43bef8eeeac6fb00419becb2a9491af6a2d203a17efd6e625ef8e5f853017c8-merged.mount: Deactivated successfully.
Nov 24 19:47:27 compute-0 podman[77039]: 2025-11-24 19:47:27.806186597 +0000 UTC m=+0.778778310 container remove 18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf (image=quay.io/ceph/ceph:v18, name=strange_haibt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:27 compute-0 systemd[1]: libpod-conmon-18c87bed9263df9edcdcb3fe9c4f2ca862264ca14f38a3722c72f68637664daf.scope: Deactivated successfully.
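Note: the repeating container create/init/start/attach/died/remove cycles with auto-generated names (strange_haibt, priceless_thompson, ...) appear to be short-lived podman helpers in which the ceph CLI is run against the quay.io/ceph/ceph:v18 image; the random names indicate no --name was supplied. A rough single-shot equivalent:
    podman run --rm quay.io/ceph/ceph:v18 ceph -s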
Nov 24 19:47:27 compute-0 podman[77094]: 2025-11-24 19:47:27.896647233 +0000 UTC m=+0.060787818 container create 0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242 (image=quay.io/ceph/ceph:v18, name=priceless_thompson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:27 compute-0 systemd[1]: Started libpod-conmon-0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242.scope.
Nov 24 19:47:27 compute-0 podman[77094]: 2025-11-24 19:47:27.869264663 +0000 UTC m=+0.033405298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbb7a393f383d8f39cad799100b9b3f3d0260c193292b91fef345cd819d5b2d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbb7a393f383d8f39cad799100b9b3f3d0260c193292b91fef345cd819d5b2d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbb7a393f383d8f39cad799100b9b3f3d0260c193292b91fef345cd819d5b2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbb7a393f383d8f39cad799100b9b3f3d0260c193292b91fef345cd819d5b2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbb7a393f383d8f39cad799100b9b3f3d0260c193292b91fef345cd819d5b2d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
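Note: the "supports timestamps until 2038" kernel lines are informational. Each bind mount into the overlay touches an XFS filesystem formatted without the bigtime feature, so its inode timestamps cap at 2038-01-19 (0x7fffffff). With a recent xfsprogs this can be confirmed per mount, e.g.:
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'   # bigtime=1 removes the 2038 cap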
Nov 24 19:47:27 compute-0 podman[77094]: 2025-11-24 19:47:27.990571828 +0000 UTC m=+0.154712433 container init 0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242 (image=quay.io/ceph/ceph:v18, name=priceless_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 19:47:28 compute-0 podman[77094]: 2025-11-24 19:47:28.001773682 +0000 UTC m=+0.165914247 container start 0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242 (image=quay.io/ceph/ceph:v18, name=priceless_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:28 compute-0 podman[77094]: 2025-11-24 19:47:28.006096346 +0000 UTC m=+0.170236911 container attach 0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242 (image=quay.io/ceph/ceph:v18, name=priceless_thompson, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 19:47:28 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
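Note: "Not sending PG status to monitor yet, waiting for OSDs" recurs throughout this section and is expected this early in bootstrap: no OSDs exist yet, so the mgr has no PG map to report. Quick confirmation:
    ceph osd stat    # expect: 0 osds during bootstrap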
Nov 24 19:47:28 compute-0 ceph-mon[75677]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:28 compute-0 ceph-mon[75677]: Set ssh ssh_user
Nov 24 19:47:28 compute-0 ceph-mon[75677]: Set ssh ssh_config
Nov 24 19:47:28 compute-0 ceph-mon[75677]: ssh user set to ceph-admin. sudo will be used
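Note: cephadm set-user switches the orchestrator's SSH login from root to ceph-admin; since the user is unprivileged, every remote command is wrapped in sudo (visible in the sudo trail below), so the account needs passwordless sudo on each managed host. Illustrative setup:
    ceph cephadm set-user ceph-admin
    echo 'ceph-admin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ceph-admin   # per managed host, illustrative; validate with visudo -c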
Nov 24 19:47:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:28 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 24 19:47:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:28 compute-0 ceph-mgr[75975]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 24 19:47:28 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 24 19:47:28 compute-0 systemd[1]: libpod-0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242.scope: Deactivated successfully.
Nov 24 19:47:28 compute-0 podman[77094]: 2025-11-24 19:47:28.553487749 +0000 UTC m=+0.717628324 container died 0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242 (image=quay.io/ceph/ceph:v18, name=priceless_thompson, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-acbb7a393f383d8f39cad799100b9b3f3d0260c193292b91fef345cd819d5b2d-merged.mount: Deactivated successfully.
Nov 24 19:47:28 compute-0 podman[77094]: 2025-11-24 19:47:28.611458811 +0000 UTC m=+0.775599376 container remove 0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242 (image=quay.io/ceph/ceph:v18, name=priceless_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:47:28 compute-0 systemd[1]: libpod-conmon-0d55fe1d04fb7790bd6c998657113e18649583f58eb0df5eb9e47a3d249c9242.scope: Deactivated successfully.
Nov 24 19:47:28 compute-0 podman[77148]: 2025-11-24 19:47:28.706825665 +0000 UTC m=+0.066183139 container create bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732 (image=quay.io/ceph/ceph:v18, name=friendly_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:28 compute-0 systemd[1]: Started libpod-conmon-bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732.scope.
Nov 24 19:47:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:28 compute-0 podman[77148]: 2025-11-24 19:47:28.678773959 +0000 UTC m=+0.038131483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a91337b03ba8fcac87aa80ce31112cba0f9f6ac9de8827b8bdc8218236fc14a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a91337b03ba8fcac87aa80ce31112cba0f9f6ac9de8827b8bdc8218236fc14a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a91337b03ba8fcac87aa80ce31112cba0f9f6ac9de8827b8bdc8218236fc14a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:28 compute-0 podman[77148]: 2025-11-24 19:47:28.804910921 +0000 UTC m=+0.164268455 container init bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732 (image=quay.io/ceph/ceph:v18, name=friendly_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 19:47:28 compute-0 podman[77148]: 2025-11-24 19:47:28.816233678 +0000 UTC m=+0.175591152 container start bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732 (image=quay.io/ceph/ceph:v18, name=friendly_poitras, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:47:28 compute-0 podman[77148]: 2025-11-24 19:47:28.820607902 +0000 UTC m=+0.179965376 container attach bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732 (image=quay.io/ceph/ceph:v18, name=friendly_poitras, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:47:29 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:29 compute-0 friendly_poitras[77165]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpq/naS75i0p6uJaWnKCab5lu4CrqTN3+CHRWdI+y0JcjMQFlQEWKCLvjxtaSQM6esgeTOjC+4a+QI5roQJcOW+BopiuhGWJsBjXZz17AO3TzaRSnIqxDFz7hlJqQ2tQOtoFOB0lg0IIudECoXkMDCVOf7aTPmfDgltRwyeteDY3TauNcgjQr1iawQyPPhU6rNPlQqeekOd/RbZF9mcJ2V4SGOE6dQl9eEIoL5piyfrYRupkSp7KYgGOi/fJ2cl9SR01oPKgVhTGXqLztmey9gOe0XzixE0KQf5Rdt8atAWOXjyCKTZCR5Jjd3mmAaabFvixjubynPR5Nd2J+egWu7RwqG8CexVPBCeJR8PkSsogclTJ1jhskViWp/M8AmzE1RIVW3TDglNV124rHHb+BDIZKYsu5aTMJ1fKrlOcEGQcMbRpWHPPsg1Qh/QasVc0abma8JclzopUW+SEQHtNVggY7Ac1KwesKGTAOXS/kQk0D939JLGIW/wq4IQJPggd0= zuul@controller
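Note: the key echoed by the helper is the output of cephadm get-pub-key dispatched just above. When enrolling additional hosts, the same public key is installed for the SSH user, e.g.:
    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub ceph-admin@<new-host>    # <new-host> is a placeholder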
Nov 24 19:47:29 compute-0 ceph-mon[75677]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:29 compute-0 ceph-mon[75677]: Set ssh ssh_identity_key
Nov 24 19:47:29 compute-0 ceph-mon[75677]: Set ssh private key
Nov 24 19:47:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:29 compute-0 systemd[1]: libpod-bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732.scope: Deactivated successfully.
Nov 24 19:47:29 compute-0 podman[77148]: 2025-11-24 19:47:29.356929585 +0000 UTC m=+0.716287069 container died bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732 (image=quay.io/ceph/ceph:v18, name=friendly_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a91337b03ba8fcac87aa80ce31112cba0f9f6ac9de8827b8bdc8218236fc14a-merged.mount: Deactivated successfully.
Nov 24 19:47:29 compute-0 podman[77148]: 2025-11-24 19:47:29.409892436 +0000 UTC m=+0.769249900 container remove bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732 (image=quay.io/ceph/ceph:v18, name=friendly_poitras, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:29 compute-0 systemd[1]: libpod-conmon-bad22f7e77e3eee1d0a5ae89b5c9281daf79405b127cace6a1fa3ce06eeed732.scope: Deactivated successfully.
Nov 24 19:47:29 compute-0 podman[77203]: 2025-11-24 19:47:29.515013586 +0000 UTC m=+0.048876554 container create 711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60 (image=quay.io/ceph/ceph:v18, name=optimistic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:29 compute-0 systemd[1]: Started libpod-conmon-711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60.scope.
Nov 24 19:47:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef44ac29f6ad8793cbfc5613311bd4bfb7cb4a94936919b4ff6d7b88fdb6c4b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef44ac29f6ad8793cbfc5613311bd4bfb7cb4a94936919b4ff6d7b88fdb6c4b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef44ac29f6ad8793cbfc5613311bd4bfb7cb4a94936919b4ff6d7b88fdb6c4b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:29 compute-0 podman[77203]: 2025-11-24 19:47:29.492271689 +0000 UTC m=+0.026134657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:29 compute-0 podman[77203]: 2025-11-24 19:47:29.59934244 +0000 UTC m=+0.133205438 container init 711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60 (image=quay.io/ceph/ceph:v18, name=optimistic_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:29 compute-0 podman[77203]: 2025-11-24 19:47:29.608506551 +0000 UTC m=+0.142369509 container start 711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60 (image=quay.io/ceph/ceph:v18, name=optimistic_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:47:29 compute-0 podman[77203]: 2025-11-24 19:47:29.612317991 +0000 UTC m=+0.146181009 container attach 711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60 (image=quay.io/ceph/ceph:v18, name=optimistic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:47:30 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:30 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:30 compute-0 ceph-mon[75677]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:30 compute-0 ceph-mon[75677]: Set ssh ssh_identity_pub
Nov 24 19:47:30 compute-0 ceph-mon[75677]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:30 compute-0 sshd-session[77245]: Accepted publickey for ceph-admin from 192.168.122.100 port 37092 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:30 compute-0 systemd-logind[795]: New session 21 of user ceph-admin.
Nov 24 19:47:30 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 19:47:30 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 19:47:30 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 19:47:30 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 19:47:30 compute-0 systemd[77249]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:30 compute-0 sshd-session[77260]: Accepted publickey for ceph-admin from 192.168.122.100 port 37106 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:30 compute-0 systemd[77249]: Queued start job for default target Main User Target.
Nov 24 19:47:30 compute-0 systemd-logind[795]: New session 23 of user ceph-admin.
Nov 24 19:47:30 compute-0 systemd[77249]: Created slice User Application Slice.
Nov 24 19:47:30 compute-0 systemd[77249]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 19:47:30 compute-0 systemd[77249]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 19:47:30 compute-0 systemd[77249]: Reached target Paths.
Nov 24 19:47:30 compute-0 systemd[77249]: Reached target Timers.
Nov 24 19:47:30 compute-0 systemd[77249]: Starting D-Bus User Message Bus Socket...
Nov 24 19:47:30 compute-0 systemd[77249]: Starting Create User's Volatile Files and Directories...
Nov 24 19:47:30 compute-0 systemd[77249]: Listening on D-Bus User Message Bus Socket.
Nov 24 19:47:30 compute-0 systemd[77249]: Reached target Sockets.
Nov 24 19:47:30 compute-0 systemd[77249]: Finished Create User's Volatile Files and Directories.
Nov 24 19:47:30 compute-0 systemd[77249]: Reached target Basic System.
Nov 24 19:47:30 compute-0 systemd[77249]: Reached target Main User Target.
Nov 24 19:47:30 compute-0 systemd[77249]: Startup finished in 162ms.
Nov 24 19:47:30 compute-0 systemd[1]: Started User Manager for UID 42477.
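Note: each orchestrator SSH login spins up (and later tears down) a per-user systemd manager for ceph-admin (UID 42477). If that churn is undesirable in the journal, lingering keeps one user manager resident across sessions:
    loginctl enable-linger ceph-admin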
Nov 24 19:47:30 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 24 19:47:30 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 24 19:47:30 compute-0 sshd-session[77245]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:30 compute-0 sshd-session[77260]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:30 compute-0 sudo[77269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:30 compute-0 sudo[77269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:30 compute-0 sudo[77269]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:30 compute-0 sudo[77294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:30 compute-0 sudo[77294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:30 compute-0 sudo[77294]: pam_unix(sudo:session): session closed for user root
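Note: this probe pattern repeats before every remote operation: sudo /bin/true verifies that passwordless sudo works, then sudo which python3 locates the interpreter that will run the copied cephadm script. A manual reproduction (identity-file path illustrative):
    ssh -i ~/.ssh/ceph-admin.key ceph-admin@compute-0 'sudo true && sudo which python3'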
Nov 24 19:47:31 compute-0 sshd-session[77319]: Accepted publickey for ceph-admin from 192.168.122.100 port 37112 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:31 compute-0 systemd-logind[795]: New session 24 of user ceph-admin.
Nov 24 19:47:31 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 24 19:47:31 compute-0 sshd-session[77319]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:31 compute-0 sudo[77323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:31 compute-0 sudo[77323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:31 compute-0 sudo[77323]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:31 compute-0 sudo[77348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 24 19:47:31 compute-0 sudo[77348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:31 compute-0 ceph-mon[75677]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:31 compute-0 sudo[77348]: pam_unix(sudo:session): session closed for user root
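Note: check-host validates the node before enrollment; among other prerequisites it confirms the hostname matches --expect-hostname and that a container engine, systemd, and time synchronization are available. It can also be run by hand:
    sudo cephadm check-host --expect-hostname compute-0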
Nov 24 19:47:31 compute-0 sshd-session[77373]: Accepted publickey for ceph-admin from 192.168.122.100 port 37120 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:31 compute-0 systemd-logind[795]: New session 25 of user ceph-admin.
Nov 24 19:47:31 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 24 19:47:31 compute-0 sshd-session[77373]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053006 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:31 compute-0 sudo[77377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:31 compute-0 sudo[77377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:31 compute-0 sudo[77377]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:31 compute-0 sudo[77402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 24 19:47:31 compute-0 sudo[77402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:31 compute-0 sudo[77402]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:31 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 24 19:47:31 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 24 19:47:32 compute-0 sshd-session[77427]: Accepted publickey for ceph-admin from 192.168.122.100 port 37124 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:32 compute-0 systemd-logind[795]: New session 26 of user ceph-admin.
Nov 24 19:47:32 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 24 19:47:32 compute-0 sshd-session[77427]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:32 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:32 compute-0 sudo[77431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:32 compute-0 sudo[77431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:32 compute-0 sudo[77431]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:32 compute-0 sudo[77456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:32 compute-0 sudo[77456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:32 compute-0 sudo[77456]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:32 compute-0 ceph-mon[75677]: Deploying cephadm binary to compute-0
Nov 24 19:47:32 compute-0 sshd-session[77481]: Accepted publickey for ceph-admin from 192.168.122.100 port 37132 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:32 compute-0 systemd-logind[795]: New session 27 of user ceph-admin.
Nov 24 19:47:32 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 24 19:47:32 compute-0 sshd-session[77481]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:32 compute-0 sudo[77485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:32 compute-0 sudo[77485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:32 compute-0 sudo[77485]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:32 compute-0 sudo[77510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:32 compute-0 sudo[77510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:32 compute-0 sudo[77510]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:33 compute-0 sshd-session[77535]: Accepted publickey for ceph-admin from 192.168.122.100 port 37140 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:33 compute-0 systemd-logind[795]: New session 28 of user ceph-admin.
Nov 24 19:47:33 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 24 19:47:33 compute-0 sshd-session[77535]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:33 compute-0 sudo[77539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:33 compute-0 sudo[77539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:33 compute-0 sudo[77539]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:33 compute-0 sudo[77564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 24 19:47:33 compute-0 sudo[77564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:33 compute-0 sudo[77564]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:33 compute-0 sshd-session[77589]: Accepted publickey for ceph-admin from 192.168.122.100 port 37148 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:33 compute-0 systemd-logind[795]: New session 29 of user ceph-admin.
Nov 24 19:47:33 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 24 19:47:33 compute-0 sshd-session[77589]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:33 compute-0 sudo[77593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:33 compute-0 sudo[77593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:33 compute-0 sudo[77593]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:33 compute-0 sudo[77618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:33 compute-0 sudo[77618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:33 compute-0 sudo[77618]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:34 compute-0 sshd-session[77643]: Accepted publickey for ceph-admin from 192.168.122.100 port 37160 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:34 compute-0 systemd-logind[795]: New session 30 of user ceph-admin.
Nov 24 19:47:34 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 24 19:47:34 compute-0 sshd-session[77643]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:34 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:34 compute-0 sudo[77647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:34 compute-0 sudo[77647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:34 compute-0 sudo[77647]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:34 compute-0 sudo[77672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 24 19:47:34 compute-0 sudo[77672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:34 compute-0 sudo[77672]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:34 compute-0 sshd-session[77697]: Accepted publickey for ceph-admin from 192.168.122.100 port 37168 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:34 compute-0 systemd-logind[795]: New session 31 of user ceph-admin.
Nov 24 19:47:34 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 24 19:47:34 compute-0 sshd-session[77697]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:35 compute-0 sshd-session[77724]: Accepted publickey for ceph-admin from 192.168.122.100 port 37180 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:35 compute-0 systemd-logind[795]: New session 32 of user ceph-admin.
Nov 24 19:47:35 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 24 19:47:35 compute-0 sshd-session[77724]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:35 compute-0 sudo[77728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:35 compute-0 sudo[77728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:35 compute-0 sudo[77728]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:35 compute-0 sudo[77753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 24 19:47:35 compute-0 sudo[77753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:35 compute-0 sudo[77753]: pam_unix(sudo:session): session closed for user root
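Note: the mkdir/touch/chown/chmod/mv trail above is the cephadm binary deployment: the file is staged as *.new under /tmp/cephadm-<fsid>/, made writable by the SSH user, set to mode 644, then renamed onto its final /var/lib/ceph/<fsid>/ path so the orchestrator never reads a partially written script. Condensed, with the fsid and digest exactly as logged:
    FSID=05e060a3-406b-57f0-89d2-ec35f5b09305
    DIGEST=31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
    STAGE=/tmp/cephadm-$FSID/var/lib/ceph/$FSID
    mkdir -p "$STAGE" && touch "$STAGE/cephadm.$DIGEST.new"
    chown -R ceph-admin "/tmp/cephadm-$FSID" && chmod 644 "$STAGE/cephadm.$DIGEST.new"
    mv "$STAGE/cephadm.$DIGEST.new" "/var/lib/ceph/$FSID/cephadm.$DIGEST"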
Nov 24 19:47:35 compute-0 sshd-session[77778]: Accepted publickey for ceph-admin from 192.168.122.100 port 37188 ssh2: RSA SHA256:SqoBM8S/ckCnYxebZz3iu2IyoOvxh/QVUIlcNZve+n8
Nov 24 19:47:35 compute-0 systemd-logind[795]: New session 33 of user ceph-admin.
Nov 24 19:47:35 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 24 19:47:35 compute-0 sshd-session[77778]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 19:47:35 compute-0 sudo[77782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:35 compute-0 sudo[77782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:35 compute-0 sudo[77782]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:36 compute-0 sudo[77807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 24 19:47:36 compute-0 sudo[77807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:36 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:36 compute-0 sudo[77807]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 19:47:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:36 compute-0 ceph-mgr[75975]: [cephadm INFO root] Added host compute-0
Nov 24 19:47:36 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 19:47:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 19:47:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:36 compute-0 optimistic_rubin[77219]: Added host 'compute-0' with addr '192.168.122.100'
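Note: this completes the orch host add dispatched earlier; the host list is persisted through the mon config-key store (mgr/cephadm/inventory, per the handle_command line above). Verification:
    ceph orch host add compute-0 192.168.122.100    # as dispatched
    ceph orch host ls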
Nov 24 19:47:36 compute-0 systemd[1]: libpod-711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60.scope: Deactivated successfully.
Nov 24 19:47:36 compute-0 podman[77203]: 2025-11-24 19:47:36.427426877 +0000 UTC m=+6.961289835 container died 711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60 (image=quay.io/ceph/ceph:v18, name=optimistic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef44ac29f6ad8793cbfc5613311bd4bfb7cb4a94936919b4ff6d7b88fdb6c4b3-merged.mount: Deactivated successfully.
Nov 24 19:47:36 compute-0 sudo[77853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:36 compute-0 podman[77203]: 2025-11-24 19:47:36.492055834 +0000 UTC m=+7.025918802 container remove 711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60 (image=quay.io/ceph/ceph:v18, name=optimistic_rubin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:47:36 compute-0 sudo[77853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:36 compute-0 sudo[77853]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:36 compute-0 systemd[1]: libpod-conmon-711640583a7bd7e225195a09cff5edf63b8b257b7a230212dd3274c7177cee60.scope: Deactivated successfully.
Nov 24 19:47:36 compute-0 podman[77891]: 2025-11-24 19:47:36.599997898 +0000 UTC m=+0.072720571 container create 10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b (image=quay.io/ceph/ceph:v18, name=awesome_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:36 compute-0 sudo[77893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:36 compute-0 sudo[77893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:36 compute-0 sudo[77893]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:36 compute-0 systemd[1]: Started libpod-conmon-10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b.scope.
Nov 24 19:47:36 compute-0 podman[77891]: 2025-11-24 19:47:36.569778265 +0000 UTC m=+0.042500978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c06b3463bc103f8f7a44d2761f5fe8d1ad9a54588578a76fc0a5462fd7cf32f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c06b3463bc103f8f7a44d2761f5fe8d1ad9a54588578a76fc0a5462fd7cf32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c06b3463bc103f8f7a44d2761f5fe8d1ad9a54588578a76fc0a5462fd7cf32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:36 compute-0 sudo[77931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:36 compute-0 sudo[77931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:36 compute-0 sudo[77931]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:36 compute-0 podman[77891]: 2025-11-24 19:47:36.727128777 +0000 UTC m=+0.199851480 container init 10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b (image=quay.io/ceph/ceph:v18, name=awesome_liskov, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:36 compute-0 podman[77891]: 2025-11-24 19:47:36.741479653 +0000 UTC m=+0.214202316 container start 10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b (image=quay.io/ceph/ceph:v18, name=awesome_liskov, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:47:36 compute-0 podman[77891]: 2025-11-24 19:47:36.745538599 +0000 UTC m=+0.218261272 container attach 10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b (image=quay.io/ceph/ceph:v18, name=awesome_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:36 compute-0 sudo[77961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 24 19:47:36 compute-0 sudo[77961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.137474701 +0000 UTC m=+0.046377429 container create 30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049 (image=quay.io/ceph/ceph:v18, name=cool_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:47:37 compute-0 systemd[1]: Started libpod-conmon-30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049.scope.
Nov 24 19:47:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.210254162 +0000 UTC m=+0.119156910 container init 30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049 (image=quay.io/ceph/ceph:v18, name=cool_gauss, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.116071019 +0000 UTC m=+0.024973777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.216753213 +0000 UTC m=+0.125655931 container start 30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049 (image=quay.io/ceph/ceph:v18, name=cool_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.222705939 +0000 UTC m=+0.131608647 container attach 30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049 (image=quay.io/ceph/ceph:v18, name=cool_gauss, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 19:47:37 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:37 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 24 19:47:37 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 24 19:47:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 19:47:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:37 compute-0 awesome_liskov[77941]: Scheduled mon update...
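Note: orch apply mon stores a mon service spec with placement count 5 (config-key mgr/cephadm/spec.mon) and schedules reconciliation; on this single-host cluster only one mon can actually be placed until more hosts are enrolled. Sketch:
    ceph orch apply mon 5
    ceph orch ls mon    # compare spec count vs running daemons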
Nov 24 19:47:37 compute-0 systemd[1]: libpod-10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b.scope: Deactivated successfully.
Nov 24 19:47:37 compute-0 podman[77891]: 2025-11-24 19:47:37.311034838 +0000 UTC m=+0.783757481 container died 10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b (image=quay.io/ceph/ceph:v18, name=awesome_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c06b3463bc103f8f7a44d2761f5fe8d1ad9a54588578a76fc0a5462fd7cf32f-merged.mount: Deactivated successfully.
Nov 24 19:47:37 compute-0 podman[77891]: 2025-11-24 19:47:37.356300896 +0000 UTC m=+0.829023539 container remove 10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b (image=quay.io/ceph/ceph:v18, name=awesome_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 19:47:37 compute-0 systemd[1]: libpod-conmon-10c02694999c98fff30c28225f0713ca4fd09db50788331dc7cf17c8b188b72b.scope: Deactivated successfully.
Nov 24 19:47:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:37 compute-0 ceph-mon[75677]: Added host compute-0
Nov 24 19:47:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:47:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:37 compute-0 podman[78060]: 2025-11-24 19:47:37.448281912 +0000 UTC m=+0.065781528 container create 797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f (image=quay.io/ceph/ceph:v18, name=silly_torvalds, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:47:37 compute-0 systemd[1]: Started libpod-conmon-797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f.scope.
Nov 24 19:47:37 compute-0 cool_gauss[78043]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 24 19:47:37 compute-0 podman[78060]: 2025-11-24 19:47:37.415868631 +0000 UTC m=+0.033368297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.51448569 +0000 UTC m=+0.423388448 container died 30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049 (image=quay.io/ceph/ceph:v18, name=cool_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:37 compute-0 systemd[1]: libpod-30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049.scope: Deactivated successfully.
Nov 24 19:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d9297624c2d47a77425ed8f764e47df444a97fad5423e161f8650fd793124/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d9297624c2d47a77425ed8f764e47df444a97fad5423e161f8650fd793124/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d9297624c2d47a77425ed8f764e47df444a97fad5423e161f8650fd793124/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:37 compute-0 podman[78060]: 2025-11-24 19:47:37.546683845 +0000 UTC m=+0.164216372 container init 797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f (image=quay.io/ceph/ceph:v18, name=silly_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 24 19:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1632d1b1e66b6ffb5829c4f2bbce765472cb4104a97470265b71858694e510da-merged.mount: Deactivated successfully.
Nov 24 19:47:37 compute-0 podman[78060]: 2025-11-24 19:47:37.560130589 +0000 UTC m=+0.177630195 container start 797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f (image=quay.io/ceph/ceph:v18, name=silly_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:37 compute-0 podman[78060]: 2025-11-24 19:47:37.565095019 +0000 UTC m=+0.182594625 container attach 797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f (image=quay.io/ceph/ceph:v18, name=silly_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 19:47:37 compute-0 podman[78027]: 2025-11-24 19:47:37.592413967 +0000 UTC m=+0.501316715 container remove 30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049 (image=quay.io/ceph/ceph:v18, name=cool_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:37 compute-0 systemd[1]: libpod-conmon-30fec7fd477465decf9e4132d8b9d22c399383876dace497a51e81c17671c049.scope: Deactivated successfully.
Nov 24 19:47:37 compute-0 sudo[77961]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 24 19:47:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:37 compute-0 sudo[78091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:37 compute-0 sudo[78091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:37 compute-0 sudo[78091]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:37 compute-0 sudo[78116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:37 compute-0 sudo[78116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:37 compute-0 sudo[78116]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:37 compute-0 sudo[78142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:37 compute-0 sudo[78142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:37 compute-0 sudo[78142]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:38 compute-0 sudo[78185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 19:47:38 compute-0 sudo[78185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 24 19:47:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 19:47:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:38 compute-0 silly_torvalds[78076]: Scheduled mgr update...
Nov 24 19:47:38 compute-0 systemd[1]: libpod-797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f.scope: Deactivated successfully.
Nov 24 19:47:38 compute-0 podman[78060]: 2025-11-24 19:47:38.181788362 +0000 UTC m=+0.799287978 container died 797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f (image=quay.io/ceph/ceph:v18, name=silly_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 19:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17d9297624c2d47a77425ed8f764e47df444a97fad5423e161f8650fd793124-merged.mount: Deactivated successfully.
Nov 24 19:47:38 compute-0 podman[78060]: 2025-11-24 19:47:38.234162727 +0000 UTC m=+0.851662343 container remove 797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f (image=quay.io/ceph/ceph:v18, name=silly_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:47:38 compute-0 systemd[1]: libpod-conmon-797d965e3d345ad874d5f34de59dd76fdb864a435b4c9832d77533c8d8bc6f1f.scope: Deactivated successfully.
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:38 compute-0 podman[78238]: 2025-11-24 19:47:38.315671397 +0000 UTC m=+0.057468180 container create afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e (image=quay.io/ceph/ceph:v18, name=agitated_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:47:38 compute-0 sudo[78185]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:38 compute-0 systemd[1]: Started libpod-conmon-afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e.scope.
Nov 24 19:47:38 compute-0 podman[78238]: 2025-11-24 19:47:38.285971667 +0000 UTC m=+0.027768500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:38 compute-0 ceph-mon[75677]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:38 compute-0 ceph-mon[75677]: Saving service mon spec with placement count:5
Nov 24 19:47:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0373b42335f020dfb961d46353c61f5c3ccb89093daa2c746290fd2c2dbabd45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0373b42335f020dfb961d46353c61f5c3ccb89093daa2c746290fd2c2dbabd45/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0373b42335f020dfb961d46353c61f5c3ccb89093daa2c746290fd2c2dbabd45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:38 compute-0 sudo[78260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:38 compute-0 sudo[78260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:38 compute-0 podman[78238]: 2025-11-24 19:47:38.416958547 +0000 UTC m=+0.158755310 container init afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e (image=quay.io/ceph/ceph:v18, name=agitated_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:38 compute-0 sudo[78260]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:38 compute-0 podman[78238]: 2025-11-24 19:47:38.42661639 +0000 UTC m=+0.168413133 container start afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e (image=quay.io/ceph/ceph:v18, name=agitated_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 19:47:38 compute-0 podman[78238]: 2025-11-24 19:47:38.430112602 +0000 UTC m=+0.171909365 container attach afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e (image=quay.io/ceph/ceph:v18, name=agitated_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:38 compute-0 sudo[78289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:38 compute-0 sudo[78289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:38 compute-0 sudo[78289]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:38 compute-0 sudo[78315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:38 compute-0 sudo[78315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:38 compute-0 sudo[78315]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:38 compute-0 sudo[78340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:47:38 compute-0 sudo[78340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service crash spec with placement *
Nov 24 19:47:38 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 24 19:47:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 19:47:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:38 compute-0 agitated_napier[78264]: Scheduled crash update...
Nov 24 19:47:38 compute-0 systemd[1]: libpod-afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e.scope: Deactivated successfully.
Nov 24 19:47:38 compute-0 podman[78238]: 2025-11-24 19:47:38.973570442 +0000 UTC m=+0.715367205 container died afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e (image=quay.io/ceph/ceph:v18, name=agitated_napier, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0373b42335f020dfb961d46353c61f5c3ccb89093daa2c746290fd2c2dbabd45-merged.mount: Deactivated successfully.
Nov 24 19:47:39 compute-0 podman[78238]: 2025-11-24 19:47:39.023535544 +0000 UTC m=+0.765332297 container remove afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e (image=quay.io/ceph/ceph:v18, name=agitated_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:39 compute-0 systemd[1]: libpod-conmon-afbe300d3886cfd6ea6895ac9414bfb1d793d141817b37e7998b895c7dc15c5e.scope: Deactivated successfully.
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.111558625 +0000 UTC m=+0.060459719 container create d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea (image=quay.io/ceph/ceph:v18, name=nice_bhaskara, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 19:47:39 compute-0 systemd[1]: Started libpod-conmon-d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea.scope.
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.086830906 +0000 UTC m=+0.035732010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffba9cced0a46431d8bf7c2c1b3b39d077ef5ee8feb4417c9195f7511755217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffba9cced0a46431d8bf7c2c1b3b39d077ef5ee8feb4417c9195f7511755217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffba9cced0a46431d8bf7c2c1b3b39d077ef5ee8feb4417c9195f7511755217/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.228892246 +0000 UTC m=+0.177793350 container init d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea (image=quay.io/ceph/ceph:v18, name=nice_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.243482439 +0000 UTC m=+0.192383533 container start d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea (image=quay.io/ceph/ceph:v18, name=nice_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.248896011 +0000 UTC m=+0.197797115 container attach d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea (image=quay.io/ceph/ceph:v18, name=nice_bhaskara, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:39 compute-0 podman[78480]: 2025-11-24 19:47:39.253236406 +0000 UTC m=+0.097957424 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:47:39 compute-0 ceph-mon[75677]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:39 compute-0 ceph-mon[75677]: Saving service mgr spec with placement count:2
Nov 24 19:47:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:39 compute-0 podman[78480]: 2025-11-24 19:47:39.550167312 +0000 UTC m=+0.394888360 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 19:47:39 compute-0 sudo[78340]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 24 19:47:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3341424493' entity='client.admin' 
Nov 24 19:47:39 compute-0 systemd[1]: libpod-d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea.scope: Deactivated successfully.
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.816049753 +0000 UTC m=+0.764950877 container died d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea (image=quay.io/ceph/ceph:v18, name=nice_bhaskara, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:39 compute-0 sudo[78560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:39 compute-0 sudo[78560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:39 compute-0 sudo[78560]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ffba9cced0a46431d8bf7c2c1b3b39d077ef5ee8feb4417c9195f7511755217-merged.mount: Deactivated successfully.
Nov 24 19:47:39 compute-0 podman[78449]: 2025-11-24 19:47:39.878798311 +0000 UTC m=+0.827699405 container remove d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea (image=quay.io/ceph/ceph:v18, name=nice_bhaskara, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 19:47:39 compute-0 systemd[1]: libpod-conmon-d5c690e308ce012e694abd377b5b1ac26b733d29625af36f6d18de946a3971ea.scope: Deactivated successfully.
Nov 24 19:47:39 compute-0 sudo[78594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:39 compute-0 sudo[78594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:39 compute-0 sudo[78594]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:39 compute-0 podman[78611]: 2025-11-24 19:47:39.97246524 +0000 UTC m=+0.064099864 container create ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd (image=quay.io/ceph/ceph:v18, name=stupefied_lehmann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 19:47:40 compute-0 systemd[1]: Started libpod-conmon-ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd.scope.
Nov 24 19:47:40 compute-0 sudo[78635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:40 compute-0 sudo[78635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:40 compute-0 sudo[78635]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:40 compute-0 podman[78611]: 2025-11-24 19:47:39.950691999 +0000 UTC m=+0.042326633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ed0f92ca7a4eae9bc9cfd5365168bdc85261e1342d580be1ff37ff4d5b29e9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ed0f92ca7a4eae9bc9cfd5365168bdc85261e1342d580be1ff37ff4d5b29e9d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ed0f92ca7a4eae9bc9cfd5365168bdc85261e1342d580be1ff37ff4d5b29e9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:40 compute-0 podman[78611]: 2025-11-24 19:47:40.082106939 +0000 UTC m=+0.173741603 container init ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd (image=quay.io/ceph/ceph:v18, name=stupefied_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:47:40 compute-0 podman[78611]: 2025-11-24 19:47:40.095525191 +0000 UTC m=+0.187159815 container start ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd (image=quay.io/ceph/ceph:v18, name=stupefied_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:40 compute-0 podman[78611]: 2025-11-24 19:47:40.099782783 +0000 UTC m=+0.191417417 container attach ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd (image=quay.io/ceph/ceph:v18, name=stupefied_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:47:40 compute-0 sudo[78667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:47:40 compute-0 sudo[78667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:40 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:40 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78704 (sysctl)
Nov 24 19:47:40 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 19:47:40 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 19:47:40 compute-0 ceph-mon[75677]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:40 compute-0 ceph-mon[75677]: Saving service crash spec with placement *
Nov 24 19:47:40 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:40 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3341424493' entity='client.admin' 
Nov 24 19:47:40 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 24 19:47:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:40 compute-0 systemd[1]: libpod-ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd.scope: Deactivated successfully.
Nov 24 19:47:40 compute-0 sudo[78667]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:40 compute-0 podman[78747]: 2025-11-24 19:47:40.756612409 +0000 UTC m=+0.033916331 container died ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd (image=quay.io/ceph/ceph:v18, name=stupefied_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ed0f92ca7a4eae9bc9cfd5365168bdc85261e1342d580be1ff37ff4d5b29e9d-merged.mount: Deactivated successfully.
Nov 24 19:47:40 compute-0 podman[78747]: 2025-11-24 19:47:40.806942001 +0000 UTC m=+0.084245883 container remove ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd (image=quay.io/ceph/ceph:v18, name=stupefied_lehmann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 19:47:40 compute-0 systemd[1]: libpod-conmon-ee52fc22c78fe082c31d8d81ba068c8869f34cfd6e9cb1f5c825bea1931a84bd.scope: Deactivated successfully.
Nov 24 19:47:40 compute-0 sudo[78762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:40 compute-0 sudo[78762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:40 compute-0 sudo[78762]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:40 compute-0 podman[78769]: 2025-11-24 19:47:40.896141013 +0000 UTC m=+0.054280776 container create 67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e (image=quay.io/ceph/ceph:v18, name=bold_buck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 19:47:40 compute-0 systemd[1]: Started libpod-conmon-67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e.scope.
Nov 24 19:47:40 compute-0 podman[78769]: 2025-11-24 19:47:40.877636127 +0000 UTC m=+0.035775920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:40 compute-0 sudo[78801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:40 compute-0 sudo[78801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b31a746b318fbc445876f680c9bdca2b35ed24c5f9dcdf689f31d6f77404306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b31a746b318fbc445876f680c9bdca2b35ed24c5f9dcdf689f31d6f77404306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b31a746b318fbc445876f680c9bdca2b35ed24c5f9dcdf689f31d6f77404306/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:40 compute-0 sudo[78801]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:41 compute-0 podman[78769]: 2025-11-24 19:47:41.019334348 +0000 UTC m=+0.177474141 container init 67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e (image=quay.io/ceph/ceph:v18, name=bold_buck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 19:47:41 compute-0 podman[78769]: 2025-11-24 19:47:41.031038565 +0000 UTC m=+0.189178348 container start 67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e (image=quay.io/ceph/ceph:v18, name=bold_buck, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:47:41 compute-0 podman[78769]: 2025-11-24 19:47:41.035363449 +0000 UTC m=+0.193503242 container attach 67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e (image=quay.io/ceph/ceph:v18, name=bold_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 19:47:41 compute-0 sudo[78831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:41 compute-0 sudo[78831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:41 compute-0 sudo[78831]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:41 compute-0 sudo[78858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 24 19:47:41 compute-0 sudo[78858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:41 compute-0 sudo[78858]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:41 compute-0 sudo[78920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:41 compute-0 sudo[78920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:41 compute-0 sudo[78920]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:41 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 19:47:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:41 compute-0 ceph-mgr[75975]: [cephadm INFO root] Added label _admin to host compute-0
Nov 24 19:47:41 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 24 19:47:41 compute-0 bold_buck[78826]: Added label _admin to host compute-0
Nov 24 19:47:41 compute-0 sudo[78945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:41 compute-0 sudo[78945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:41 compute-0 systemd[1]: libpod-67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e.scope: Deactivated successfully.
Nov 24 19:47:41 compute-0 podman[78769]: 2025-11-24 19:47:41.592340984 +0000 UTC m=+0.750480767 container died 67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e (image=quay.io/ceph/ceph:v18, name=bold_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 19:47:41 compute-0 sudo[78945]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b31a746b318fbc445876f680c9bdca2b35ed24c5f9dcdf689f31d6f77404306-merged.mount: Deactivated successfully.
Nov 24 19:47:41 compute-0 podman[78769]: 2025-11-24 19:47:41.648758245 +0000 UTC m=+0.806898038 container remove 67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e (image=quay.io/ceph/ceph:v18, name=bold_buck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:47:41 compute-0 systemd[1]: libpod-conmon-67f76d90c9f3ccb6d4ada95b533a581b5d55be7dbebf21e90be9cbe03c130e0e.scope: Deactivated successfully.
Nov 24 19:47:41 compute-0 sudo[78973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:41 compute-0 sudo[78973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:41 compute-0 sudo[78973]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:41 compute-0 ceph-mon[75677]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:41 compute-0 podman[79007]: 2025-11-24 19:47:41.71217736 +0000 UTC m=+0.041790328 container create ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3 (image=quay.io/ceph/ceph:v18, name=nice_napier, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:47:41 compute-0 systemd[1]: Started libpod-conmon-ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3.scope.
Nov 24 19:47:41 compute-0 sudo[79020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- inventory --format=json-pretty --filter-for-batch
Nov 24 19:47:41 compute-0 sudo[79020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:41 compute-0 podman[79007]: 2025-11-24 19:47:41.694409584 +0000 UTC m=+0.024022532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22702b6754522960c2971b8a19b56c9ecc5d70c1cc61f87a04fb45ffe9ebb75a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22702b6754522960c2971b8a19b56c9ecc5d70c1cc61f87a04fb45ffe9ebb75a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22702b6754522960c2971b8a19b56c9ecc5d70c1cc61f87a04fb45ffe9ebb75a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:41 compute-0 podman[79007]: 2025-11-24 19:47:41.817558667 +0000 UTC m=+0.147171665 container init ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3 (image=quay.io/ceph/ceph:v18, name=nice_napier, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 19:47:41 compute-0 podman[79007]: 2025-11-24 19:47:41.825579128 +0000 UTC m=+0.155192066 container start ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3 (image=quay.io/ceph/ceph:v18, name=nice_napier, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 19:47:41 compute-0 podman[79007]: 2025-11-24 19:47:41.833498136 +0000 UTC m=+0.163111114 container attach ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3 (image=quay.io/ceph/ceph:v18, name=nice_napier, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.185817036 +0000 UTC m=+0.073875440 container create bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:47:42 compute-0 systemd[1]: Started libpod-conmon-bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909.scope.
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.154850513 +0000 UTC m=+0.042908967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:47:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:42 compute-0 ceph-mgr[75975]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.280167434 +0000 UTC m=+0.168225838 container init bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.289271763 +0000 UTC m=+0.177330167 container start bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:47:42 compute-0 pensive_hermann[79130]: 167 167
Nov 24 19:47:42 compute-0 systemd[1]: libpod-bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909.scope: Deactivated successfully.
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.296659097 +0000 UTC m=+0.184717501 container attach bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.297090958 +0000 UTC m=+0.185149372 container died bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2ebf55f4b1839ffb00d8b20a0e18c2e3caccf4d383cbb5f8b634ba95f5b0c1c-merged.mount: Deactivated successfully.
Nov 24 19:47:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 24 19:47:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3510363987' entity='client.admin' 
Nov 24 19:47:42 compute-0 podman[79094]: 2025-11-24 19:47:42.346525427 +0000 UTC m=+0.234583801 container remove bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:42 compute-0 systemd[1]: libpod-ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3.scope: Deactivated successfully.
Nov 24 19:47:42 compute-0 podman[79007]: 2025-11-24 19:47:42.367256751 +0000 UTC m=+0.696869709 container died ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3 (image=quay.io/ceph/ceph:v18, name=nice_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:47:42 compute-0 systemd[1]: libpod-conmon-bfadc8163d748058db44022239a8dbe0df5d1d4b7761b3f84b01f4c33b41a909.scope: Deactivated successfully.
Nov 24 19:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-22702b6754522960c2971b8a19b56c9ecc5d70c1cc61f87a04fb45ffe9ebb75a-merged.mount: Deactivated successfully.
Nov 24 19:47:42 compute-0 podman[79007]: 2025-11-24 19:47:42.426635399 +0000 UTC m=+0.756248367 container remove ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3 (image=quay.io/ceph/ceph:v18, name=nice_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:42 compute-0 systemd[1]: libpod-conmon-ac58b5884be28254f9f7c5275b6adddaef6f150e505ecfab689498824b5ce1c3.scope: Deactivated successfully.
Nov 24 19:47:42 compute-0 podman[79163]: 2025-11-24 19:47:42.519906559 +0000 UTC m=+0.064612357 container create b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3 (image=quay.io/ceph/ceph:v18, name=great_cray, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:47:42 compute-0 systemd[1]: Started libpod-conmon-b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3.scope.
Nov 24 19:47:42 compute-0 podman[79163]: 2025-11-24 19:47:42.492847368 +0000 UTC m=+0.037553216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a1a6bcee3eab1d55f45bad3ed9a3b2b4b17e9e7f51436674846289608248da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a1a6bcee3eab1d55f45bad3ed9a3b2b4b17e9e7f51436674846289608248da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a1a6bcee3eab1d55f45bad3ed9a3b2b4b17e9e7f51436674846289608248da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:42 compute-0 podman[79163]: 2025-11-24 19:47:42.626493708 +0000 UTC m=+0.171199546 container init b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3 (image=quay.io/ceph/ceph:v18, name=great_cray, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:42 compute-0 podman[79163]: 2025-11-24 19:47:42.635903525 +0000 UTC m=+0.180609323 container start b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3 (image=quay.io/ceph/ceph:v18, name=great_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:42 compute-0 podman[79163]: 2025-11-24 19:47:42.641008309 +0000 UTC m=+0.185714157 container attach b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3 (image=quay.io/ceph/ceph:v18, name=great_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 24 19:47:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 24 19:47:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/448016234' entity='client.admin' 
Nov 24 19:47:43 compute-0 great_cray[79179]: set mgr/dashboard/cluster/status
Nov 24 19:47:43 compute-0 systemd[1]: libpod-b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3.scope: Deactivated successfully.
Nov 24 19:47:43 compute-0 podman[79163]: 2025-11-24 19:47:43.299711614 +0000 UTC m=+0.844417432 container died b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3 (image=quay.io/ceph/ceph:v18, name=great_cray, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a1a6bcee3eab1d55f45bad3ed9a3b2b4b17e9e7f51436674846289608248da-merged.mount: Deactivated successfully.
Nov 24 19:47:43 compute-0 ceph-mon[75677]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:43 compute-0 ceph-mon[75677]: Added label _admin to host compute-0
Nov 24 19:47:43 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3510363987' entity='client.admin' 
Nov 24 19:47:43 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/448016234' entity='client.admin' 
Nov 24 19:47:43 compute-0 podman[79163]: 2025-11-24 19:47:43.352523921 +0000 UTC m=+0.897229679 container remove b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3 (image=quay.io/ceph/ceph:v18, name=great_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:43 compute-0 systemd[1]: libpod-conmon-b7aa3d63bce031743ef74cf21d221b3c37625da1127785f0705093f433ac1bf3.scope: Deactivated successfully.
Nov 24 19:47:43 compute-0 sudo[74644]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:43 compute-0 podman[79224]: 2025-11-24 19:47:43.566208372 +0000 UTC m=+0.066308213 container create 3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:47:43 compute-0 systemd[1]: Started libpod-conmon-3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55.scope.
Nov 24 19:47:43 compute-0 podman[79224]: 2025-11-24 19:47:43.538559976 +0000 UTC m=+0.038659857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:47:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02907c886a87f2cb10febf319c93c27e2808088367c26af657f7ad1bd8f4a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02907c886a87f2cb10febf319c93c27e2808088367c26af657f7ad1bd8f4a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02907c886a87f2cb10febf319c93c27e2808088367c26af657f7ad1bd8f4a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae02907c886a87f2cb10febf319c93c27e2808088367c26af657f7ad1bd8f4a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:43 compute-0 podman[79224]: 2025-11-24 19:47:43.681774426 +0000 UTC m=+0.181874327 container init 3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 19:47:43 compute-0 podman[79224]: 2025-11-24 19:47:43.695356083 +0000 UTC m=+0.195455924 container start 3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:43 compute-0 podman[79224]: 2025-11-24 19:47:43.699126542 +0000 UTC m=+0.199226413 container attach 3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:43 compute-0 sudo[79268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqadlwhhetjeynaxsuxeqnnfkjmpwmxk ; /usr/bin/python3'
Nov 24 19:47:43 compute-0 sudo[79268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:43 compute-0 python3[79270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:44 compute-0 podman[79271]: 2025-11-24 19:47:44.051084923 +0000 UTC m=+0.069729382 container create bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf (image=quay.io/ceph/ceph:v18, name=affectionate_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:44 compute-0 systemd[1]: Started libpod-conmon-bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf.scope.
Nov 24 19:47:44 compute-0 podman[79271]: 2025-11-24 19:47:44.021756084 +0000 UTC m=+0.040400563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540bacf7eec2cf0ed651ea1d5b7671e6c2aeb832cbf88519c5f50052bcec5e68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540bacf7eec2cf0ed651ea1d5b7671e6c2aeb832cbf88519c5f50052bcec5e68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:44 compute-0 podman[79271]: 2025-11-24 19:47:44.160105476 +0000 UTC m=+0.178749955 container init bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf (image=quay.io/ceph/ceph:v18, name=affectionate_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 19:47:44 compute-0 podman[79271]: 2025-11-24 19:47:44.170797897 +0000 UTC m=+0.189442366 container start bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf (image=quay.io/ceph/ceph:v18, name=affectionate_cohen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 19:47:44 compute-0 podman[79271]: 2025-11-24 19:47:44.174712049 +0000 UTC m=+0.193356568 container attach bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf (image=quay.io/ceph/ceph:v18, name=affectionate_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 19:47:44 compute-0 ceph-mgr[75975]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 24 19:47:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:44 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 19:47:44 compute-0 ceph-mon[75677]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 19:47:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 24 19:47:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2489509431' entity='client.admin' 
Nov 24 19:47:44 compute-0 systemd[1]: libpod-bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf.scope: Deactivated successfully.
Nov 24 19:47:44 compute-0 podman[79326]: 2025-11-24 19:47:44.772398383 +0000 UTC m=+0.032906075 container died bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf (image=quay.io/ceph/ceph:v18, name=affectionate_cohen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-540bacf7eec2cf0ed651ea1d5b7671e6c2aeb832cbf88519c5f50052bcec5e68-merged.mount: Deactivated successfully.
Nov 24 19:47:44 compute-0 podman[79326]: 2025-11-24 19:47:44.826049662 +0000 UTC m=+0.086557334 container remove bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf (image=quay.io/ceph/ceph:v18, name=affectionate_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:44 compute-0 systemd[1]: libpod-conmon-bd8b9d1b1a7d64d79c82c1d4a4df526590ec276cdc0d96faed426734436dbccf.scope: Deactivated successfully.
Nov 24 19:47:44 compute-0 sudo[79268]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 distracted_napier[79240]: [
Nov 24 19:47:45 compute-0 distracted_napier[79240]:     {
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "available": false,
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "ceph_device": false,
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "lsm_data": {},
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "lvs": [],
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "path": "/dev/sr0",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "rejected_reasons": [
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "Insufficient space (<5GB)",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "Has a FileSystem"
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         ],
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         "sys_api": {
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "actuators": null,
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "device_nodes": "sr0",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "devname": "sr0",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "human_readable_size": "482.00 KB",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "id_bus": "ata",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "model": "QEMU DVD-ROM",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "nr_requests": "2",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "parent": "/dev/sr0",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "partitions": {},
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "path": "/dev/sr0",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "removable": "1",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "rev": "2.5+",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "ro": "0",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "rotational": "1",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "sas_address": "",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "sas_device_handle": "",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "scheduler_mode": "mq-deadline",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "sectors": 0,
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "sectorsize": "2048",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "size": 493568.0,
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "support_discard": "2048",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "type": "disk",
Nov 24 19:47:45 compute-0 distracted_napier[79240]:             "vendor": "QEMU"
Nov 24 19:47:45 compute-0 distracted_napier[79240]:         }
Nov 24 19:47:45 compute-0 distracted_napier[79240]:     }
Nov 24 19:47:45 compute-0 distracted_napier[79240]: ]
Nov 24 19:47:45 compute-0 systemd[1]: libpod-3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55.scope: Deactivated successfully.
Nov 24 19:47:45 compute-0 systemd[1]: libpod-3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55.scope: Consumed 1.574s CPU time.
Nov 24 19:47:45 compute-0 podman[79224]: 2025-11-24 19:47:45.267464112 +0000 UTC m=+1.767563993 container died 3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae02907c886a87f2cb10febf319c93c27e2808088367c26af657f7ad1bd8f4a3-merged.mount: Deactivated successfully.
Nov 24 19:47:45 compute-0 podman[79224]: 2025-11-24 19:47:45.328739411 +0000 UTC m=+1.828839222 container remove 3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:47:45 compute-0 systemd[1]: libpod-conmon-3ab191f50d334de348a412d2124b5b292b189ea6270783b3105f19d25954cd55.scope: Deactivated successfully.
Nov 24 19:47:45 compute-0 ceph-mon[75677]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:45 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2489509431' entity='client.admin' 
Nov 24 19:47:45 compute-0 sudo[79020]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:47:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:47:45 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 19:47:45 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 19:47:45 compute-0 sudo[80958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:45 compute-0 sudo[80958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:45 compute-0 sudo[80958]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 sudo[80983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 19:47:45 compute-0 sudo[80983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:45 compute-0 sudo[80983]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 sudo[81022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:45 compute-0 sudo[81022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:45 compute-0 sudo[81022]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 sudo[81080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph
Nov 24 19:47:45 compute-0 sudo[81080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:45 compute-0 sudo[81080]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 sudo[81129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lssyrgeycmsufgurbtaaoohuaolgqrud ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764013665.2275636-37624-78280700372480/async_wrapper.py j108131258578 30 /home/zuul/.ansible/tmp/ansible-tmp-1764013665.2275636-37624-78280700372480/AnsiballZ_command.py _'
Nov 24 19:47:45 compute-0 sudo[81129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:45 compute-0 sudo[81133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:45 compute-0 sudo[81133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:45 compute-0 sudo[81133]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 sudo[81158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.conf.new
Nov 24 19:47:45 compute-0 ansible-async_wrapper.py[81132]: Invoked with j108131258578 30 /home/zuul/.ansible/tmp/ansible-tmp-1764013665.2275636-37624-78280700372480/AnsiballZ_command.py _
Nov 24 19:47:45 compute-0 sudo[81158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:45 compute-0 sudo[81158]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 ansible-async_wrapper.py[81185]: Starting module and watcher
Nov 24 19:47:45 compute-0 ansible-async_wrapper.py[81185]: Start watching 81186 (30)
Nov 24 19:47:45 compute-0 ansible-async_wrapper.py[81186]: Start module (81186)
Nov 24 19:47:45 compute-0 ansible-async_wrapper.py[81132]: Return async_wrapper task started.
Nov 24 19:47:45 compute-0 sudo[81129]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:45 compute-0 sudo[81187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:45 compute-0 sudo[81187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81187]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 sudo[81213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:46 compute-0 sudo[81213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81213]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 python3[81188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:46 compute-0 sudo[81238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:46 compute-0 sudo[81238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81238]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 podman[81239]: 2025-11-24 19:47:46.188426284 +0000 UTC m=+0.070712367 container create dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a (image=quay.io/ceph/ceph:v18, name=angry_diffie, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:46 compute-0 systemd[1]: Started libpod-conmon-dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a.scope.
Nov 24 19:47:46 compute-0 podman[81239]: 2025-11-24 19:47:46.159720881 +0000 UTC m=+0.042007014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:46 compute-0 sudo[81276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.conf.new
Nov 24 19:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d787ecc2743fa42bc19a06b941862249959def0a2e8d79da6bdb235b11f4cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d787ecc2743fa42bc19a06b941862249959def0a2e8d79da6bdb235b11f4cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:46 compute-0 sudo[81276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81276]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 podman[81239]: 2025-11-24 19:47:46.294549941 +0000 UTC m=+0.176836024 container init dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a (image=quay.io/ceph/ceph:v18, name=angry_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 19:47:46 compute-0 podman[81239]: 2025-11-24 19:47:46.306012672 +0000 UTC m=+0.188298735 container start dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a (image=quay.io/ceph/ceph:v18, name=angry_diffie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 19:47:46 compute-0 podman[81239]: 2025-11-24 19:47:46.309149324 +0000 UTC m=+0.191435387 container attach dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a (image=quay.io/ceph/ceph:v18, name=angry_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:47:46 compute-0 ceph-mon[75677]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 19:47:46 compute-0 sudo[81330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:46 compute-0 sudo[81330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81330]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 sudo[81355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.conf.new
Nov 24 19:47:46 compute-0 sudo[81355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81355]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 sudo[81380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:46 compute-0 sudo[81380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81380]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:46 compute-0 sudo[81414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.conf.new
Nov 24 19:47:46 compute-0 sudo[81414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81414]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 sudo[81449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:46 compute-0 sudo[81449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 sudo[81449]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 sudo[81474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 19:47:46 compute-0 sudo[81474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:46 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:47:46 compute-0 angry_diffie[81301]: 
Nov 24 19:47:46 compute-0 angry_diffie[81301]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 19:47:46 compute-0 sudo[81474]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:46 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf
Nov 24 19:47:46 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf
Nov 24 19:47:46 compute-0 systemd[1]: libpod-dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a.scope: Deactivated successfully.
Nov 24 19:47:46 compute-0 podman[81239]: 2025-11-24 19:47:46.956252425 +0000 UTC m=+0.838538508 container died dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a (image=quay.io/ceph/ceph:v18, name=angry_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-57d787ecc2743fa42bc19a06b941862249959def0a2e8d79da6bdb235b11f4cb-merged.mount: Deactivated successfully.
Nov 24 19:47:47 compute-0 podman[81239]: 2025-11-24 19:47:47.022763392 +0000 UTC m=+0.905049485 container remove dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a (image=quay.io/ceph/ceph:v18, name=angry_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:47 compute-0 sudo[81501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:47 compute-0 sudo[81501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81501]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 systemd[1]: libpod-conmon-dbb095c7b456773c520188213b3cf6b884f5edd38b2782685f3afa8e3342088a.scope: Deactivated successfully.
Nov 24 19:47:47 compute-0 ansible-async_wrapper.py[81186]: Module complete (81186)
Nov 24 19:47:47 compute-0 sudo[81562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config
Nov 24 19:47:47 compute-0 sudo[81562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81562]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:47 compute-0 sudo[81587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81587]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cibmfjvbeuupndatiohizpkdmzcqkpvc ; /usr/bin/python3'
Nov 24 19:47:47 compute-0 sudo[81644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:47 compute-0 sudo[81625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config
Nov 24 19:47:47 compute-0 sudo[81625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81625]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 ceph-mon[75677]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:47 compute-0 sudo[81663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:47 compute-0 sudo[81663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81663]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 python3[81660]: ansible-ansible.legacy.async_status Invoked with jid=j108131258578.81132 mode=status _async_dir=/root/.ansible_async
Nov 24 19:47:47 compute-0 sudo[81644]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf.new
Nov 24 19:47:47 compute-0 sudo[81688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81688]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:47 compute-0 sudo[81714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81714]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sodcdstadabopyvaxmhicokgrxxpqswt ; /usr/bin/python3'
Nov 24 19:47:47 compute-0 sudo[81801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:47 compute-0 sudo[81768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:47 compute-0 sudo[81768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81768]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:47 compute-0 sudo[81812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81812]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 python3[81809]: ansible-ansible.legacy.async_status Invoked with jid=j108131258578.81132 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 19:47:47 compute-0 sudo[81801]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:47 compute-0 sudo[81837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf.new
Nov 24 19:47:47 compute-0 sudo[81837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:47 compute-0 sudo[81837]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[81885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:48 compute-0 sudo[81885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[81885]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[81910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf.new
Nov 24 19:47:48 compute-0 sudo[81910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[81910]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[81981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epiefjhvujwchesevaalkzcfzpxvmsme ; /usr/bin/python3'
Nov 24 19:47:48 compute-0 sudo[81937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:48 compute-0 sudo[81981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:48 compute-0 sudo[81937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[81937]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:48 compute-0 sudo[81986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf.new
Nov 24 19:47:48 compute-0 sudo[81986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[81986]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 python3[81984]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 19:47:48 compute-0 sudo[82011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:48 compute-0 sudo[82011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82011]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 ceph-mon[75677]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:47:48 compute-0 ceph-mon[75677]: Updating compute-0:/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf
Nov 24 19:47:48 compute-0 sudo[81981]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[82038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf.new /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.conf
Nov 24 19:47:48 compute-0 sudo[82038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82038]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 19:47:48 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 19:47:48 compute-0 sudo[82063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:48 compute-0 sudo[82063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82063]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[82088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 19:47:48 compute-0 sudo[82088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82088]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[82113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:48 compute-0 sudo[82113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82113]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 sudo[82160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwxxularmwbleitzirbwtotxbepzkovw ; /usr/bin/python3'
Nov 24 19:47:48 compute-0 sudo[82160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:48 compute-0 sudo[82162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph
Nov 24 19:47:48 compute-0 sudo[82162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82162]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:48 compute-0 python3[82167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:48 compute-0 sudo[82189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:48 compute-0 sudo[82189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:48 compute-0 sudo[82189]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 sudo[82215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.client.admin.keyring.new
Nov 24 19:47:49 compute-0 sudo[82215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:49.010271048 +0000 UTC m=+0.058303751 container create aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3 (image=quay.io/ceph/ceph:v18, name=distracted_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:47:49 compute-0 sudo[82215]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 systemd[1]: Started libpod-conmon-aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3.scope.
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:48.990978592 +0000 UTC m=+0.039011275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e87048f6e5778855c9041f2e100636c631abc2192f202e51fc867339df21885/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e87048f6e5778855c9041f2e100636c631abc2192f202e51fc867339df21885/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e87048f6e5778855c9041f2e100636c631abc2192f202e51fc867339df21885/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:49 compute-0 sudo[82252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:49 compute-0 sudo[82252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82252]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:49.115994094 +0000 UTC m=+0.164026777 container init aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3 (image=quay.io/ceph/ceph:v18, name=distracted_torvalds, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:49.127736552 +0000 UTC m=+0.175769255 container start aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3 (image=quay.io/ceph/ceph:v18, name=distracted_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:49.132235561 +0000 UTC m=+0.180268234 container attach aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3 (image=quay.io/ceph/ceph:v18, name=distracted_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:49 compute-0 sudo[82284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:49 compute-0 sudo[82284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82284]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 sudo[82309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:49 compute-0 sudo[82309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82309]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 sudo[82334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.client.admin.keyring.new
Nov 24 19:47:49 compute-0 sudo[82334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82334]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 ceph-mon[75677]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:49 compute-0 sudo[82401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:49 compute-0 sudo[82401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82401]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 sudo[82426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.client.admin.keyring.new
Nov 24 19:47:49 compute-0 sudo[82426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82426]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:47:49 compute-0 distracted_torvalds[82272]: 
Nov 24 19:47:49 compute-0 distracted_torvalds[82272]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 19:47:49 compute-0 systemd[1]: libpod-aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3.scope: Deactivated successfully.
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:49.701360264 +0000 UTC m=+0.749392957 container died aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3 (image=quay.io/ceph/ceph:v18, name=distracted_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 19:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e87048f6e5778855c9041f2e100636c631abc2192f202e51fc867339df21885-merged.mount: Deactivated successfully.
Nov 24 19:47:49 compute-0 sudo[82451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:49 compute-0 sudo[82451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 podman[82213]: 2025-11-24 19:47:49.760930728 +0000 UTC m=+0.808963401 container remove aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3 (image=quay.io/ceph/ceph:v18, name=distracted_torvalds, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:47:49 compute-0 sudo[82451]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 systemd[1]: libpod-conmon-aa75276fc6480f7a21dbe5f4c76ef8998372a311c87b1bee86a444fe3ab31cd3.scope: Deactivated successfully.
Nov 24 19:47:49 compute-0 sudo[82160]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 sudo[82492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.client.admin.keyring.new
Nov 24 19:47:49 compute-0 sudo[82492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82492]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:49 compute-0 sudo[82517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:49 compute-0 sudo[82517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:49 compute-0 sudo[82517]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 19:47:50 compute-0 sudo[82542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82542]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring
Nov 24 19:47:50 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring
Nov 24 19:47:50 compute-0 sudo[82591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zokxifocsrwezpebobrohctedopnqakv ; /usr/bin/python3'
Nov 24 19:47:50 compute-0 sudo[82591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:50 compute-0 sudo[82590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:50 compute-0 sudo[82590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82590]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config
Nov 24 19:47:50 compute-0 sudo[82618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 python3[82607]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:50 compute-0 sudo[82618]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:50 compute-0 podman[82643]: 2025-11-24 19:47:50.315651914 +0000 UTC m=+0.051595056 container create bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007 (image=quay.io/ceph/ceph:v18, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 19:47:50 compute-0 systemd[1]: Started libpod-conmon-bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007.scope.
Nov 24 19:47:50 compute-0 sudo[82644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:50 compute-0 sudo[82644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82644]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:50 compute-0 podman[82643]: 2025-11-24 19:47:50.293701158 +0000 UTC m=+0.029644370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916c2e5fbd92bf49c9f63dc493365110d84ea8ae5afb936af5aadd633fdc9343/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916c2e5fbd92bf49c9f63dc493365110d84ea8ae5afb936af5aadd633fdc9343/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916c2e5fbd92bf49c9f63dc493365110d84ea8ae5afb936af5aadd633fdc9343/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:50 compute-0 podman[82643]: 2025-11-24 19:47:50.419431739 +0000 UTC m=+0.155374901 container init bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007 (image=quay.io/ceph/ceph:v18, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:47:50 compute-0 podman[82643]: 2025-11-24 19:47:50.431509186 +0000 UTC m=+0.167452358 container start bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007 (image=quay.io/ceph/ceph:v18, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:50 compute-0 ceph-mon[75677]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 19:47:50 compute-0 podman[82643]: 2025-11-24 19:47:50.436620171 +0000 UTC m=+0.172563343 container attach bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007 (image=quay.io/ceph/ceph:v18, name=pedantic_dirac, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:47:50 compute-0 sudo[82687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config
Nov 24 19:47:50 compute-0 sudo[82687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82687]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:50 compute-0 sudo[82713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82713]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring.new
Nov 24 19:47:50 compute-0 sudo[82738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82738]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:50 compute-0 sudo[82763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82763]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:50 compute-0 sudo[82790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82790]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 sudo[82832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:50 compute-0 sudo[82832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:50 compute-0 sudo[82832]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:50 compute-0 ansible-async_wrapper.py[81185]: Done in kid B.
Nov 24 19:47:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 24 19:47:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4176660969' entity='client.admin' 
Nov 24 19:47:50 compute-0 systemd[1]: libpod-bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007.scope: Deactivated successfully.
Nov 24 19:47:50 compute-0 podman[82643]: 2025-11-24 19:47:50.977331258 +0000 UTC m=+0.713274430 container died bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007 (image=quay.io/ceph/ceph:v18, name=pedantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:51 compute-0 sudo[82857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring.new
Nov 24 19:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-916c2e5fbd92bf49c9f63dc493365110d84ea8ae5afb936af5aadd633fdc9343-merged.mount: Deactivated successfully.
Nov 24 19:47:51 compute-0 sudo[82857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[82857]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 podman[82643]: 2025-11-24 19:47:51.04213329 +0000 UTC m=+0.778076452 container remove bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007 (image=quay.io/ceph/ceph:v18, name=pedantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 19:47:51 compute-0 systemd[1]: libpod-conmon-bdf4bbf3274433bfa9fdddb8cb6ed1db4ed715dd2353c31dc08d3db03febe007.scope: Deactivated successfully.
Nov 24 19:47:51 compute-0 sudo[82591]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 sudo[82919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:51 compute-0 sudo[82919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[82919]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 sudo[82981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmuymcbqztllyozmznmceiiniptjkejt ; /usr/bin/python3'
Nov 24 19:47:51 compute-0 sudo[82981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:51 compute-0 sudo[82952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring.new
Nov 24 19:47:51 compute-0 sudo[82952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[82952]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 sudo[82995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:51 compute-0 sudo[82995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[82995]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 python3[82992]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:51 compute-0 sudo[83020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring.new
Nov 24 19:47:51 compute-0 sudo[83020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[83020]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 ceph-mon[75677]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:47:51 compute-0 ceph-mon[75677]: Updating compute-0:/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring
Nov 24 19:47:51 compute-0 ceph-mon[75677]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:51 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4176660969' entity='client.admin' 
Nov 24 19:47:51 compute-0 podman[83044]: 2025-11-24 19:47:51.502744194 +0000 UTC m=+0.066208889 container create a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c (image=quay.io/ceph/ceph:v18, name=sad_chaplygin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:47:51 compute-0 sudo[83049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:51 compute-0 sudo[83049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[83049]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 systemd[1]: Started libpod-conmon-a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c.scope.
Nov 24 19:47:51 compute-0 podman[83044]: 2025-11-24 19:47:51.475242121 +0000 UTC m=+0.038706796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f91f8fa470b982a14b5f755995818cc6f944bb2de85625d9972802871b503/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f91f8fa470b982a14b5f755995818cc6f944bb2de85625d9972802871b503/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f91f8fa470b982a14b5f755995818cc6f944bb2de85625d9972802871b503/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:51 compute-0 podman[83044]: 2025-11-24 19:47:51.603241962 +0000 UTC m=+0.166706677 container init a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c (image=quay.io/ceph/ceph:v18, name=sad_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 19:47:51 compute-0 podman[83044]: 2025-11-24 19:47:51.614156919 +0000 UTC m=+0.177621614 container start a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c (image=quay.io/ceph/ceph:v18, name=sad_chaplygin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 19:47:51 compute-0 podman[83044]: 2025-11-24 19:47:51.618435922 +0000 UTC m=+0.181900617 container attach a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c (image=quay.io/ceph/ceph:v18, name=sad_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 19:47:51 compute-0 sudo[83085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-05e060a3-406b-57f0-89d2-ec35f5b09305/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring.new /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/config/ceph.client.admin.keyring
Nov 24 19:47:51 compute-0 sudo[83085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[83085]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:47:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:51 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 4c8f66cc-c6a4-4ec2-bddd-bc2cd124e01b (Updating crash deployment (+1 -> 1))
Nov 24 19:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 24 19:47:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 19:47:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 19:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:47:51 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:51 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 24 19:47:51 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 24 19:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:51 compute-0 sudo[83114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:51 compute-0 sudo[83114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[83114]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 sudo[83139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:51 compute-0 sudo[83139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[83139]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:51 compute-0 sudo[83164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:51 compute-0 sudo[83164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:51 compute-0 sudo[83164]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:52 compute-0 sudo[83208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:52 compute-0 sudo[83208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 24 19:47:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/559470897' entity='client.admin' 
Nov 24 19:47:52 compute-0 systemd[1]: libpod-a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c.scope: Deactivated successfully.
Nov 24 19:47:52 compute-0 podman[83044]: 2025-11-24 19:47:52.164016077 +0000 UTC m=+0.727480762 container died a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c (image=quay.io/ceph/ceph:v18, name=sad_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63f91f8fa470b982a14b5f755995818cc6f944bb2de85625d9972802871b503-merged.mount: Deactivated successfully.
Nov 24 19:47:52 compute-0 podman[83044]: 2025-11-24 19:47:52.22582066 +0000 UTC m=+0.789285355 container remove a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c (image=quay.io/ceph/ceph:v18, name=sad_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 19:47:52 compute-0 systemd[1]: libpod-conmon-a6d2aa6da06e763b14fabb9b39992e9002e0f05b3ecdc85404aadb8b9d04a03c.scope: Deactivated successfully.
Nov 24 19:47:52 compute-0 sudo[82981]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:52 compute-0 sudo[83317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgpnjwvgyapgixtnlgflosfqpgikcdpj ; /usr/bin/python3'
Nov 24 19:47:52 compute-0 sudo[83317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.51453631 +0000 UTC m=+0.067609406 container create 676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:52 compute-0 systemd[1]: Started libpod-conmon-676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560.scope.
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.485010566 +0000 UTC m=+0.038083702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:47:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.610111001 +0000 UTC m=+0.163184127 container init 676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.618778748 +0000 UTC m=+0.171851844 container start 676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.623192643 +0000 UTC m=+0.176265750 container attach 676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:52 compute-0 competent_nobel[83329]: 167 167
Nov 24 19:47:52 compute-0 systemd[1]: libpod-676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560.scope: Deactivated successfully.
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.626257294 +0000 UTC m=+0.179330450 container died 676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:52 compute-0 ceph-mon[75677]: Deploying daemon crash.compute-0 on compute-0
Nov 24 19:47:52 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/559470897' entity='client.admin' 
Nov 24 19:47:52 compute-0 ceph-mon[75677]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:52 compute-0 python3[83326]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c4a46a266b891625d8bc4d078f158f164836dbaf51ec0ec866a8df02a93aaed-merged.mount: Deactivated successfully.
Nov 24 19:47:52 compute-0 podman[83304]: 2025-11-24 19:47:52.69008718 +0000 UTC m=+0.243160296 container remove 676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:47:52 compute-0 systemd[1]: libpod-conmon-676f69c89f3866274988a4171892498340ea0366d1cc755fbe4633660272f560.scope: Deactivated successfully.
Nov 24 19:47:52 compute-0 systemd[1]: Reloading.
Nov 24 19:47:52 compute-0 podman[83345]: 2025-11-24 19:47:52.753905966 +0000 UTC m=+0.069446365 container create 629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a (image=quay.io/ceph/ceph:v18, name=musing_goldwasser, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:47:52 compute-0 podman[83345]: 2025-11-24 19:47:52.726890506 +0000 UTC m=+0.042430965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:52 compute-0 systemd-rc-local-generator[83390]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:47:52 compute-0 systemd-sysv-generator[83393]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:47:53 compute-0 systemd[1]: Started libpod-conmon-629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a.scope.
Nov 24 19:47:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc680accc49a237c439daecd074cf17561c9c133f7a8d10559d52b38e0218a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc680accc49a237c439daecd074cf17561c9c133f7a8d10559d52b38e0218a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc680accc49a237c439daecd074cf17561c9c133f7a8d10559d52b38e0218a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 systemd[1]: Reloading.
Nov 24 19:47:53 compute-0 podman[83345]: 2025-11-24 19:47:53.086078137 +0000 UTC m=+0.401618596 container init 629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a (image=quay.io/ceph/ceph:v18, name=musing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:53 compute-0 podman[83345]: 2025-11-24 19:47:53.099461509 +0000 UTC m=+0.415001908 container start 629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a (image=quay.io/ceph/ceph:v18, name=musing_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:53 compute-0 podman[83345]: 2025-11-24 19:47:53.103463795 +0000 UTC m=+0.419004244 container attach 629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a (image=quay.io/ceph/ceph:v18, name=musing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:47:53 compute-0 systemd-rc-local-generator[83432]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:47:53 compute-0 systemd-sysv-generator[83437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:47:53 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3121323268' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 19:47:53 compute-0 podman[83508]: 2025-11-24 19:47:53.700015309 +0000 UTC m=+0.055223272 container create 82a5b30abd5b0c4210683c7414eea59b0441b2a5310fd94e406d2261792e9361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2057e2ac4499aedb95eb34d9e7b4ac1421b2d1d3f840885c19ab325a631292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2057e2ac4499aedb95eb34d9e7b4ac1421b2d1d3f840885c19ab325a631292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2057e2ac4499aedb95eb34d9e7b4ac1421b2d1d3f840885c19ab325a631292/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2057e2ac4499aedb95eb34d9e7b4ac1421b2d1d3f840885c19ab325a631292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:53 compute-0 podman[83508]: 2025-11-24 19:47:53.765375464 +0000 UTC m=+0.120583467 container init 82a5b30abd5b0c4210683c7414eea59b0441b2a5310fd94e406d2261792e9361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 19:47:53 compute-0 podman[83508]: 2025-11-24 19:47:53.673565933 +0000 UTC m=+0.028773966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:47:53 compute-0 podman[83508]: 2025-11-24 19:47:53.776996959 +0000 UTC m=+0.132204952 container start 82a5b30abd5b0c4210683c7414eea59b0441b2a5310fd94e406d2261792e9361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:53 compute-0 bash[83508]: 82a5b30abd5b0c4210683c7414eea59b0441b2a5310fd94e406d2261792e9361
Nov 24 19:47:53 compute-0 systemd[1]: Started Ceph crash.compute-0 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:47:53 compute-0 sudo[83208]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:53 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 4c8f66cc-c6a4-4ec2-bddd-bc2cd124e01b (Updating crash deployment (+1 -> 1))
Nov 24 19:47:53 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 4c8f66cc-c6a4-4ec2-bddd-bc2cd124e01b (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7df82161-ca13-494c-8543-a581b7d3b83d does not exist
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:53 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev b60e0cb9-d8a6-42d1-aded-3ebe22da8bc7 (Updating mgr deployment (+1 -> 2))
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.veokpu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.veokpu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.veokpu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 19:47:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:47:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:53 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.veokpu on compute-0
Nov 24 19:47:53 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.veokpu on compute-0
Nov 24 19:47:53 compute-0 sudo[83530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:53 compute-0 sudo[83530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:53 compute-0 sudo[83530]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 24 19:47:54 compute-0 sudo[83555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:54 compute-0 sudo[83555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:54 compute-0 sudo[83555]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:54 compute-0 sudo[83582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 24 19:47:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:47:54 compute-0 sudo[83582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3121323268' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.veokpu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.veokpu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 19:47:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:54 compute-0 sudo[83582]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3121323268' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 19:47:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 24 19:47:54 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 24 19:47:54 compute-0 musing_goldwasser[83400]: set require_min_compat_client to mimic
Nov 24 19:47:54 compute-0 systemd[1]: libpod-629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a.scope: Deactivated successfully.
Nov 24 19:47:54 compute-0 podman[83345]: 2025-11-24 19:47:54.174100637 +0000 UTC m=+1.489641006 container died 629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a (image=quay.io/ceph/ceph:v18, name=musing_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdc680accc49a237c439daecd074cf17561c9c133f7a8d10559d52b38e0218a2-merged.mount: Deactivated successfully.
Nov 24 19:47:54 compute-0 podman[83345]: 2025-11-24 19:47:54.236801352 +0000 UTC m=+1.552341751 container remove 629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a (image=quay.io/ceph/ceph:v18, name=musing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: 2025-11-24T19:47:54.239+0000 7fb613fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: 2025-11-24T19:47:54.239+0000 7fb613fff640 -1 AuthRegistry(0x7fb614066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: 2025-11-24T19:47:54.240+0000 7fb613fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: 2025-11-24T19:47:54.240+0000 7fb613fff640 -1 AuthRegistry(0x7fb613ffe000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 19:47:54 compute-0 sudo[83607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: 2025-11-24T19:47:54.242+0000 7fb612ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: 2025-11-24T19:47:54.242+0000 7fb613fff640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 24 19:47:54 compute-0 sudo[83607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:54 compute-0 systemd[1]: libpod-conmon-629753f7614d5bb86c0992263818039b04ec18afeb32613cad08a3c49b9b9d2a.scope: Deactivated successfully.
Nov 24 19:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-crash-compute-0[83525]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:54 compute-0 sudo[83317]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 1 completed events
Nov 24 19:47:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:47:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.706874105 +0000 UTC m=+0.058847536 container create dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 19:47:54 compute-0 systemd[1]: Started libpod-conmon-dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1.scope.
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.677065672 +0000 UTC m=+0.029039163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:47:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:54 compute-0 sudo[83740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczcvetdofluyhqbjrfohjtbkrapswog ; /usr/bin/python3'
Nov 24 19:47:54 compute-0 sudo[83740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.805251858 +0000 UTC m=+0.157225329 container init dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.816122494 +0000 UTC m=+0.168095935 container start dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.819886983 +0000 UTC m=+0.171860424 container attach dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:54 compute-0 elegant_feistel[83741]: 167 167
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.824511174 +0000 UTC m=+0.176484605 container died dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:54 compute-0 systemd[1]: libpod-dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1.scope: Deactivated successfully.
Nov 24 19:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6b283494150656cb4c2e35b2d786903df0f66135275c8aeea68a11252922ce-merged.mount: Deactivated successfully.
Nov 24 19:47:54 compute-0 podman[83700]: 2025-11-24 19:47:54.883876763 +0000 UTC m=+0.235850194 container remove dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:54 compute-0 systemd[1]: libpod-conmon-dc4970b97df09e497e4f0027ae2e11c122929c4334dfa74483b0c297446bb3c1.scope: Deactivated successfully.
Nov 24 19:47:54 compute-0 systemd[1]: Reloading.
Nov 24 19:47:54 compute-0 python3[83745]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:47:55 compute-0 systemd-rc-local-generator[83796]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:47:55 compute-0 systemd-sysv-generator[83802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:47:55 compute-0 podman[83764]: 2025-11-24 19:47:55.085371974 +0000 UTC m=+0.071135299 container create 2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc (image=quay.io/ceph/ceph:v18, name=gallant_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 19:47:55 compute-0 podman[83764]: 2025-11-24 19:47:55.064205979 +0000 UTC m=+0.049969284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:55 compute-0 ceph-mon[75677]: Deploying daemon mgr.compute-0.veokpu on compute-0
Nov 24 19:47:55 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3121323268' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 19:47:55 compute-0 ceph-mon[75677]: osdmap e3: 0 total, 0 up, 0 in
Nov 24 19:47:55 compute-0 ceph-mon[75677]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:55 compute-0 systemd[1]: Started libpod-conmon-2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc.scope.
Nov 24 19:47:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542d21a78a8ac4d68415c86590cd8416885cf2552da3f080555a83b290d4372b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542d21a78a8ac4d68415c86590cd8416885cf2552da3f080555a83b290d4372b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542d21a78a8ac4d68415c86590cd8416885cf2552da3f080555a83b290d4372b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 systemd[1]: Reloading.
Nov 24 19:47:55 compute-0 podman[83764]: 2025-11-24 19:47:55.322389798 +0000 UTC m=+0.308153133 container init 2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc (image=quay.io/ceph/ceph:v18, name=gallant_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 19:47:55 compute-0 podman[83764]: 2025-11-24 19:47:55.334184967 +0000 UTC m=+0.319948292 container start 2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc (image=quay.io/ceph/ceph:v18, name=gallant_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:55 compute-0 podman[83764]: 2025-11-24 19:47:55.337632297 +0000 UTC m=+0.323395612 container attach 2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc (image=quay.io/ceph/ceph:v18, name=gallant_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:47:55 compute-0 systemd-sysv-generator[83852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:47:55 compute-0 systemd-rc-local-generator[83849]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:47:55 compute-0 systemd[1]: Starting Ceph mgr.compute-0.veokpu for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:47:55 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:55 compute-0 podman[83927]: 2025-11-24 19:47:55.932853067 +0000 UTC m=+0.065278176 container create 061ea047133fda8f3c40178c09a325b50a74782efaefa68c7d39be37a22d7870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0233060cc1e886b3c637de798518c392d4b5cfba0eb0765f12b9746bf76a8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0233060cc1e886b3c637de798518c392d4b5cfba0eb0765f12b9746bf76a8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0233060cc1e886b3c637de798518c392d4b5cfba0eb0765f12b9746bf76a8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0233060cc1e886b3c637de798518c392d4b5cfba0eb0765f12b9746bf76a8e/merged/var/lib/ceph/mgr/ceph-compute-0.veokpu supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:55 compute-0 podman[83927]: 2025-11-24 19:47:55.989227437 +0000 UTC m=+0.121652586 container init 061ea047133fda8f3c40178c09a325b50a74782efaefa68c7d39be37a22d7870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:55 compute-0 podman[83927]: 2025-11-24 19:47:55.897523589 +0000 UTC m=+0.029948788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:47:55 compute-0 sudo[83941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:55 compute-0 podman[83927]: 2025-11-24 19:47:55.998775227 +0000 UTC m=+0.131200346 container start 061ea047133fda8f3c40178c09a325b50a74782efaefa68c7d39be37a22d7870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 19:47:56 compute-0 bash[83927]: 061ea047133fda8f3c40178c09a325b50a74782efaefa68c7d39be37a22d7870
Nov 24 19:47:56 compute-0 sudo[83941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[83941]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 systemd[1]: Started Ceph mgr.compute-0.veokpu for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:47:56 compute-0 sudo[83607]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: pidfile_write: ignore empty --pid-file
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev b60e0cb9-d8a6-42d1-aded-3ebe22da8bc7 (Updating mgr deployment (+1 -> 2))
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event b60e0cb9-d8a6-42d1-aded-3ebe22da8bc7 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 sudo[83973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:56 compute-0 sudo[83973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[83973]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: mgr[py] Loading python module 'alerts'
Nov 24 19:47:56 compute-0 sudo[84026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:56 compute-0 sudo[84026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:56 compute-0 sudo[84026]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 sudo[84021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84021]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 sudo[84072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 24 19:47:56 compute-0 sudo[84072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:47:56 compute-0 sudo[84075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84075]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:56 compute-0 sudo[84122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:56 compute-0 sudo[84122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84122]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 sudo[84147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:56 compute-0 sudo[84147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84147]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: mgr[py] Loading python module 'balancer'
Nov 24 19:47:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu[83963]: 2025-11-24T19:47:56.453+0000 7f4394ad6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 19:47:56 compute-0 sudo[84183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:56 compute-0 sudo[84183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 sudo[84183]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 sudo[84072]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [cephadm INFO root] Added host compute-0
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 24 19:47:56 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 24 19:47:56 compute-0 sudo[84216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:47:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:56 compute-0 gallant_bassi[83816]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 19:47:56 compute-0 gallant_bassi[83816]: Scheduled mon update...
Nov 24 19:47:56 compute-0 sudo[84216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:56 compute-0 gallant_bassi[83816]: Scheduled mgr update...
Nov 24 19:47:56 compute-0 gallant_bassi[83816]: Scheduled osd.default_drive_group update...
Nov 24 19:47:56 compute-0 systemd[1]: libpod-2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc.scope: Deactivated successfully.
Nov 24 19:47:56 compute-0 podman[83764]: 2025-11-24 19:47:56.662021433 +0000 UTC m=+1.647784748 container died 2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc (image=quay.io/ceph/ceph:v18, name=gallant_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 19:47:56 compute-0 ceph-mgr[83971]: mgr[py] Loading python module 'cephadm'
Nov 24 19:47:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu[83963]: 2025-11-24T19:47:56.695+0000 7f4394ad6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 19:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-542d21a78a8ac4d68415c86590cd8416885cf2552da3f080555a83b290d4372b-merged.mount: Deactivated successfully.
Nov 24 19:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:47:56 compute-0 podman[83764]: 2025-11-24 19:47:56.733240762 +0000 UTC m=+1.719004077 container remove 2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc (image=quay.io/ceph/ceph:v18, name=gallant_bassi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:56 compute-0 systemd[1]: libpod-conmon-2636754977b23d11132f935d8f4651444aab2dda6064621f43169b872589b2bc.scope: Deactivated successfully.
Nov 24 19:47:56 compute-0 sudo[83740]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:57 compute-0 sudo[84324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aikuridasabjknbgyrzfxlincowjrxqu ; /usr/bin/python3'
Nov 24 19:47:57 compute-0 sudo[84324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 python3[84336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
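The Ansible task above is the deployment's OSD readiness probe: it runs the ceph CLI in a throwaway container and counts the OSDs that are up. Reproduced by hand with the same fsid, image, and keyring paths seen in this log:

    # Re-run the same probe manually; expects jq on the host.
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq .osdmap.num_up_osds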
Nov 24 19:47:57 compute-0 podman[84353]: 2025-11-24 19:47:57.275679525 +0000 UTC m=+0.080245478 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 19:47:57 compute-0 podman[84373]: 2025-11-24 19:47:57.353382346 +0000 UTC m=+0.062117342 container create 2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf (image=quay.io/ceph/ceph:v18, name=flamboyant_darwin, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:47:57 compute-0 systemd[1]: Started libpod-conmon-2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf.scope.
Nov 24 19:47:57 compute-0 podman[84353]: 2025-11-24 19:47:57.398330066 +0000 UTC m=+0.202895959 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:57 compute-0 podman[84373]: 2025-11-24 19:47:57.327024963 +0000 UTC m=+0.035759999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:47:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271dc01bdf1a97ccb3ade602a040430bdfaf68285746cd9f0acc2a66c7634496/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271dc01bdf1a97ccb3ade602a040430bdfaf68285746cd9f0acc2a66c7634496/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271dc01bdf1a97ccb3ade602a040430bdfaf68285746cd9f0acc2a66c7634496/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:47:57 compute-0 podman[84373]: 2025-11-24 19:47:57.460437287 +0000 UTC m=+0.169172323 container init 2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf (image=quay.io/ceph/ceph:v18, name=flamboyant_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:47:57 compute-0 podman[84373]: 2025-11-24 19:47:57.47197598 +0000 UTC m=+0.180710976 container start 2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf (image=quay.io/ceph/ceph:v18, name=flamboyant_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:47:57 compute-0 podman[84373]: 2025-11-24 19:47:57.476045156 +0000 UTC m=+0.184780182 container attach 2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf (image=quay.io/ceph/ceph:v18, name=flamboyant_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:47:57 compute-0 sudo[84216]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
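config generate-minimal-conf returns the stripped-down ceph.conf that cephadm distributes to managed hosts. For this cluster it would look roughly like the sketch below (fsid and mon address taken from this log; the ports are the stock v2/v1 defaults, and the exact rendering varies by release):

    $ ceph config generate-minimal-conf
    # minimal ceph.conf for 05e060a3-406b-57f0-89d2-ec35f5b09305
    [global]
            fsid = 05e060a3-406b-57f0-89d2-ec35f5b09305
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]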
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e0ff21bc-a037-4a04-aff0-c0de7f2d8efa does not exist
Nov 24 19:47:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 19:47:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:57 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 10501a3d-7314-4869-94ea-bca72208c252 (Updating mgr deployment (-1 -> 1))
Nov 24 19:47:57 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.veokpu from compute-0 -- ports [8765]
Nov 24 19:47:57 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.veokpu from compute-0 -- ports [8765]
Nov 24 19:47:57 compute-0 sudo[84472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:57 compute-0 sudo[84472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:57 compute-0 sudo[84472]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:57 compute-0 sudo[84506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:57 compute-0 sudo[84506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:57 compute-0 sudo[84506]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:57 compute-0 sudo[84531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:58 compute-0 sudo[84531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:58 compute-0 sudo[84531]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 19:47:58 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884610515' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:47:58 compute-0 flamboyant_darwin[84402]: {"fsid":"05e060a3-406b-57f0-89d2-ec35f5b09305","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":76,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-24T19:46:38.375320+0000","services":{}},"progress_events":{}}
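This one-line status blob is what the probe parses; the HEALTH_WARN is the TOO_FEW_OSDS check that the OSD creation below clears. The fields of interest can be pulled out with jq, for example:

    # Summarize the JSON printed above: health plus up/in OSD counts.
    ceph status --format json | \
      jq '{health: .health.status, up: .osdmap.num_up_osds, in: .osdmap.num_in_osds}'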
Nov 24 19:47:58 compute-0 sudo[84562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --name mgr.compute-0.veokpu --force --tcp-ports 8765
Nov 24 19:47:58 compute-0 sudo[84562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:58 compute-0 ceph-mon[75677]: Added host compute-0
Nov 24 19:47:58 compute-0 ceph-mon[75677]: Saving service mon spec with placement compute-0
Nov 24 19:47:58 compute-0 ceph-mon[75677]: Saving service mgr spec with placement compute-0
Nov 24 19:47:58 compute-0 ceph-mon[75677]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 19:47:58 compute-0 ceph-mon[75677]: Saving service osd.default_drive_group spec with placement compute-0
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:58 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/884610515' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:47:58 compute-0 systemd[1]: libpod-2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf.scope: Deactivated successfully.
Nov 24 19:47:58 compute-0 podman[84373]: 2025-11-24 19:47:58.096817946 +0000 UTC m=+0.805552962 container died 2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf (image=quay.io/ceph/ceph:v18, name=flamboyant_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 24 19:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-271dc01bdf1a97ccb3ade602a040430bdfaf68285746cd9f0acc2a66c7634496-merged.mount: Deactivated successfully.
Nov 24 19:47:58 compute-0 podman[84373]: 2025-11-24 19:47:58.152500178 +0000 UTC m=+0.861235134 container remove 2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf (image=quay.io/ceph/ceph:v18, name=flamboyant_darwin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:47:58 compute-0 systemd[1]: libpod-conmon-2ecd129d0e48914cbefe89f637adad0d9b22437104cedb741d4cac0c43ce65bf.scope: Deactivated successfully.
Nov 24 19:47:58 compute-0 sudo[84324]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:58 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.veokpu for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:47:58 compute-0 ceph-mgr[83971]: mgr[py] Loading python module 'crash'
Nov 24 19:47:58 compute-0 podman[84677]: 2025-11-24 19:47:58.738111305 +0000 UTC m=+0.091492584 container died 061ea047133fda8f3c40178c09a325b50a74782efaefa68c7d39be37a22d7870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 19:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d0233060cc1e886b3c637de798518c392d4b5cfba0eb0765f12b9746bf76a8e-merged.mount: Deactivated successfully.
Nov 24 19:47:58 compute-0 podman[84677]: 2025-11-24 19:47:58.800746909 +0000 UTC m=+0.154128208 container remove 061ea047133fda8f3c40178c09a325b50a74782efaefa68c7d39be37a22d7870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:47:58 compute-0 bash[84677]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-veokpu
Nov 24 19:47:58 compute-0 systemd[1]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mgr.compute-0.veokpu.service: Main process exited, code=exited, status=143/n/a
Nov 24 19:47:58 compute-0 systemd[1]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mgr.compute-0.veokpu.service: Failed with result 'exit-code'.
Nov 24 19:47:58 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.veokpu for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:47:58 compute-0 systemd[1]: ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mgr.compute-0.veokpu.service: Consumed 3.880s CPU time.
Nov 24 19:47:59 compute-0 systemd[1]: Reloading.
Nov 24 19:47:59 compute-0 ceph-mon[75677]: Removing daemon mgr.compute-0.veokpu from compute-0 -- ports [8765]
Nov 24 19:47:59 compute-0 ceph-mon[75677]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:47:59 compute-0 systemd-sysv-generator[84762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:47:59 compute-0 systemd-rc-local-generator[84758]: /etc/rc.d/rc.local is not marked executable, skipping.
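The sysv-generator warning above repeats on every daemon reload until the legacy network script ships a native unit. A minimal sketch of what such a unit could look like (hypothetical; the real fix belongs in the package, not on this host):

    # Hypothetical native wrapper for the SysV network script.
    cat > /etc/systemd/system/network.service <<'EOF'
    [Unit]
    Description=Legacy network initialization
    After=network-pre.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload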
Nov 24 19:47:59 compute-0 sudo[84562]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:59 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 2 completed events
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:47:59 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.veokpu
Nov 24 19:47:59 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.veokpu
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.veokpu"} v 0) v1
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.veokpu"}]: dispatch
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.veokpu"}]': finished
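Daemon removal follows the usual cephadm sequence visible above: the mgr schedules it, the host runs cephadm rm-daemon (stopping the systemd unit and deleting the container), and the mgr finishes by dropping the daemon's auth key. Spelled out by hand for this log's fsid and daemon name:

    # Manual equivalent of the orchestrated removal above:
    cephadm rm-daemon \
      --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 \
      --name mgr.compute-0.veokpu --force
    ceph auth rm mgr.compute-0.veokpu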
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:59 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 10501a3d-7314-4869-94ea-bca72208c252 (Updating mgr deployment (-1 -> 1))
Nov 24 19:47:59 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 10501a3d-7314-4869-94ea-bca72208c252 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:47:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9df72904-ca81-4e10-8879-5b7619919831 does not exist
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:47:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:47:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:47:59 compute-0 sudo[84769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:59 compute-0 sudo[84769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:59 compute-0 sudo[84769]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:59 compute-0 sudo[84794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:47:59 compute-0 sudo[84794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:59 compute-0 sudo[84794]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:59 compute-0 sudo[84819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:47:59 compute-0 sudo[84819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:47:59 compute-0 sudo[84819]: pam_unix(sudo:session): session closed for user root
Nov 24 19:47:59 compute-0 sudo[84844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:47:59 compute-0 sudo[84844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
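This is cephadm driving ceph-volume in batch mode against the three pre-created LVs; --config-json on stdin is how it injects the cluster config and bootstrap-osd keyring into the container. Inside the container the batch expands into the per-LV prepare/activate passes that crazy_wilbur logs below; the bare batch call, run standalone, would be:

    # Standalone form of the batch call cephadm issues above:
    ceph-volume lvm batch --no-auto \
      /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
      --yes --no-systemd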
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.222224134 +0000 UTC m=+0.069308881 container create 5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:00 compute-0 systemd[1]: Started libpod-conmon-5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56.scope.
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.195874951 +0000 UTC m=+0.042959778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.327997811 +0000 UTC m=+0.175082578 container init 5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.33901845 +0000 UTC m=+0.186103227 container start 5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.343555329 +0000 UTC m=+0.190640076 container attach 5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:00 compute-0 stupefied_dirac[84926]: 167 167
Nov 24 19:48:00 compute-0 systemd[1]: libpod-5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56.scope: Deactivated successfully.
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.347778561 +0000 UTC m=+0.194863328 container died 5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:00 compute-0 ceph-mon[75677]: Removing key for mgr.compute-0.veokpu
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.veokpu"}]: dispatch
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.veokpu"}]': finished
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:48:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bea873f9855b90ff4e5c57a08959c2e96237161ee499f580b874dcf9f4d8651a-merged.mount: Deactivated successfully.
Nov 24 19:48:00 compute-0 podman[84910]: 2025-11-24 19:48:00.392996438 +0000 UTC m=+0.240081195 container remove 5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:00 compute-0 systemd[1]: libpod-conmon-5496901e0c7500122a77ae8d6932fafd4ecc86c7a3331e25ded98a5faeee7b56.scope: Deactivated successfully.
Nov 24 19:48:00 compute-0 podman[84951]: 2025-11-24 19:48:00.627860995 +0000 UTC m=+0.074908895 container create c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:48:00 compute-0 systemd[1]: Started libpod-conmon-c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9.scope.
Nov 24 19:48:00 compute-0 podman[84951]: 2025-11-24 19:48:00.599774436 +0000 UTC m=+0.046822356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe149630971e16fa3810b0de1debc5e9fd36bc0ca649d5cd054d42c0cc67b6bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe149630971e16fa3810b0de1debc5e9fd36bc0ca649d5cd054d42c0cc67b6bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe149630971e16fa3810b0de1debc5e9fd36bc0ca649d5cd054d42c0cc67b6bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe149630971e16fa3810b0de1debc5e9fd36bc0ca649d5cd054d42c0cc67b6bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe149630971e16fa3810b0de1debc5e9fd36bc0ca649d5cd054d42c0cc67b6bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:00 compute-0 podman[84951]: 2025-11-24 19:48:00.739509857 +0000 UTC m=+0.186557807 container init c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wilbur, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:48:00 compute-0 podman[84951]: 2025-11-24 19:48:00.755057962 +0000 UTC m=+0.202105872 container start c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wilbur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:00 compute-0 podman[84951]: 2025-11-24 19:48:00.759242319 +0000 UTC m=+0.206290279 container attach c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wilbur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:48:01 compute-0 ceph-mon[75677]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:01 compute-0 crazy_wilbur[84967]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:48:01 compute-0 crazy_wilbur[84967]: --> relative data size: 1.0
Nov 24 19:48:01 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 19:48:01 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e
Nov 24 19:48:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e"} v 0) v1
Nov 24 19:48:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1915477924' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e"}]: dispatch
Nov 24 19:48:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 24 19:48:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1915477924' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e"}]': finished
Nov 24 19:48:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 24 19:48:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 24 19:48:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:02 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:02 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 19:48:02 compute-0 lvm[85029]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 19:48:02 compute-0 lvm[85029]: VG ceph_vg0 finished
Nov 24 19:48:02 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 24 19:48:02 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 24 19:48:02 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 19:48:02 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:02 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 24 19:48:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 19:48:03 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213135797' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 19:48:03 compute-0 crazy_wilbur[84967]:  stderr: got monmap epoch 1
Nov 24 19:48:03 compute-0 crazy_wilbur[84967]: --> Creating keyring file for osd.0
Nov 24 19:48:03 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 24 19:48:03 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 24 19:48:03 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e --setuser ceph --setgroup ceph
Nov 24 19:48:03 compute-0 ceph-mon[75677]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:03 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1915477924' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e"}]: dispatch
Nov 24 19:48:03 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1915477924' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e"}]': finished
Nov 24 19:48:03 compute-0 ceph-mon[75677]: osdmap e4: 1 total, 0 up, 1 in
Nov 24 19:48:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:03 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3213135797' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 19:48:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:04 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 3 completed events
Nov 24 19:48:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:48:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:04 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 19:48:04 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Cluster is now healthy
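With one OSD registered, the osd_pool_default_size 1 threshold is met and TOO_FEW_OSDS clears. The same transition can be confirmed interactively with the stock CLI:

    # Health should now read HEALTH_OK; osd stat mirrors "e4: 1 total, 0 up, 1 in".
    ceph health detail
    ceph osd stat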
Nov 24 19:48:05 compute-0 ceph-mon[75677]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:05 compute-0 ceph-mon[75677]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 19:48:05 compute-0 ceph-mon[75677]: Cluster is now healthy
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:03.195+0000 7feab2f23740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:03.196+0000 7feab2f23740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:03.196+0000 7feab2f23740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:03.196+0000 7feab2f23740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
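The block above is one complete ceph-volume create pass for osd.0: register the id via "osd new", mount a tmpfs data dir, link the LV as the block device, fetch the monmap, mkfs the BlueStore volume, then prime-osd-dir and activate. The _read_bdev_label / _read_fsid stderr lines come from mkfs probing a still-blank device and are benign here, as the "prepare successful" line immediately after them confirms. The single-device equivalent, outside batch mode, is approximately:

    # One-LV equivalent of the batch pass above (sketch):
    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0 --no-systemd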
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 19:48:05 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 722822cb-bac5-4aa4-891b-811a5e4def90
Nov 24 19:48:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "722822cb-bac5-4aa4-891b-811a5e4def90"} v 0) v1
Nov 24 19:48:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997212084' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "722822cb-bac5-4aa4-891b-811a5e4def90"}]: dispatch
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997212084' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "722822cb-bac5-4aa4-891b-811a5e4def90"}]': finished
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 24 19:48:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:06 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:06 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2997212084' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "722822cb-bac5-4aa4-891b-811a5e4def90"}]: dispatch
Nov 24 19:48:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2997212084' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "722822cb-bac5-4aa4-891b-811a5e4def90"}]': finished
Nov 24 19:48:06 compute-0 ceph-mon[75677]: osdmap e5: 2 total, 0 up, 2 in
Nov 24 19:48:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:06 compute-0 lvm[85962]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 19:48:06 compute-0 lvm[85962]: VG ceph_vg1 finished
Nov 24 19:48:06 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 19:48:06 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 24 19:48:06 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 24 19:48:06 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 19:48:06 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:06 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 24 19:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 19:48:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966137458' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 19:48:07 compute-0 crazy_wilbur[84967]:  stderr: got monmap epoch 1
Nov 24 19:48:07 compute-0 crazy_wilbur[84967]: --> Creating keyring file for osd.1
Nov 24 19:48:07 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 24 19:48:07 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 24 19:48:07 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 722822cb-bac5-4aa4-891b-811a5e4def90 --setuser ceph --setgroup ceph
Nov 24 19:48:07 compute-0 ceph-mon[75677]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2966137458' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 19:48:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:09 compute-0 ceph-mon[75677]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:07.231+0000 7f984f86c740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:07.231+0000 7f984f86c740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:07.231+0000 7f984f86c740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:07.232+0000 7f984f86c740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
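
[note] The four stderr lines above are expected on a brand-new LV: before writing its own label, mkfs probes the device (and the empty data dir) for an existing BlueStore label and fsid, and the probe of zeroed storage fails with "Malformed input" / "unparsable uuid". ceph-volume still reports prepare successful on the next line. To inspect the label mkfs then wrote, a sketch:

    ceph-bluestore-tool show-label --dev /dev/ceph_vg1/ceph_lv1
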
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
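
[note] The "lvm create" summary confirms that ceph-volume's two phases, lvm prepare (mkfs plus metadata) and lvm activate (prime-osd-dir, symlink, ownership), both succeeded for ceph_vg1/ceph_lv1. As a sketch, the equivalent one-shot invocation on a pre-built LV would be:

    ceph-volume lvm create --bluestore --data ceph_vg1/ceph_lv1
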
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 19:48:09 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 720ccdfc-a888-49fd-ae51-8ab3d2ba9302
Nov 24 19:48:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302"} v 0) v1
Nov 24 19:48:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1746545115' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302"}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1746545115' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302"}]': finished
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 24 19:48:10 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:10 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:10 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
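
[note] The three "failed to return metadata" errors are transient rather than fatal: osd.0-2 exist in the osdmap ("osd new" has run) but no ceph-osd daemon has booted yet, so the monitor has no metadata blob to give the mgr and answers ENOENT. Once each daemon starts and registers, the same query succeeds; a sketch:

    ceph osd metadata 0    # ENOENT until osd.0 is up, then a JSON blob
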
Nov 24 19:48:10 compute-0 ceph-mon[75677]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:10 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1746545115' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302"}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1746545115' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302"}]': finished
Nov 24 19:48:10 compute-0 ceph-mon[75677]: osdmap e6: 3 total, 0 up, 3 in
Nov 24 19:48:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:10 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 19:48:10 compute-0 lvm[86900]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 19:48:10 compute-0 lvm[86900]: VG ceph_vg2 finished
Nov 24 19:48:10 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 24 19:48:10 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 24 19:48:10 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 19:48:10 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:10 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 24 19:48:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 19:48:11 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231808589' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 19:48:11 compute-0 crazy_wilbur[84967]:  stderr: got monmap epoch 1
Nov 24 19:48:11 compute-0 crazy_wilbur[84967]: --> Creating keyring file for osd.2
Nov 24 19:48:11 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 24 19:48:11 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 24 19:48:11 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 720ccdfc-a888-49fd-ae51-8ab3d2ba9302 --setuser ceph --setgroup ceph
Nov 24 19:48:11 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3231808589' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 19:48:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:12 compute-0 ceph-mon[75677]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:11.143+0000 7f441d7e8740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:11.143+0000 7f441d7e8740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:11.143+0000 7f441d7e8740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]:  stderr: 2025-11-24T19:48:11.144+0000 7f441d7e8740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 24 19:48:13 compute-0 crazy_wilbur[84967]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
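
[note] With ceph_vg2/ceph_lv2 done, all three OSDs have been prepared and activated on compute-0 (osdmap e6: 3 total, 0 up, 3 in, per the lines above); they flip to "up" once their systemd units start later in the log. Watching that transition, as a sketch:

    ceph osd tree    # osd.0-2 listed, initially down
    ceph -s          # cluster status; osd count moves toward '3 up, 3 in'
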
Nov 24 19:48:13 compute-0 systemd[1]: libpod-c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9.scope: Deactivated successfully.
Nov 24 19:48:13 compute-0 systemd[1]: libpod-c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9.scope: Consumed 7.032s CPU time.
Nov 24 19:48:13 compute-0 podman[87802]: 2025-11-24 19:48:13.862126132 +0000 UTC m=+0.032642475 container died c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe149630971e16fa3810b0de1debc5e9fd36bc0ca649d5cd054d42c0cc67b6bc-merged.mount: Deactivated successfully.
Nov 24 19:48:13 compute-0 podman[87802]: 2025-11-24 19:48:13.950419333 +0000 UTC m=+0.120935666 container remove c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 19:48:13 compute-0 systemd[1]: libpod-conmon-c54de43777d479d995306377940fb940173983e7d7429a7d09b5ba8a00ef92d9.scope: Deactivated successfully.
Nov 24 19:48:14 compute-0 sudo[84844]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:14 compute-0 sudo[87817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:14 compute-0 sudo[87817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:14 compute-0 sudo[87817]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:14 compute-0 sudo[87842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:14 compute-0 sudo[87842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:14 compute-0 sudo[87842]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:14 compute-0 sudo[87867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:14 compute-0 sudo[87867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:14 compute-0 sudo[87867]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:14 compute-0 sudo[87892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:48:14 compute-0 sudo[87892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
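
[note] The sudo COMMAND above is how the cephadm mgr module reaches the host: it ships a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it as root via the ceph-admin user; here it wraps "ceph-volume lvm list" in a throwaway container to inventory OSD LVs. The operator-facing equivalent, as a sketch:

    cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
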
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.800580519 +0000 UTC m=+0.063500599 container create 0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 19:48:14 compute-0 systemd[1]: Started libpod-conmon-0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521.scope.
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.774128106 +0000 UTC m=+0.037048196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.90351347 +0000 UTC m=+0.166433560 container init 0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.914146523 +0000 UTC m=+0.177066613 container start 0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.919036463 +0000 UTC m=+0.181956533 container attach 0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chebyshev, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:48:14 compute-0 stoic_chebyshev[87974]: 167 167
Nov 24 19:48:14 compute-0 systemd[1]: libpod-0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521.scope: Deactivated successfully.
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.920778642 +0000 UTC m=+0.183698712 container died 0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4f635a6ab579c9603b7124422bfab221e729adb56d5a07fe258f5d2e0e70c49-merged.mount: Deactivated successfully.
Nov 24 19:48:14 compute-0 podman[87957]: 2025-11-24 19:48:14.972147511 +0000 UTC m=+0.235067591 container remove 0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:14 compute-0 systemd[1]: libpod-conmon-0dcc53641f3878f79f01ab606442327200cf7a22f7d08c2b628a5c864787f521.scope: Deactivated successfully.
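
[note] The short-lived stoic_chebyshev container printed only "167 167" before exiting; this looks like cephadm's uid/gid probe, which stats a ceph-owned path inside the image to learn which numeric ids to chown host files to (167 is the ceph user and group in the official image). A sketch of the same probe; the exact statted path (/var/lib/ceph) is an assumption:

    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph    # assumed probe path; prints '167 167'
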
Nov 24 19:48:15 compute-0 podman[87998]: 2025-11-24 19:48:15.211919357 +0000 UTC m=+0.065872327 container create e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 19:48:15 compute-0 systemd[1]: Started libpod-conmon-e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014.scope.
Nov 24 19:48:15 compute-0 podman[87998]: 2025-11-24 19:48:15.1845428 +0000 UTC m=+0.038495820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f52cd4a069144391bba410ab2c9084d8796879eed3a77e7c3acc397eb73f1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f52cd4a069144391bba410ab2c9084d8796879eed3a77e7c3acc397eb73f1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f52cd4a069144391bba410ab2c9084d8796879eed3a77e7c3acc397eb73f1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f52cd4a069144391bba410ab2c9084d8796879eed3a77e7c3acc397eb73f1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:15 compute-0 podman[87998]: 2025-11-24 19:48:15.312888876 +0000 UTC m=+0.166841896 container init e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:15 compute-0 podman[87998]: 2025-11-24 19:48:15.329342615 +0000 UTC m=+0.183295595 container start e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:15 compute-0 podman[87998]: 2025-11-24 19:48:15.333770127 +0000 UTC m=+0.187723107 container attach e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 19:48:15 compute-0 ceph-mon[75677]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:16 compute-0 infallible_poitras[88015]: {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:     "0": [
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:         {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "devices": [
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "/dev/loop3"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             ],
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_name": "ceph_lv0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_size": "21470642176",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "name": "ceph_lv0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "tags": {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.crush_device_class": "",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.encrypted": "0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osd_id": "0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.type": "block",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.vdo": "0"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             },
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "type": "block",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "vg_name": "ceph_vg0"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:         }
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:     ],
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:     "1": [
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:         {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "devices": [
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "/dev/loop4"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             ],
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_name": "ceph_lv1",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_size": "21470642176",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "name": "ceph_lv1",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "tags": {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.crush_device_class": "",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.encrypted": "0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osd_id": "1",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.type": "block",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.vdo": "0"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             },
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "type": "block",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "vg_name": "ceph_vg1"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:         }
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:     ],
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:     "2": [
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:         {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "devices": [
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "/dev/loop5"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             ],
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_name": "ceph_lv2",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_size": "21470642176",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "name": "ceph_lv2",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "tags": {
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.crush_device_class": "",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.encrypted": "0",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osd_id": "2",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.type": "block",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:                 "ceph.vdo": "0"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             },
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "type": "block",
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:             "vg_name": "ceph_vg2"
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:         }
Nov 24 19:48:16 compute-0 infallible_poitras[88015]:     ]
Nov 24 19:48:16 compute-0 infallible_poitras[88015]: }
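
[note] The infallible_poitras container is the "lvm list --format json" run requested above: a JSON object keyed by OSD id, each entry carrying the backing device, LV path/uuid, and the ceph.* LV tags (cluster_fsid, osd_fsid, osd_id, osdspec_affinity) that let ceph-volume re-discover OSDs without any external database. Reducing it to an id -> device map, as a sketch (assumes jq is installed):

    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key)  \(.value[0].devices[0])  \(.value[0].lv_path)"'
    # osd.0  /dev/loop3  /dev/ceph_vg0/ceph_lv0
    # osd.1  /dev/loop4  /dev/ceph_vg1/ceph_lv1
    # osd.2  /dev/loop5  /dev/ceph_vg2/ceph_lv2
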
Nov 24 19:48:16 compute-0 systemd[1]: libpod-e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014.scope: Deactivated successfully.
Nov 24 19:48:16 compute-0 podman[87998]: 2025-11-24 19:48:16.11156876 +0000 UTC m=+0.965521740 container died e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 19:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5f52cd4a069144391bba410ab2c9084d8796879eed3a77e7c3acc397eb73f1a-merged.mount: Deactivated successfully.
Nov 24 19:48:16 compute-0 podman[87998]: 2025-11-24 19:48:16.198638112 +0000 UTC m=+1.052591092 container remove e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:16 compute-0 systemd[1]: libpod-conmon-e0ead9a8c0e5b8d77bfd4579d518aeea43603b8d738eb3113863946284fe5014.scope: Deactivated successfully.
Nov 24 19:48:16 compute-0 sudo[87892]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 24 19:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 19:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:16 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 24 19:48:16 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 24 19:48:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 19:48:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
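
[note] Before deploying the daemon, the mgr collects the two files every cephadm-managed OSD needs under /var/lib/ceph/<fsid>/osd.0/: its keyring ("auth get osd.0") and a minimal ceph.conf pointing at the monitors ("config generate-minimal-conf"). Both are plain mon commands, so the same artifacts can be produced by hand, as a sketch:

    ceph auth get osd.0                  # keyring for the new daemon
    ceph config generate-minimal-conf    # [global] fsid + mon_host stanza
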
Nov 24 19:48:16 compute-0 sudo[88038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:16 compute-0 sudo[88038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:16 compute-0 sudo[88038]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:16 compute-0 sudo[88063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:16 compute-0 sudo[88063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:16 compute-0 sudo[88063]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:16 compute-0 sudo[88088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:16 compute-0 sudo[88088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:16 compute-0 sudo[88088]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:16 compute-0 sudo[88113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:48:16 compute-0 sudo[88113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
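
[note] "_orch deploy" is cephadm's internal entry point: the mgr drives it directly, handing over the daemon's deployment parameters, and it writes the unit files, config, and keyring for osd.0 before enabling the systemd unit (the "Reloading." lines below). It is not meant to be called by hand; a supported way to re-trigger the same deploy path, as a sketch:

    ceph orch daemon redeploy osd.0
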
Nov 24 19:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.059044004 +0000 UTC m=+0.066136451 container create c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jemison, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 19:48:17 compute-0 systemd[1]: Started libpod-conmon-c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b.scope.
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.030466468 +0000 UTC m=+0.037558975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.139444548 +0000 UTC m=+0.146537005 container init c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.149974349 +0000 UTC m=+0.157066766 container start c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.153168842 +0000 UTC m=+0.160261299 container attach c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:48:17 compute-0 optimistic_jemison[88197]: 167 167
Nov 24 19:48:17 compute-0 systemd[1]: libpod-c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b.scope: Deactivated successfully.
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.157141527 +0000 UTC m=+0.164233984 container died c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jemison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 19:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-055cf1ad3a1adb2731460928749476d908d4281789b97990e9c8eef7f2b1cc27-merged.mount: Deactivated successfully.
Nov 24 19:48:17 compute-0 podman[88179]: 2025-11-24 19:48:17.20629767 +0000 UTC m=+0.213390127 container remove c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:17 compute-0 systemd[1]: libpod-conmon-c64ed5fe3f319fb9564f4b801dec5b465e37a90f9389d9d3458c938efce1073b.scope: Deactivated successfully.
Nov 24 19:48:17 compute-0 ceph-mon[75677]: Deploying daemon osd.0 on compute-0
Nov 24 19:48:17 compute-0 ceph-mon[75677]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:17 compute-0 podman[88230]: 2025-11-24 19:48:17.548751643 +0000 UTC m=+0.064306992 container create f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:17 compute-0 systemd[1]: Started libpod-conmon-f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4.scope.
Nov 24 19:48:17 compute-0 podman[88230]: 2025-11-24 19:48:17.522901891 +0000 UTC m=+0.038457240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54762b6ab5da310de5e702031f6826a0cb74e6f9560e47ae36fa5e43c7d55bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54762b6ab5da310de5e702031f6826a0cb74e6f9560e47ae36fa5e43c7d55bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54762b6ab5da310de5e702031f6826a0cb74e6f9560e47ae36fa5e43c7d55bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54762b6ab5da310de5e702031f6826a0cb74e6f9560e47ae36fa5e43c7d55bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54762b6ab5da310de5e702031f6826a0cb74e6f9560e47ae36fa5e43c7d55bd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:17 compute-0 podman[88230]: 2025-11-24 19:48:17.639923791 +0000 UTC m=+0.155479140 container init f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 19:48:17 compute-0 podman[88230]: 2025-11-24 19:48:17.652166702 +0000 UTC m=+0.167722041 container start f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:17 compute-0 podman[88230]: 2025-11-24 19:48:17.657122633 +0000 UTC m=+0.172677972 container attach f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test[88246]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 19:48:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test[88246]:                             [--no-systemd] [--no-tmpfs]
Nov 24 19:48:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test[88246]: ceph-volume activate: error: unrecognized arguments: --bad-option
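
[note] The --bad-option failure is deliberate: the container is named ...-osd-0-activate-test, and cephadm appears to probe ceph-volume with a bogus flag so that argparse prints the usage string, from which it can tell whether this ceph-volume supports options such as --no-tmpfs before composing the real activate call. The non-zero exit here is expected and harmless. The same capability check by hand, as a sketch:

    ceph-volume activate --help | grep -e '--no-tmpfs' -e '--no-systemd'
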
Nov 24 19:48:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:18 compute-0 systemd[1]: libpod-f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4.scope: Deactivated successfully.
Nov 24 19:48:18 compute-0 podman[88230]: 2025-11-24 19:48:18.278183726 +0000 UTC m=+0.793739125 container died f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c54762b6ab5da310de5e702031f6826a0cb74e6f9560e47ae36fa5e43c7d55bd-merged.mount: Deactivated successfully.
Nov 24 19:48:18 compute-0 podman[88230]: 2025-11-24 19:48:18.355524689 +0000 UTC m=+0.871080028 container remove f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:18 compute-0 systemd[1]: libpod-conmon-f21a44a967b0022a8e2c98ba785cb7ae92acbc5029350a4b2614159016abdda4.scope: Deactivated successfully.
Nov 24 19:48:19 compute-0 systemd[1]: Reloading.
Nov 24 19:48:19 compute-0 systemd-rc-local-generator[88310]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:48:19 compute-0 systemd-sysv-generator[88314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:48:19 compute-0 systemd[1]: Reloading.
Nov 24 19:48:19 compute-0 ceph-mon[75677]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:19 compute-0 systemd-sysv-generator[88353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:48:19 compute-0 systemd-rc-local-generator[88350]: /etc/rc.d/rc.local is not marked executable, skipping.
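
Note: the rc.local message above is informational; systemd-rc-local-generator only generates rc-local.service when the script carries the executable bit. If rc.local is actually wanted on this host (an assumption, not something the log shows), the usual remedy is:

    chmod +x /etc/rc.d/rc.local
    systemctl daemon-reload
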
Nov 24 19:48:19 compute-0 systemd[1]: Starting Ceph osd.0 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:48:19 compute-0 podman[88410]: 2025-11-24 19:48:19.981999283 +0000 UTC m=+0.065962428 container create b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 19:48:20 compute-0 podman[88410]: 2025-11-24 19:48:19.955253036 +0000 UTC m=+0.039216221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee57210b5e6771f2ab521180e57ca9de80f4593562f13eae0f3c7253f9a351d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee57210b5e6771f2ab521180e57ca9de80f4593562f13eae0f3c7253f9a351d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee57210b5e6771f2ab521180e57ca9de80f4593562f13eae0f3c7253f9a351d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee57210b5e6771f2ab521180e57ca9de80f4593562f13eae0f3c7253f9a351d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee57210b5e6771f2ab521180e57ca9de80f4593562f13eae0f3c7253f9a351d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:20 compute-0 podman[88410]: 2025-11-24 19:48:20.076436225 +0000 UTC m=+0.160399420 container init b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 19:48:20 compute-0 podman[88410]: 2025-11-24 19:48:20.091524492 +0000 UTC m=+0.175487637 container start b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 24 19:48:20 compute-0 podman[88410]: 2025-11-24 19:48:20.095850533 +0000 UTC m=+0.179813728 container attach b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:48:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 19:48:21 compute-0 bash[88410]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 19:48:21 compute-0 bash[88410]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 19:48:21 compute-0 bash[88410]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 19:48:21 compute-0 bash[88410]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:21 compute-0 bash[88410]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 19:48:21 compute-0 bash[88410]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 19:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate[88425]: --> ceph-volume raw activate successful for osd ID: 0
Nov 24 19:48:21 compute-0 bash[88410]: --> ceph-volume raw activate successful for osd ID: 0
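
Note: the "-activate" container's work is exactly the command sequence echoed twice above (once per journald stream: the container name tag and bash stdout). Condensed into a shell sketch with the OSD path and LV names copied verbatim from the log, in case the sequence ever has to be replayed by hand (e.g., inside a ceph container that ships ceph-bluestore-tool):

    OSD_DIR=/var/lib/ceph/osd/ceph-0
    DEV=/dev/mapper/ceph_vg0-ceph_lv0
    chown -R ceph:ceph "$OSD_DIR"
    ceph-bluestore-tool prime-osd-dir --path "$OSD_DIR" --no-mon-config --dev "$DEV"
    chown -h ceph:ceph "$DEV"
    chown -R ceph:ceph /dev/dm-0        # the dm node backing the LV, per the log
    ln -s "$DEV" "$OSD_DIR/block"
    chown -R ceph:ceph "$OSD_DIR"
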
Nov 24 19:48:21 compute-0 systemd[1]: libpod-b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700.scope: Deactivated successfully.
Nov 24 19:48:21 compute-0 systemd[1]: libpod-b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700.scope: Consumed 1.271s CPU time.
Nov 24 19:48:21 compute-0 conmon[88425]: conmon b94bf50d382e997784b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700.scope/container/memory.events
Nov 24 19:48:21 compute-0 podman[88410]: 2025-11-24 19:48:21.339813909 +0000 UTC m=+1.423777054 container died b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:21 compute-0 ceph-mon[75677]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bee57210b5e6771f2ab521180e57ca9de80f4593562f13eae0f3c7253f9a351d-merged.mount: Deactivated successfully.
Nov 24 19:48:21 compute-0 podman[88410]: 2025-11-24 19:48:21.414030522 +0000 UTC m=+1.497993657 container remove b94bf50d382e997784b3a9177346b1fa5af39a989bd64fe686db71a5c21e8700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 19:48:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:21 compute-0 podman[88605]: 2025-11-24 19:48:21.728335645 +0000 UTC m=+0.064428503 container create bbba25ec9aab993a6ec967863c72691a0199c4b2ef0cb5cd039ab1c5c1217c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e4a3f8b409e84430e6fec6f94728f438965e4338a2c1121295fc8a94d7afd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e4a3f8b409e84430e6fec6f94728f438965e4338a2c1121295fc8a94d7afd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e4a3f8b409e84430e6fec6f94728f438965e4338a2c1121295fc8a94d7afd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e4a3f8b409e84430e6fec6f94728f438965e4338a2c1121295fc8a94d7afd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e4a3f8b409e84430e6fec6f94728f438965e4338a2c1121295fc8a94d7afd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:21 compute-0 podman[88605]: 2025-11-24 19:48:21.700905677 +0000 UTC m=+0.036998585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:21 compute-0 podman[88605]: 2025-11-24 19:48:21.806657564 +0000 UTC m=+0.142750472 container init bbba25ec9aab993a6ec967863c72691a0199c4b2ef0cb5cd039ab1c5c1217c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:21 compute-0 podman[88605]: 2025-11-24 19:48:21.819383262 +0000 UTC m=+0.155476130 container start bbba25ec9aab993a6ec967863c72691a0199c4b2ef0cb5cd039ab1c5c1217c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:48:21 compute-0 bash[88605]: bbba25ec9aab993a6ec967863c72691a0199c4b2ef0cb5cd039ab1c5c1217c87
Nov 24 19:48:21 compute-0 systemd[1]: Started Ceph osd.0 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
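
Note: "Ceph osd.0 for <fsid>" is the description string of a cephadm-generated unit; cephadm conventionally names these units ceph-<fsid>@<daemon>.service. Assuming that convention holds here, the daemon just started can be inspected with:

    systemctl status 'ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@osd.0.service'
    journalctl -u 'ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@osd.0.service' -n 50
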
Nov 24 19:48:21 compute-0 ceph-osd[88624]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:48:21 compute-0 ceph-osd[88624]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 19:48:21 compute-0 ceph-osd[88624]: pidfile_write: ignore empty --pid-file
Nov 24 19:48:21 compute-0 sudo[88113]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fcf9cd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fcf9cd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fcf9cd800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fcf9cd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fd0805800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fd0805800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fd0805800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fd0805800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 19:48:21 compute-0 ceph-osd[88624]: bdev(0x560fd0805800 /var/lib/ceph/osd/ceph-0/block) close
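
Note: the two warnings repeated in the bdev lines above are benign. F_SET_FILE_RW_HINT is the write-lifetime-hint fcntl, which this device-mapper volume rejects with EINVAL, and bluestore keeps its 4096-byte block size even though the LV advertises 512-byte logical sectors ("using bdev_block_size 4096 anyway"). The advertised geometry can be cross-checked outside Ceph with:

    # Logical sector size, physical block size, and total size of the OSD LV:
    blockdev --getss --getpbsz --getsize64 /dev/mapper/ceph_vg0-ceph_lv0
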
Nov 24 19:48:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 24 19:48:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 19:48:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:21 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:21 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 24 19:48:21 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 24 19:48:22 compute-0 sudo[88637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:22 compute-0 sudo[88637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:22 compute-0 sudo[88637]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:22 compute-0 sudo[88662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:22 compute-0 sudo[88662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:22 compute-0 sudo[88662]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fcf9cd800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 19:48:22 compute-0 sudo[88687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:22 compute-0 sudo[88687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:22 compute-0 sudo[88687]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:22 compute-0 sudo[88714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:48:22 compute-0 sudo[88714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
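
Note: the sudo command above shows how the cephadm mgr module drives deployments: it ships a checksum-named copy of the cephadm script into /var/lib/ceph/<fsid>/ and runs it as root with "_orch deploy". The same binary can be invoked read-only to list the daemons cephadm knows about on this host (path copied verbatim from the log; "ls" is a standard cephadm subcommand):

    sudo /bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d ls
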
Nov 24 19:48:22 compute-0 ceph-osd[88624]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 24 19:48:22 compute-0 ceph-osd[88624]: load: jerasure load: lrc 
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.774830556 +0000 UTC m=+0.072959472 container create 8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:22 compute-0 systemd[1]: Started libpod-conmon-8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8.scope.
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.745236583 +0000 UTC m=+0.043365549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.879355744 +0000 UTC m=+0.177484670 container init 8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pasteur, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.890425665 +0000 UTC m=+0.188554571 container start 8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:48:22 compute-0 affectionate_pasteur[88803]: 167 167
Nov 24 19:48:22 compute-0 systemd[1]: libpod-8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8.scope: Deactivated successfully.
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.903476317 +0000 UTC m=+0.201605233 container attach 8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.904059028 +0000 UTC m=+0.202187944 container died 8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pasteur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 19:48:22 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:22 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:22 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 19:48:22 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:22 compute-0 ceph-mon[75677]: Deploying daemon osd.1 on compute-0
Nov 24 19:48:22 compute-0 ceph-mon[75677]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6387bb6c6634b6f5634d667f38528d8768e720ac6c5cbeb8ed2f34e8ab6711aa-merged.mount: Deactivated successfully.
Nov 24 19:48:22 compute-0 podman[88783]: 2025-11-24 19:48:22.957213165 +0000 UTC m=+0.255342051 container remove 8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 19:48:22 compute-0 ceph-osd[88624]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 19:48:22 compute-0 ceph-osd[88624]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0886c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluefs mount
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluefs mount shared_bdev_used = 0
Nov 24 19:48:22 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 19:48:22 compute-0 systemd[1]: libpod-conmon-8d42d8ab0e2315863db46a7396e17a672e01c5284a354af78b44992326e06ed8.scope: Deactivated successfully.
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: Git sha 0
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: DB SUMMARY
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: DB Session ID:  Z1I1IQH7M7508W03A4M1
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 19:48:22 compute-0 ceph-osd[88624]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                     Options.env: 0x560fd0857c70
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                Options.info_log: 0x560fcfa548a0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.write_buffer_manager: 0x560fd0960460
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.row_cache: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                              Options.wal_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.wal_compression: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_background_jobs: 4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Compression algorithms supported:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kZSTD supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
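The [m-0] dump above (repeated essentially verbatim for each shard) shows the tuning this OSD applies to its omap column families: 16 MiB write buffers merged six at a time, LZ4 at every level, level-style compaction prioritized by kMinOverlappingRatio, and block-based tables with 4 KiB blocks whose index and filter blocks are cached. Every column family also reports the same block_cache pointer (0x560fcfa411f0), so all shards share a single ~460 MiB BinnedLRUCache. As a rough sketch, the logged values map onto the stock RocksDB C++ API as below; BinnedLRUCache is Ceph's own cache implementation, so plain NewLRUCache stands in for it here, and the bloom filter's bits-per-key is an assumption (the log only says "bloomfilter").

    #include <memory>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Approximate the per-shard options logged above with stock RocksDB.
    rocksdb::ColumnFamilyOptions MakeOmapShardOptions(
        const std::shared_ptr<rocksdb::Cache>& shared_cache) {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;              // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;          // 67108864
      cf.max_bytes_for_level_base = 1ULL << 30;     // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.soft_pending_compaction_bytes_limit = 64ULL << 30;   // 64 GiB
      cf.hard_pending_compaction_bytes_limit = 256ULL << 30;  // 256 GiB
      cf.ttl = 2592000;                             // 30 days

      rocksdb::BlockBasedTableOptions table;
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.format_version = 5;
      table.whole_key_filtering = true;
      table.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10));       // bits/key assumed
      table.block_cache = shared_cache;             // one cache for all shards
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return cf;
    }

    // The shared cache the log implies (capacity 483183820, 4 shard bits):
    //   auto cache = rocksdb::NewLRUCache(483183820, 4);
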
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
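The shard names themselves come from BlueStore's RocksDB column-family sharding: under the default bluestore_rocksdb_cfs spec, m(3) and p(3) expand to three shards each for the per-pool and per-PG omap key prefixes, which is presumably why m-0 through m-2 appear here followed by the p-* family. Each shard is an ordinary RocksDB column family opened against the same options. A sketch, reusing the MakeOmapShardOptions helper from above; the path and the "default" placement are illustrative assumptions, not the OSD's real layout:

    #include <rocksdb/cache.h>
    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    // Open a DB whose column families mirror the shard names in the log.
    rocksdb::DB* OpenShardedDb(const std::string& path) {
      std::shared_ptr<rocksdb::Cache> cache =
          rocksdb::NewLRUCache(483183820, 4);

      std::vector<rocksdb::ColumnFamilyDescriptor> descriptors;
      for (const char* name :
           {"default", "m-0", "m-1", "m-2", "p-0", "p-1"}) {
        descriptors.emplace_back(name, MakeOmapShardOptions(cache));
      }

      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, path, descriptors, &handles, &db);
      return s.ok() ? db : nullptr;
    }
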
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
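Two figures worth working out from these dumps. With level_compaction_dynamic_level_bytes = 0 and every max_bytes_for_level_multiplier_addtl[i] = 1, RocksDB sizes level N at max_bytes_for_level_base * 8^(N-1): 1 GiB at L1 growing to 32 TiB at L6. On the write path, a 16 MiB memtable flushed in groups of six means roughly 96 MiB per flush, and the 64-buffer cap allows up to 1 GiB of unflushed data per column family. A self-contained check of that arithmetic, using only values copied from the dump:

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Values copied from the option dump above.
      const double level_base = 1073741824.0;  // max_bytes_for_level_base
      const double mult       = 8.0;           // max_bytes_for_level_multiplier
      const int    num_levels = 7;             // Options.num_levels

      // With dynamic level bytes off and all addtl[] factors at 1:
      //   target(N) = level_base * mult^(N-1)
      double target = level_base;
      for (int level = 1; level < num_levels; ++level) {
        std::printf("L%d target: %.0f GiB\n", level, target / (1 << 30));
        target *= mult;
      }

      // Memtable budget per column family.
      const uint64_t write_buffer = 16777216;  // write_buffer_size (16 MiB)
      std::printf("per-flush:  %llu MiB\n",
                  (unsigned long long)(write_buffer * 6 >> 20));   // merge 6
      std::printf("ceiling:    %llu MiB\n",
                  (unsigned long long)(write_buffer * 64 >> 20));  // max 64
      return 0;
    }
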
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa542c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa54240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa41090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa54240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa41090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa54240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa41090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
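The twelve column families recovered above (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P) match BlueStore's sharded RocksDB layout, which is normally derived from the bluestore_rocksdb_cfs option; treating that mapping as an assumption, the short Python sketch below simply groups the names from this log by shard prefix to make the layout visible.

    # Sketch: group the column family names recovered in the manifest above
    # by shard prefix. The names are copied from the log; the grouping and
    # the link to bluestore_rocksdb_cfs are illustrative assumptions.
    from collections import defaultdict

    cfs = ["default", "m-0", "m-1", "m-2", "p-0", "p-1", "p-2",
           "O-0", "O-1", "O-2", "L", "P"]
    shards = defaultdict(list)
    for cf in cfs:
        shards[cf.split("-", 1)[0]].append(cf)
    print(dict(shards))
    # {'default': ['default'], 'm': ['m-0', 'm-1', 'm-2'],
    #  'p': ['p-0', 'p-1', 'p-2'], 'O': ['O-0', 'O-1', 'O-2'],
    #  'L': ['L'], 'P': ['P']}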
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 83fc463d-9f7b-41e6-988a-0faf96349f39
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703049988, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703050300, "job": 1, "event": "recovery_finished"}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
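The _open_db line above records, in one place, the full RocksDB option string BlueStore applied (LZ4 compression, 16 MiB write buffers, level-style compaction, and so on); in Ceph this string is normally supplied through the bluestore_rocksdb_options config option. As a minimal sketch, the Python below parses the logged string into a dict so individual settings can be checked; the string is copied verbatim from the line above, everything else is illustrative.

    # Sketch: parse the RocksDB option string logged by _open_db above into
    # a {key: value} dict. The string is verbatim from the log line; the
    # parsing itself is an illustrative helper, not Ceph code.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,"
                "compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,"
                "max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,"
                "compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,"
                "writable_file_max_buffer_size=0")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["compression"] == "kLZ4Compression"
    assert opts["write_buffer_size"] == "16777216"   # 16 MiB, as dumped above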
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: freelist init
Nov 24 19:48:23 compute-0 ceph-osd[88624]: freelist _read_cfg
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
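The allocator line above is internally consistent: capacity 0x4ffc00000 is 21,470,642,176 bytes (about 20 GiB, matching the bdev open size a few lines below), and free 0x4ffbfd000 is exactly 0x3000 (12 KiB) less, which is why the reported fragmentation is only 1.9e-07. A quick check, with the hex values copied verbatim from the line:

    # Sanity-check the _init_alloc figures above (values verbatim from the log).
    capacity = 0x4ffc00000
    free = 0x4ffbfd000
    assert capacity == 21470642176        # same size the bdev open reports below
    print(capacity / 2**30)               # -> ~19.996, i.e. the "20 GiB" in the log
    print(hex(capacity - free), capacity - free)  # -> 0x3000, 12288 bytes allocated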
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluefs umount
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bdev(0x560fd0887400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluefs mount
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluefs mount shared_bdev_used = 4718592
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Git sha 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: DB SUMMARY
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: DB Session ID:  Z1I1IQH7M7508W03A4M0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
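The DB SUMMARY above enumerates the on-disk state RocksDB found at open: one MANIFEST (1007 bytes), a single SST file in db, none in db.slow, and one 5093-byte WAL in db.wal. On a BlueStore OSD these "files" live inside BlueFS rather than on an ordinary filesystem, so the sketch below, which prints a similar listing with os.listdir, only applies to a DB copy exported to a regular directory; the path used is hypothetical.

    # Sketch: print a DB SUMMARY-style listing for a RocksDB directory that
    # has been exported to a plain filesystem. /tmp/exported-db is a
    # hypothetical path; BlueFS-resident files cannot be read this way.
    import os

    db_dir = "/tmp/exported-db"
    for name in sorted(os.listdir(db_dir)):
        path = os.path.join(db_dir, name)
        print(f"{name}: {os.path.getsize(path)} bytes")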
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                     Options.env: 0x560fd0a08460
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                Options.info_log: 0x560fcfa54300
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.write_buffer_manager: 0x560fd09606e0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.row_cache: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                              Options.wal_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.wal_compression: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_background_jobs: 4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Compression algorithms supported:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kZSTD supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
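The block above is one complete per-column-family option dump from the OSD's embedded RocksDB; BlueStore prints one such block for every column family it opens. As a rough guide to what the key lines mean, here is a minimal C++ sketch that rebuilds the same tunables with the stock RocksDB API. It is illustrative only, not BlueStore's actual code: Ceph derives these values internally (normally via its bluestore_rocksdb_options setting) and substitutes its own BinnedLRUCache for the block cache, and the helper name and bloom bits-per-key are assumptions.

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Values copied from the log lines above; helper name is ours.
    rocksdb::ColumnFamilyOptions make_cf_options() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16ull << 20;            // 16777216: 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;       // merge 6 memtables per flush
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.target_file_size_base = 64ull << 20;        // 67108864
      cf.max_bytes_for_level_base = 1ull << 30;      // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.force_consistency_checks = true;
      cf.ttl = 2592000;                              // 30 days

      rocksdb::BlockBasedTableOptions table;
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.format_version = 5;
      table.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      table.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return cf;
    }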
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
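The [m-2] dump ends here. The names m-*, p-*, O-* are BlueStore's sharded column families, each opened with an essentially identical option set (only the O-* shard gets its own block cache, visible further down). As a sketch of how such pre-existing column families are opened with the plain RocksDB API, reusing make_cf_options() from the earlier sketch; the path and shard list are illustrative, not read from this host:

    #include <rocksdb/db.h>
    #include <vector>

    rocksdb::ColumnFamilyOptions make_cf_options();  // from the sketch above

    int main() {
      // Opening an existing DB requires naming every column family it contains.
      std::vector<rocksdb::ColumnFamilyDescriptor> descriptors;
      descriptors.emplace_back(rocksdb::kDefaultColumnFamilyName, make_cf_options());
      for (const char* name : {"m-0", "m-1", "m-2", "p-0", "p-1", "p-2", "O-0"})
        descriptors.emplace_back(name, make_cf_options());  // shard names from the log

      std::vector<rocksdb::ColumnFamilyHandle*> handles;  // returned in descriptor order
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(rocksdb::DBOptions(),
                                            "/var/lib/ceph/osd/ceph-N/db",  // illustrative
                                            descriptors, &handles, &db);
      return s.ok() ? 0 : 1;
    }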
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
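Worth noting across all of these dumps: level_compaction_dynamic_level_bytes is 0, so level targets are static and grow geometrically from max_bytes_for_level_base by max_bytes_for_level_multiplier (the addtl[] factors are all 1). A quick sketch of the per-level capacities implied by the logged values:

    #include <cstdio>

    int main() {
      const double base = 1073741824.0;  // max_bytes_for_level_base (1 GiB)
      const double mult = 8.0;           // max_bytes_for_level_multiplier
      double target = base;
      for (int level = 1; level <= 6; ++level) {  // num_levels = 7 -> L1..L6
        std::printf("L%d target: %.0f bytes (%.0f GiB)\n",
                    level, target, target / 1073741824.0);
        target *= mult;
      }
      return 0;
    }

This prints targets of 1, 8, 64, 512, 4096, and 32768 GiB for L1 through L6.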
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
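Each dump also registers the CompactOnDeletionCollector table-properties collector (sliding window 32768, deletion trigger 16384, deletion ratio 0). That collector marks an SST file for compaction as soon as any window of 32768 consecutive entries contains at least 16384 tombstones, which keeps delete-heavy workloads from accumulating dead keys. A minimal sketch of wiring it up with RocksDB's stock factory; the helper name is ours:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    void add_delete_triggered_compaction(rocksdb::ColumnFamilyOptions& cf) {
      // Matches the logged parameters; deletion_ratio 0 disables the
      // whole-file ratio trigger, leaving only the sliding-window check.
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }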
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a4a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa411f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
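The stall-related knobs repeated in every dump bound write latency rather than throughput: RocksDB throttles foreground writes once estimated compaction debt exceeds soft_pending_compaction_bytes_limit (64 GiB here) or L0 reaches 20 files, and blocks them entirely at the hard limit (256 GiB) or 36 L0 files. A sketch of setting the same thresholds, values copied from the log:

    #include <rocksdb/options.h>

    void set_stall_limits(rocksdb::ColumnFamilyOptions& cf) {
      cf.soft_pending_compaction_bytes_limit = 64ull << 30;   // 68719476736
      cf.hard_pending_compaction_bytes_limit = 256ull << 30;  // 274877906944
      cf.level0_slowdown_writes_trigger = 20;                 // begin throttling
      cf.level0_stop_writes_trigger = 36;                     // full write stall
    }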
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa41090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa41090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 podman[89030]: 2025-11-24 19:48:23.30589394 +0000 UTC m=+0.084073924 container create 7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560fcfa4a460)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560fcfa41090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 83fc463d-9f7b-41e6-988a-0faf96349f39
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703284423, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703288470, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "83fc463d-9f7b-41e6-988a-0faf96349f39", "db_session_id": "Z1I1IQH7M7508W03A4M0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703291527, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "83fc463d-9f7b-41e6-988a-0faf96349f39", "db_session_id": "Z1I1IQH7M7508W03A4M0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703295003, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013703, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "83fc463d-9f7b-41e6-988a-0faf96349f39", "db_session_id": "Z1I1IQH7M7508W03A4M0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013703296888, "job": 1, "event": "recovery_finished"}
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560fd0a15c00
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: DB pointer 0x560fd0949a00
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 24 19:48:23 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 19:48:23 compute-0 ceph-osd[88624]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 19:48:23 compute-0 ceph-osd[88624]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 19:48:23 compute-0 ceph-osd[88624]: _get_class not permitted to load lua
Nov 24 19:48:23 compute-0 ceph-osd[88624]: _get_class not permitted to load sdk
Nov 24 19:48:23 compute-0 ceph-osd[88624]: _get_class not permitted to load test_remote_reads
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 load_pgs
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 load_pgs opened 0 pgs
Nov 24 19:48:23 compute-0 ceph-osd[88624]: osd.0 0 log_to_monitors true
Nov 24 19:48:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:48:23.330+0000 7f2cab13c740 -1 osd.0 0 log_to_monitors true
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 19:48:23 compute-0 systemd[1]: Started libpod-conmon-7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc.scope.
Nov 24 19:48:23 compute-0 podman[89030]: 2025-11-24 19:48:23.277049989 +0000 UTC m=+0.055230033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00b5a008ff7898553b8d78e5e788640cfda174a77fc4ab5930d69648d024e5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00b5a008ff7898553b8d78e5e788640cfda174a77fc4ab5930d69648d024e5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00b5a008ff7898553b8d78e5e788640cfda174a77fc4ab5930d69648d024e5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00b5a008ff7898553b8d78e5e788640cfda174a77fc4ab5930d69648d024e5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00b5a008ff7898553b8d78e5e788640cfda174a77fc4ab5930d69648d024e5f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:23 compute-0 podman[89030]: 2025-11-24 19:48:23.423765005 +0000 UTC m=+0.201945039 container init 7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:48:23 compute-0 podman[89030]: 2025-11-24 19:48:23.431448241 +0000 UTC m=+0.209628225 container start 7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 19:48:23 compute-0 podman[89030]: 2025-11-24 19:48:23.435266643 +0000 UTC m=+0.213446627 container attach 7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:23 compute-0 ceph-mon[75677]: from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:23 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:23 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:23 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:23 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test[89260]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 19:48:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test[89260]:                             [--no-systemd] [--no-tmpfs]
Nov 24 19:48:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test[89260]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 19:48:24 compute-0 systemd[1]: libpod-7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc.scope: Deactivated successfully.
Nov 24 19:48:24 compute-0 podman[89030]: 2025-11-24 19:48:24.018214664 +0000 UTC m=+0.796394658 container died 7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 19:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f00b5a008ff7898553b8d78e5e788640cfda174a77fc4ab5930d69648d024e5f-merged.mount: Deactivated successfully.
Nov 24 19:48:24 compute-0 podman[89030]: 2025-11-24 19:48:24.0878056 +0000 UTC m=+0.865985584 container remove 7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:24 compute-0 systemd[1]: libpod-conmon-7794cda5be530d560186a4ea13735d29ded70cc1ac9a743281b9ec5ee93b4bdc.scope: Deactivated successfully.
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:48:24
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] No pools available
Nov 24 19:48:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 19:48:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:48:24 compute-0 systemd[1]: Reloading.
Nov 24 19:48:24 compute-0 systemd-rc-local-generator[89326]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:48:24 compute-0 systemd-sysv-generator[89329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:48:24 compute-0 systemd[1]: Reloading.
Nov 24 19:48:24 compute-0 systemd-sysv-generator[89372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:48:24 compute-0 systemd-rc-local-generator[89366]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0 done with init, starting boot process
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0 start_boot
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 19:48:24 compute-0 ceph-osd[88624]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 24 19:48:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 19:48:24 compute-0 ceph-mon[75677]: osdmap e7: 3 total, 0 up, 3 in
Nov 24 19:48:24 compute-0 ceph-mon[75677]: from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mon[75677]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1291375232; not ready for session (expect reconnect)
Nov 24 19:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:24 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:25 compute-0 systemd[1]: Starting Ceph osd.1 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:48:25 compute-0 podman[89425]: 2025-11-24 19:48:25.412563297 +0000 UTC m=+0.067682406 container create bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:25 compute-0 podman[89425]: 2025-11-24 19:48:25.379439836 +0000 UTC m=+0.034558985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6365dd4dfc03a096a50b77a332d532a7a164da1304708053cc2b9f9184966b97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6365dd4dfc03a096a50b77a332d532a7a164da1304708053cc2b9f9184966b97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6365dd4dfc03a096a50b77a332d532a7a164da1304708053cc2b9f9184966b97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6365dd4dfc03a096a50b77a332d532a7a164da1304708053cc2b9f9184966b97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6365dd4dfc03a096a50b77a332d532a7a164da1304708053cc2b9f9184966b97/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:25 compute-0 podman[89425]: 2025-11-24 19:48:25.535983173 +0000 UTC m=+0.191102332 container init bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 24 19:48:25 compute-0 podman[89425]: 2025-11-24 19:48:25.549081587 +0000 UTC m=+0.204200706 container start bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 19:48:25 compute-0 podman[89425]: 2025-11-24 19:48:25.553296125 +0000 UTC m=+0.208415304 container attach bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:48:25 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1291375232; not ready for session (expect reconnect)
Nov 24 19:48:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:25 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:25 compute-0 ceph-mon[75677]: from='osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 19:48:25 compute-0 ceph-mon[75677]: osdmap e8: 3 total, 0 up, 3 in
Nov 24 19:48:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 19:48:26 compute-0 bash[89425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 19:48:26 compute-0 bash[89425]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 19:48:26 compute-0 bash[89425]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 19:48:26 compute-0 bash[89425]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:26 compute-0 bash[89425]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 19:48:26 compute-0 bash[89425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 19:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate[89441]: --> ceph-volume raw activate successful for osd ID: 1
Nov 24 19:48:26 compute-0 bash[89425]: --> ceph-volume raw activate successful for osd ID: 1
Nov 24 19:48:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:26 compute-0 systemd[1]: libpod-bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6.scope: Deactivated successfully.
Nov 24 19:48:26 compute-0 podman[89425]: 2025-11-24 19:48:26.769330416 +0000 UTC m=+1.424449525 container died bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:26 compute-0 systemd[1]: libpod-bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6.scope: Consumed 1.238s CPU time.
Nov 24 19:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6365dd4dfc03a096a50b77a332d532a7a164da1304708053cc2b9f9184966b97-merged.mount: Deactivated successfully.
Nov 24 19:48:26 compute-0 podman[89425]: 2025-11-24 19:48:26.869294089 +0000 UTC m=+1.524413198 container remove bfdd7bcd79cad7e6d62bdcd999d8764c63bdfbe9ccf09fd3caeeb0bbdcecabf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:26 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1291375232; not ready for session (expect reconnect)
Nov 24 19:48:26 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:26 compute-0 ceph-mon[75677]: purged_snaps scrub starts
Nov 24 19:48:26 compute-0 ceph-mon[75677]: purged_snaps scrub ok
Nov 24 19:48:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:26 compute-0 ceph-mon[75677]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:27 compute-0 podman[89619]: 2025-11-24 19:48:27.215325231 +0000 UTC m=+0.072595897 container create 392320b68810e320de3e792990f44c5b755fc0c45e608682b2b54bd76d0e3e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 19:48:27 compute-0 podman[89619]: 2025-11-24 19:48:27.181468997 +0000 UTC m=+0.038739763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69989934b04b2b57fd752483a968c3ddd4b4971ec475f361b9c1afb35652d472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69989934b04b2b57fd752483a968c3ddd4b4971ec475f361b9c1afb35652d472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69989934b04b2b57fd752483a968c3ddd4b4971ec475f361b9c1afb35652d472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69989934b04b2b57fd752483a968c3ddd4b4971ec475f361b9c1afb35652d472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69989934b04b2b57fd752483a968c3ddd4b4971ec475f361b9c1afb35652d472/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:27 compute-0 podman[89619]: 2025-11-24 19:48:27.318533095 +0000 UTC m=+0.175803791 container init 392320b68810e320de3e792990f44c5b755fc0c45e608682b2b54bd76d0e3e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:27 compute-0 podman[89619]: 2025-11-24 19:48:27.328407007 +0000 UTC m=+0.185677703 container start 392320b68810e320de3e792990f44c5b755fc0c45e608682b2b54bd76d0e3e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 19:48:27 compute-0 bash[89619]: 392320b68810e320de3e792990f44c5b755fc0c45e608682b2b54bd76d0e3e59
Nov 24 19:48:27 compute-0 systemd[1]: Started Ceph osd.1 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:48:27 compute-0 ceph-osd[89640]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:48:27 compute-0 ceph-osd[89640]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 19:48:27 compute-0 ceph-osd[89640]: pidfile_write: ignore empty --pid-file
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba392f9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba392f9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba392f9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba392f9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a131800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a131800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a131800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a131800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a131800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 19:48:27 compute-0 sudo[88714]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 24 19:48:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 19:48:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:27 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 24 19:48:27 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 24 19:48:27 compute-0 sudo[89653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:27 compute-0 sudo[89653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:27 compute-0 sudo[89653]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:27 compute-0 sudo[89678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:27 compute-0 sudo[89678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:27 compute-0 sudo[89678]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba392f9800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 19:48:27 compute-0 sudo[89703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:27 compute-0 sudo[89703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:27 compute-0 sudo[89703]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:27 compute-0 sudo[89730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:48:27 compute-0 sudo[89730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:27 compute-0 ceph-osd[89640]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 24 19:48:27 compute-0 ceph-osd[89640]: load: jerasure load: lrc 
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:27 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 19:48:27 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1291375232; not ready for session (expect reconnect)
Nov 24 19:48:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:27 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.204 iops: 7476.157 elapsed_sec: 0.401
Nov 24 19:48:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : OSD bench result of 7476.156779 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 0 waiting for initial osdmap
Nov 24 19:48:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:48:28.026+0000 7f2ca70bc640 -1 osd.0 0 waiting for initial osdmap
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 19:48:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:48:28.055+0000 7f2ca26e4640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 19:48:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:28 compute-0 sudo[89832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uomfncspqrkrhoonlumubvjcpsqadyjm ; /usr/bin/python3'
Nov 24 19:48:28 compute-0 sudo[89832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.359609109 +0000 UTC m=+0.067958741 container create 691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:28 compute-0 systemd[1]: Started libpod-conmon-691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7.scope.
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.330815469 +0000 UTC m=+0.039165151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 19:48:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:28 compute-0 ceph-mon[75677]: Deploying daemon osd.2 on compute-0
Nov 24 19:48:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:28 compute-0 ceph-mon[75677]: OSD bench result of 7476.156779 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 24 19:48:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232] boot
Nov 24 19:48:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 19:48:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:28 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:28 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:28 compute-0 ceph-osd[88624]: osd.0 9 state: booting -> active
Nov 24 19:48:28 compute-0 ceph-osd[89640]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.471297403 +0000 UTC m=+0.179647105 container init 691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs mount
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs mount shared_bdev_used = 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.482670589 +0000 UTC m=+0.191020201 container start 691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.486329258 +0000 UTC m=+0.194678950 container attach 691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Git sha 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DB SUMMARY
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DB Session ID:  7QRAYLCMSF1WUGKCCYRM
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                     Options.env: 0x55ba3a183c70
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                Options.info_log: 0x55ba393808a0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.write_buffer_manager: 0x55ba3a296460
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.row_cache: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                              Options.wal_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.wal_compression: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_background_jobs: 4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Compression algorithms supported:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kZSTD supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
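The options banner that ends here repeats, value for value, for each of the sharded BlueStore column families dumped below (m-0, m-1, m-2, p-0, p-1); only the name in the [db/column_family.cc:630] header line changes. A quick way to verify that the shards really are configured identically is to pull the "Options.key: value" pairs out of the saved journal and diff them per family. The sketch below is a minimal, hypothetical helper, assuming this journal excerpt has been saved to a local file (the file name is made up) and that the line layout matches what ceph-osd prints here.

    #!/usr/bin/env python3
    # Sketch: diff RocksDB option dumps across BlueStore column families.
    # Assumes the journal lines were saved to a file; "osd-journal.txt"
    # below is a hypothetical name, not anything produced by ceph-osd.
    import re
    from collections import defaultdict

    FAMILY = re.compile(r"Options for column family \[([^\]]+)\]")
    OPTION = re.compile(r"(Options\.[\w.\[\]]+):\s*(.+?)\s*$")

    def parse(path):
        opts = defaultdict(dict)   # family name -> {option: value}
        family = None
        for line in open(path):
            m = FAMILY.search(line)
            if m:                  # "--- Options for column family [m-0]:"
                family = m.group(1)
                continue
            m = OPTION.search(line)
            if m and family is not None:
                opts[family][m.group(1)] = m.group(2)
        return opts

    if __name__ == "__main__":
        opts = parse("osd-journal.txt")
        families = sorted(opts)
        base = families[0]
        for fam in families[1:]:
            for key, val in opts[fam].items():
                if opts[base].get(key) != val:
                    print(f"{fam}: {key} = {val} "
                          f"(vs {base}: {opts[base].get(key)})")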
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
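Taken together, the memtable settings in the m-0 dump above imply a sizeable in-memory ceiling per column family. The figures below are straight arithmetic on the logged values (16777216-byte write buffers, up to 64 of them, flushes merging at least 6); the worst case is a theoretical upper bound, and in practice flushes would normally happen long before 64 buffers accumulate.

    # Back-of-the-envelope memtable budget implied by the dump above
    # (per column family; all inputs taken directly from the log).
    write_buffer_size = 16_777_216          # 16 MiB per memtable
    max_write_buffer_number = 64            # at most 64 memtables resident
    min_write_buffer_number_to_merge = 6    # a flush merges at least 6

    per_flush = write_buffer_size * min_write_buffer_number_to_merge
    worst_case = write_buffer_size * max_write_buffer_number
    print(f"typical flush size : {per_flush / 2**20:.0f} MiB")   # 96 MiB
    print(f"worst-case resident: {worst_case / 2**30:.0f} GiB")  # 1 GiB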
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 19:48:28 compute-0 gracious_jemison[89849]: 167 167
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 systemd[1]: libpod-691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7.scope: Deactivated successfully.
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.495184493 +0000 UTC m=+0.203534175 container died 691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
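With level_compaction_dynamic_level_bytes at 0, the leveled-compaction targets above grow geometrically from max_bytes_for_level_base by max_bytes_for_level_multiplier per level (the addtl factors are all 1 here), which is RocksDB's sizing rule for statically sized levels. The sketch below just evaluates that rule with the logged numbers (1 GiB base, multiplier 8, 7 levels, 64 MiB target files with multiplier 1) to show the implied per-level capacities; it illustrates the configured targets, not what is actually on disk.

    # Level capacities implied by the leveled-compaction settings above.
    base = 1_073_741_824        # max_bytes_for_level_base: 1 GiB
    mult = 8.0                  # max_bytes_for_level_multiplier
    num_levels = 7
    file_size = 67_108_864      # target_file_size_base: 64 MiB

    for lvl in range(1, num_levels):
        cap = base * mult ** (lvl - 1)
        print(f"L{lvl}: {cap / 2**30:>6.0f} GiB"
              f"  (~{cap / file_size:>5.0f} files)")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, ...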
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
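Every table_factory dump in this section reports the same block_cache pointer (0x55ba3936d1f0) and the same BinnedLRUCache parameters, so the column families share one cache rather than each owning its own 460 MiB. The capacity, 483183820 bytes, is exactly 0.45 * 2^30, which suggests it was derived as a fraction of a 1 GiB budget; that derivation is an inference, not something stated in the log. The arithmetic below shows the total and per-shard sizes implied by num_shard_bits = 4.

    # Shared block-cache geometry from the table_factory dumps above.
    capacity = 483_183_820      # bytes; exactly 0.45 * 2**30
    num_shard_bits = 4          # cache is split into 2**4 shards

    shards = 2 ** num_shard_bits
    print(f"total    : {capacity / 2**20:.1f} MiB")            # 460.8 MiB
    print(f"per shard: {capacity / shards / 2**20:.1f} MiB")   # 28.8 MiB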
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
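For readability, the throttling-related values repeated in these dumps convert to: L0 compaction triggered at 8 files, write slowdown at 20, full write stop at 36; soft and hard pending-compaction-bytes limits of 64 GiB and 256 GiB; and a ttl of 30 days. The snippet below is just those unit conversions, included so the large byte counts in the log are easier to sanity-check.

    # Unit conversions for the throttling thresholds logged above.
    SECONDS_PER_DAY = 86_400

    l0_trigger, l0_slowdown, l0_stop = 8, 20, 36   # L0 file counts
    soft_debt = 68_719_476_736                     # pending-compaction bytes
    hard_debt = 274_877_906_944                    # hard limit: writes stop
    ttl = 2_592_000                                # seconds

    print(f"L0 files: compact at {l0_trigger}, "
          f"slow writes at {l0_slowdown}, stop at {l0_stop}")
    print(f"compaction debt soft/hard: {soft_debt / 2**30:.0f} / "
          f"{hard_debt / 2**30:.0f} GiB")          # 64 / 256 GiB
    print(f"ttl: {ttl / SECONDS_PER_DAY:.0f} days")  # 30 days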
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 python3[89843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba393802c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dc0904c3ed2de554b1ec7cb73dffb609cbbd5043033c9ca08d4c8108af7b743-merged.mount: Deactivated successfully.
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b0ec6488-a465-49bf-bd25-b5ecf810b929
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708535542, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708536030, "job": 1, "event": "recovery_finished"}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: freelist init
Nov 24 19:48:28 compute-0 ceph-osd[89640]: freelist _read_cfg
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs umount
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 19:48:28 compute-0 podman[89828]: 2025-11-24 19:48:28.55494561 +0000 UTC m=+0.263295252 container remove 691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:48:28 compute-0 systemd[1]: libpod-conmon-691a79f8e311ec9746424b987787f229fb215b15638dfddc79befe214b000be7.scope: Deactivated successfully.
Nov 24 19:48:28 compute-0 podman[90022]: 2025-11-24 19:48:28.589777378 +0000 UTC m=+0.045713717 container create e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a (image=quay.io/ceph/ceph:v18, name=hungry_bose, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:48:28 compute-0 systemd[1]: Started libpod-conmon-e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a.scope.
Nov 24 19:48:28 compute-0 podman[90022]: 2025-11-24 19:48:28.567021867 +0000 UTC m=+0.022958216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52738d51ae2af3af4a372a3e0a624ed52cfade8f1924dca8de53e0f348556fd3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52738d51ae2af3af4a372a3e0a624ed52cfade8f1924dca8de53e0f348556fd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52738d51ae2af3af4a372a3e0a624ed52cfade8f1924dca8de53e0f348556fd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:28 compute-0 podman[90022]: 2025-11-24 19:48:28.703472825 +0000 UTC m=+0.159409194 container init e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a (image=quay.io/ceph/ceph:v18, name=hungry_bose, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:28 compute-0 podman[90022]: 2025-11-24 19:48:28.714424204 +0000 UTC m=+0.170360553 container start e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a (image=quay.io/ceph/ceph:v18, name=hungry_bose, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 19:48:28 compute-0 podman[90022]: 2025-11-24 19:48:28.718768745 +0000 UTC m=+0.174705154 container attach e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a (image=quay.io/ceph/ceph:v18, name=hungry_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bdev(0x55ba3a1b3400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs mount
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluefs mount shared_bdev_used = 4718592
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Git sha 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DB SUMMARY
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DB Session ID:  7QRAYLCMSF1WUGKCCYRN
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                     Options.env: 0x55ba3a33e3f0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                Options.info_log: 0x55ba39380600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.write_buffer_manager: 0x55ba3a296460
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.row_cache: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                              Options.wal_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.wal_compression: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_background_jobs: 4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Compression algorithms supported:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kZSTD supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
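The block above (and the identical blocks that follow for column families m-2, p-0, p-1 and p-2) records one consistent tuning: 16 MiB write buffers with up to 64 memtables and 6 merged per flush, LZ4-compressed level compaction that triggers at 8 L0 files (slowdown at 20, stop at 36), 64 MiB target SSTs over a 1 GiB level base growing 8x per level, and a shared block cache of 483183820 bytes (45% of 1 GiB) holding 4 KiB blocks behind a whole-key bloom filter. Below is a minimal sketch of how those logged values map onto the RocksDB C++ API, not Ceph's actual BlueStore setup code: the stock sharded LRU cache stands in for Ceph's out-of-tree BinnedLRUCache, the /tmp path is hypothetical, and the bloom filter's bits-per-key (not recorded in the log) is assumed to be the RocksDB default of 10.

    #include <cassert>
    #include <rocksdb/cache.h>
    #include <rocksdb/db.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    int main() {
      rocksdb::Options opts;
      opts.create_if_missing = true;

      // Memtables, as logged: 16 MiB write buffers, up to 64 in memory,
      // 6 merged together per flush.
      opts.write_buffer_size = 16 * 1024 * 1024;        // 16777216
      opts.max_write_buffer_number = 64;
      opts.min_write_buffer_number_to_merge = 6;

      // Level-style compaction with the logged triggers and sizing.
      opts.compression = rocksdb::kLZ4Compression;
      opts.num_levels = 7;
      opts.level0_file_num_compaction_trigger = 8;
      opts.level0_slowdown_writes_trigger = 20;
      opts.level0_stop_writes_trigger = 36;
      opts.target_file_size_base = 64ULL << 20;         // 67108864
      opts.max_bytes_for_level_base = 1ULL << 30;       // 1073741824
      opts.max_bytes_for_level_multiplier = 8.0;

      // Block-based tables: 4 KiB blocks, index/filter blocks kept in the
      // cache, whole-key bloom filtering. A stock sharded LRU cache stands
      // in for Ceph's BinnedLRUCache (capacity and shard bits from the log).
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.whole_key_filtering = true;
      t.format_version = 5;
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/rocksdb-options-demo", &db);
      assert(s.ok());
      delete db;
    }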
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba39380380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ba3936d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b0ec6488-a465-49bf-bd25-b5ecf810b929
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708800324, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708806968, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013708, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b0ec6488-a465-49bf-bd25-b5ecf810b929", "db_session_id": "7QRAYLCMSF1WUGKCCYRN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708811154, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013708, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b0ec6488-a465-49bf-bd25-b5ecf810b929", "db_session_id": "7QRAYLCMSF1WUGKCCYRN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708815239, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013708, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b0ec6488-a465-49bf-bd25-b5ecf810b929", "db_session_id": "7QRAYLCMSF1WUGKCCYRN", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013708819289, "job": 1, "event": "recovery_finished"}
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ba394da000
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: DB pointer 0x55ba3a275a00
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 24 19:48:28 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
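
The blocks above are RocksDB statistics that the OSD's embedded BlueStore key-value store dumps once per column family ([default], [m-0]..[m-2], [p-0]..[p-2], [O-0]..[O-2], [L], [P]); each block closes with a "File Read Latency Histogram By Level" header. A minimal parsing sketch, assuming the dump has been saved to a plain-text file (the file name and function name below are illustrative, not part of the log):

    import re

    # Header repeated once per table: "** Compaction Stats [m-0] **"
    CF_HEADER = re.compile(r"\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*")
    # Example payload line from the dump above:
    # "Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds"
    CUMULATIVE = re.compile(
        r"Cumulative compaction: (?P<write_gb>[\d.]+) GB write, "
        r"(?P<write_mbps>[\d.]+) MB/s write, "
        r"(?P<read_gb>[\d.]+) GB read, (?P<read_mbps>[\d.]+) MB/s read"
    )

    def compaction_summary(path):
        """Map each column family to its cumulative compaction figures."""
        cf, summary = None, {}
        with open(path) as fh:
            for line in fh:
                if (m := CF_HEADER.search(line)):
                    cf = m.group("cf")
                elif cf and (m := CUMULATIVE.search(line)):
                    # Each CF prints two "Compaction Stats" headers (Level and
                    # Priority views) but one Cumulative line; keep the latest.
                    summary[cf] = {k: float(v) for k, v in m.groupdict().items()}
        return summary

    if __name__ == "__main__":
        for cf, s in compaction_summary("osd-rocksdb-stats.txt").items():
            print(f'{cf}: {s["write_gb"]} GB written at {s["write_mbps"]} MB/s')
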
Nov 24 19:48:28 compute-0 ceph-osd[89640]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 19:48:28 compute-0 ceph-osd[89640]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 19:48:28 compute-0 ceph-osd[89640]: _get_class not permitted to load lua
Nov 24 19:48:28 compute-0 ceph-osd[89640]: _get_class not permitted to load sdk
Nov 24 19:48:28 compute-0 ceph-osd[89640]: _get_class not permitted to load test_remote_reads
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 load_pgs
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 load_pgs opened 0 pgs
Nov 24 19:48:28 compute-0 ceph-osd[89640]: osd.1 0 log_to_monitors true
Nov 24 19:48:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:48:28.850+0000 7f1a6e3be740 -1 osd.1 0 log_to_monitors true
Nov 24 19:48:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 24 19:48:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 19:48:28 compute-0 podman[90312]: 2025-11-24 19:48:28.948459476 +0000 UTC m=+0.066601889 container create 9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 19:48:28 compute-0 systemd[1]: Started libpod-conmon-9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887.scope.
Nov 24 19:48:29 compute-0 podman[90312]: 2025-11-24 19:48:28.9193072 +0000 UTC m=+0.037449693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7c62a4c379c0476e6bf67a27dfb2bfced024e66167195f8e8886a349e03709a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7c62a4c379c0476e6bf67a27dfb2bfced024e66167195f8e8886a349e03709a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7c62a4c379c0476e6bf67a27dfb2bfced024e66167195f8e8886a349e03709a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7c62a4c379c0476e6bf67a27dfb2bfced024e66167195f8e8886a349e03709a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7c62a4c379c0476e6bf67a27dfb2bfced024e66167195f8e8886a349e03709a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:29 compute-0 podman[90312]: 2025-11-24 19:48:29.057913584 +0000 UTC m=+0.176056017 container init 9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 19:48:29 compute-0 podman[90312]: 2025-11-24 19:48:29.076981355 +0000 UTC m=+0.195123778 container start 9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:48:29 compute-0 podman[90312]: 2025-11-24 19:48:29.082115779 +0000 UTC m=+0.200258262 container attach 9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 19:48:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832412634' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:48:29 compute-0 hungry_bose[90082]: 
Nov 24 19:48:29 compute-0 hungry_bose[90082]: {"fsid":"05e060a3-406b-57f0-89d2-ec35f5b09305","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":107,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":1,"osd_up_since":1764013708,"num_in_osds":3,"osd_in_since":1764013690,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T19:48:26.276831+0000","services":{}},"progress_events":{}}
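
The JSON line above is the output of the "status --format json" mon command dispatched two lines earlier, printed by the short-lived hungry_bose container. A minimal sketch for reading the OSD counts back out of it, assuming the output was captured to a file (the file name is illustrative; only the JSON structure comes from the log):

    import json

    with open("ceph-status.json") as fh:  # hypothetical capture of the container's stdout
        status = json.load(fh)

    osdmap = status["osdmap"]
    print(f'{osdmap["num_up_osds"]}/{osdmap["num_osds"]} OSDs up, '
          f'{osdmap["num_in_osds"]} in (osdmap epoch {osdmap["epoch"]})')
    # With the payload logged above this prints: 1/3 OSDs up, 3 in (osdmap epoch 9)
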
Nov 24 19:48:29 compute-0 systemd[1]: libpod-e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a.scope: Deactivated successfully.
Nov 24 19:48:29 compute-0 podman[90022]: 2025-11-24 19:48:29.332167243 +0000 UTC m=+0.788103592 container died e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a (image=quay.io/ceph/ceph:v18, name=hungry_bose, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-52738d51ae2af3af4a372a3e0a624ed52cfade8f1924dca8de53e0f348556fd3-merged.mount: Deactivated successfully.
Nov 24 19:48:29 compute-0 podman[90022]: 2025-11-24 19:48:29.401500966 +0000 UTC m=+0.857437295 container remove e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a (image=quay.io/ceph/ceph:v18, name=hungry_bose, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:29 compute-0 systemd[1]: libpod-conmon-e97ee4f435e6a38a9e56c10497931b3cd91153ab79f114a984a925677ecada6a.scope: Deactivated successfully.
Nov 24 19:48:29 compute-0 sudo[89832]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 24 19:48:29 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 19:48:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:29 compute-0 ceph-mon[75677]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 19:48:29 compute-0 ceph-mon[75677]: osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232] boot
Nov 24 19:48:29 compute-0 ceph-mon[75677]: osdmap e9: 3 total, 1 up, 3 in
Nov 24 19:48:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mon[75677]: from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2832412634' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:29 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:29 compute-0 sudo[90390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xehjjmabfubyvbrlrzxxnymqeazmwrje ; /usr/bin/python3'
Nov 24 19:48:29 compute-0 sudo[90390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test[90328]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 19:48:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test[90328]:                             [--no-systemd] [--no-tmpfs]
Nov 24 19:48:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test[90328]: ceph-volume activate: error: unrecognized arguments: --bad-option
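
The activate-test container above exits with a usage error: the only flags its usage message advertises for a bare "ceph-volume activate" are --osd-id, --osd-uuid, --no-systemd and --no-tmpfs, and "--bad-option" is none of them. A well-formed invocation, sketched with placeholder values (the OSD fsid below is not taken from this log):

    import subprocess

    # Uses only flags shown in the usage message logged above; values are placeholders.
    subprocess.run(
        ["ceph-volume", "activate",
         "--osd-id", "2",
         "--osd-uuid", "<osd-fsid>",  # placeholder: the OSD's fsid from its own metadata
         "--no-systemd"],             # skip systemd unit handling, as when run in a container
        check=True,
    )
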
Nov 24 19:48:29 compute-0 systemd[1]: libpod-9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887.scope: Deactivated successfully.
Nov 24 19:48:29 compute-0 podman[90312]: 2025-11-24 19:48:29.776035503 +0000 UTC m=+0.894177916 container died 9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 19:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7c62a4c379c0476e6bf67a27dfb2bfced024e66167195f8e8886a349e03709a-merged.mount: Deactivated successfully.
Nov 24 19:48:29 compute-0 podman[90312]: 2025-11-24 19:48:29.841037284 +0000 UTC m=+0.959179667 container remove 9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 19:48:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 19:48:29 compute-0 systemd[1]: libpod-conmon-9a39d89894b861bc674ae4c156b62f2d6de872a119f7ed6184d6daaf27d59887.scope: Deactivated successfully.
Nov 24 19:48:29 compute-0 python3[90392]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:30 compute-0 podman[90409]: 2025-11-24 19:48:30.006513987 +0000 UTC m=+0.051527003 container create f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2 (image=quay.io/ceph/ceph:v18, name=goofy_khorana, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 19:48:30 compute-0 systemd[1]: Started libpod-conmon-f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2.scope.
Nov 24 19:48:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b6ac94572ea596269fc18024d4ae7cd35b9242de41e8deb2c48f88dab0ceaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b6ac94572ea596269fc18024d4ae7cd35b9242de41e8deb2c48f88dab0ceaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:30 compute-0 podman[90409]: 2025-11-24 19:48:29.990623247 +0000 UTC m=+0.035636273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:30 compute-0 podman[90409]: 2025-11-24 19:48:30.092951159 +0000 UTC m=+0.137964205 container init f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2 (image=quay.io/ceph/ceph:v18, name=goofy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:30 compute-0 podman[90409]: 2025-11-24 19:48:30.100229767 +0000 UTC m=+0.145242783 container start f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2 (image=quay.io/ceph/ceph:v18, name=goofy_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:30 compute-0 podman[90409]: 2025-11-24 19:48:30.104992035 +0000 UTC m=+0.150005071 container attach f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2 (image=quay.io/ceph/ceph:v18, name=goofy_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:30 compute-0 systemd[1]: Reloading.
Nov 24 19:48:30 compute-0 systemd-rc-local-generator[90468]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:48:30 compute-0 systemd-sysv-generator[90473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
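Both generator warnings recur on every systemd daemon-reload and are unrelated to the Ceph rollout: rc.local is skipped for lacking the execute bit, and the legacy network initscript gets an auto-generated compatibility unit. If rc.local is actually wanted on this host, the fix the first message implies is:

    chmod +x /etc/rc.d/rc.local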
Nov 24 19:48:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 19:48:30 compute-0 ceph-mgr[75975]: [devicehealth INFO root] creating mgr pool
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0 done with init, starting boot process
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0 start_boot
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 19:48:30 compute-0 ceph-osd[89640]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 24 19:48:30 compute-0 ceph-osd[88624]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 19:48:30 compute-0 ceph-osd[88624]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 24 19:48:30 compute-0 ceph-osd[88624]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 19:48:30 compute-0 systemd[1]: Reloading.
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:30 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:30 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/326699308; not ready for session (expect reconnect)
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 19:48:30 compute-0 ceph-mon[75677]: osdmap e10: 3 total, 1 up, 3 in
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 19:48:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 19:48:30 compute-0 ceph-mon[75677]: osdmap e11: 3 total, 1 up, 3 in
Nov 24 19:48:30 compute-0 systemd-rc-local-generator[90531]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:48:30 compute-0 systemd-sysv-generator[90534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:48:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 19:48:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805779860' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:30 compute-0 systemd[1]: Starting Ceph osd.2 for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:48:31 compute-0 podman[90590]: 2025-11-24 19:48:31.029177999 +0000 UTC m=+0.055218213 container create 58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:48:31 compute-0 podman[90590]: 2025-11-24 19:48:31.00103879 +0000 UTC m=+0.027079024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c362de0b509599dd0ba9e57d24f499f7cc92fab76e08ec19c92d7afa5c6ae7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c362de0b509599dd0ba9e57d24f499f7cc92fab76e08ec19c92d7afa5c6ae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c362de0b509599dd0ba9e57d24f499f7cc92fab76e08ec19c92d7afa5c6ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c362de0b509599dd0ba9e57d24f499f7cc92fab76e08ec19c92d7afa5c6ae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c362de0b509599dd0ba9e57d24f499f7cc92fab76e08ec19c92d7afa5c6ae7/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:31 compute-0 podman[90590]: 2025-11-24 19:48:31.156772133 +0000 UTC m=+0.182812377 container init 58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:48:31 compute-0 podman[90590]: 2025-11-24 19:48:31.166693295 +0000 UTC m=+0.192733549 container start 58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 19:48:31 compute-0 podman[90590]: 2025-11-24 19:48:31.180491041 +0000 UTC m=+0.206531295 container attach 58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 24 19:48:31 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/326699308; not ready for session (expect reconnect)
Nov 24 19:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:31 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 19:48:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805779860' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 24 19:48:31 compute-0 goofy_khorana[90430]: pool 'vms' created
Nov 24 19:48:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 24 19:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:31 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:31 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:31 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:31 compute-0 systemd[1]: libpod-f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2.scope: Deactivated successfully.
Nov 24 19:48:31 compute-0 podman[90409]: 2025-11-24 19:48:31.52944645 +0000 UTC m=+1.574459476 container died f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2 (image=quay.io/ceph/ceph:v18, name=goofy_khorana, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 19:48:31 compute-0 ceph-mon[75677]: pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2805779860' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2805779860' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:31 compute-0 ceph-mon[75677]: osdmap e12: 3 total, 1 up, 3 in
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b6ac94572ea596269fc18024d4ae7cd35b9242de41e8deb2c48f88dab0ceaa-merged.mount: Deactivated successfully.
Nov 24 19:48:31 compute-0 podman[90409]: 2025-11-24 19:48:31.616699265 +0000 UTC m=+1.661712311 container remove f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2 (image=quay.io/ceph/ceph:v18, name=goofy_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:31 compute-0 systemd[1]: libpod-conmon-f37f1841575bf3401a7e3e6ae517b14ddacc524303ce02ac7bd2c61d8841e6e2.scope: Deactivated successfully.
Nov 24 19:48:31 compute-0 sudo[90390]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:31 compute-0 sudo[90646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlnxjvbxqtqcgcrmhfbeuphhpuwrulxk ; /usr/bin/python3'
Nov 24 19:48:31 compute-0 sudo[90646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:31 compute-0 python3[90649]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:32 compute-0 podman[90663]: 2025-11-24 19:48:32.076764559 +0000 UTC m=+0.075695207 container create 77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739 (image=quay.io/ceph/ceph:v18, name=wizardly_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:32 compute-0 systemd[1]: Started libpod-conmon-77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739.scope.
Nov 24 19:48:32 compute-0 podman[90663]: 2025-11-24 19:48:32.046348732 +0000 UTC m=+0.045279480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc58ca914ddea80a8c9cb80cff61093ac1627a5275d0ffe5bdb3176a9a6d1bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc58ca914ddea80a8c9cb80cff61093ac1627a5275d0ffe5bdb3176a9a6d1bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 podman[90663]: 2025-11-24 19:48:32.19684135 +0000 UTC m=+0.195772048 container init 77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739 (image=quay.io/ceph/ceph:v18, name=wizardly_moser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 19:48:32 compute-0 podman[90663]: 2025-11-24 19:48:32.209164781 +0000 UTC m=+0.208095449 container start 77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739 (image=quay.io/ceph/ceph:v18, name=wizardly_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:48:32 compute-0 podman[90663]: 2025-11-24 19:48:32.21765685 +0000 UTC m=+0.216587608 container attach 77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739 (image=quay.io/ceph/ceph:v18, name=wizardly_moser, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 19:48:32 compute-0 bash[90590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 19:48:32 compute-0 bash[90590]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 19:48:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v37: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 19:48:32 compute-0 bash[90590]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 19:48:32 compute-0 bash[90590]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:32 compute-0 bash[90590]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 19:48:32 compute-0 bash[90590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 19:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate[90605]: --> ceph-volume raw activate successful for osd ID: 2
Nov 24 19:48:32 compute-0 bash[90590]: --> ceph-volume raw activate successful for osd ID: 2
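The Running command lines above are the entire ceph-volume raw activate sequence for osd.2: fix ownership of the OSD directory, rebuild its contents from the BlueStore device with ceph-bluestore-tool prime-osd-dir, chown the LV device nodes, and symlink block to the logical volume. A quick sanity check from the host, using the paths this log records:

    # the symlink created above should resolve to the ceph_vg2/ceph_lv2 logical volume
    ls -l /var/lib/ceph/osd/ceph-2/block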
Nov 24 19:48:32 compute-0 systemd[1]: libpod-58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d.scope: Deactivated successfully.
Nov 24 19:48:32 compute-0 systemd[1]: libpod-58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d.scope: Consumed 1.197s CPU time.
Nov 24 19:48:32 compute-0 conmon[90605]: conmon 58bef80d6a429a33344a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d.scope/container/memory.events
Nov 24 19:48:32 compute-0 podman[90590]: 2025-11-24 19:48:32.352879688 +0000 UTC m=+1.378919972 container died 58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-54c362de0b509599dd0ba9e57d24f499f7cc92fab76e08ec19c92d7afa5c6ae7-merged.mount: Deactivated successfully.
Nov 24 19:48:32 compute-0 podman[90590]: 2025-11-24 19:48:32.456744165 +0000 UTC m=+1.482784379 container remove 58bef80d6a429a33344af4a90222e77326a0f6106c29e8425a8fe7c0d15b548d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 19:48:32 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/326699308; not ready for session (expect reconnect)
Nov 24 19:48:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:32 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:32 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 24 19:48:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 24 19:48:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 24 19:48:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:32 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:32 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:32 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:32 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
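This health warning is raised for the just-created vms pool, which has no application tag yet (compare the osd pool application enable call the mgr issued for its own .mgr pool above). Assuming vms is meant for RBD, which its name suggests but the log does not state, the usual remedy is:

    # hedged: 'rbd' is an assumption about the pool's intended use
    ceph osd pool application enable vms rbd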
Nov 24 19:48:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:32 compute-0 ceph-mon[75677]: purged_snaps scrub starts
Nov 24 19:48:32 compute-0 ceph-mon[75677]: purged_snaps scrub ok
Nov 24 19:48:32 compute-0 ceph-mon[75677]: pgmap v37: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 19:48:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:32 compute-0 ceph-mon[75677]: osdmap e13: 3 total, 1 up, 3 in
Nov 24 19:48:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:32 compute-0 podman[90862]: 2025-11-24 19:48:32.813859447 +0000 UTC m=+0.083465884 container create 871186a32d76e1bdd06fd589c359babbce19fb26134869762d5e31ef0e473282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:48:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 19:48:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/534970478' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:32 compute-0 podman[90862]: 2025-11-24 19:48:32.776215013 +0000 UTC m=+0.045821480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01336d9f690abbc701becc327cb7f9bcc3c33940eec1c32801d3cf54101851c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01336d9f690abbc701becc327cb7f9bcc3c33940eec1c32801d3cf54101851c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01336d9f690abbc701becc327cb7f9bcc3c33940eec1c32801d3cf54101851c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01336d9f690abbc701becc327cb7f9bcc3c33940eec1c32801d3cf54101851c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01336d9f690abbc701becc327cb7f9bcc3c33940eec1c32801d3cf54101851c8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:32 compute-0 podman[90862]: 2025-11-24 19:48:32.919085366 +0000 UTC m=+0.188691843 container init 871186a32d76e1bdd06fd589c359babbce19fb26134869762d5e31ef0e473282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:32 compute-0 podman[90862]: 2025-11-24 19:48:32.933760676 +0000 UTC m=+0.203367103 container start 871186a32d76e1bdd06fd589c359babbce19fb26134869762d5e31ef0e473282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:32 compute-0 bash[90862]: 871186a32d76e1bdd06fd589c359babbce19fb26134869762d5e31ef0e473282
Nov 24 19:48:32 compute-0 systemd[1]: Started Ceph osd.2 for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:48:32 compute-0 ceph-osd[90884]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:48:32 compute-0 ceph-osd[90884]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 19:48:32 compute-0 ceph-osd[90884]: pidfile_write: ignore empty --pid-file
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d31659800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d31659800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d31659800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d31659800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d32491800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d32491800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d32491800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d32491800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 19:48:32 compute-0 ceph-osd[90884]: bdev(0x557d32491800 /var/lib/ceph/osd/ceph-2/block) close
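The repeated bdev open/close pairs are BlueStore probing the device before the real mount. The ioctl(F_SET_FILE_RW_HINT) EINVAL appears harmless here: device-mapper volumes generally do not accept write-lifetime hints, and the OSD proceeds anyway. The rotational/discard properties it reports come from sysfs and can be confirmed directly:

    cat /sys/block/dm-2/queue/rotational    # 1 = rotational, matching the open size line above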
Nov 24 19:48:33 compute-0 sudo[89730]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:33 compute-0 sudo[90897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:33 compute-0 sudo[90897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:33 compute-0 sudo[90897]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:33 compute-0 sudo[90922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:33 compute-0 sudo[90922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:33 compute-0 sudo[90922]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d31659800 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 19:48:33 compute-0 sudo[90947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:33 compute-0 sudo[90947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:33 compute-0 sudo[90947]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:33 compute-0 sudo[90975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:48:33 compute-0 sudo[90975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
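Here cephadm runs its copied-in binary to enumerate raw BlueStore OSDs on the host. Stripped of the wrapper paths, the query is equivalent to:

    # hedged equivalent of the logged command, using the cephadm CLI directly
    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json

which prints a JSON description of the BlueStore devices it finds (osd id, osd uuid, ceph_fsid and device path per OSD).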
Nov 24 19:48:33 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/326699308; not ready for session (expect reconnect)
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:33 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:33 compute-0 ceph-osd[90884]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 24 19:48:33 compute-0 ceph-osd[90884]: load: jerasure load: lrc 
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/534970478' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Nov 24 19:48:33 compute-0 wizardly_moser[90687]: pool 'volumes' created
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:33 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:33 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:33 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.711 iops: 7606.086 elapsed_sec: 0.394
Nov 24 19:48:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : OSD bench result of 7606.085651 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
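The single-shot bench at OSD startup measured about 7606 IOPS, far above the 50-500 IOPS sanity window mClock allows for an hdd-class device, so osd.1 keeps the 315 IOPS default; on VM-backed storage (a loop device backs osd.1, per the _collect_metadata line below) this outcome is unsurprising. Following the warning's own advice, a fio-measured figure could be pinned like this:

    # hypothetical override -- 7600 stands in for a properly fio-measured IOPS value
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 7600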
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 0 waiting for initial osdmap
Nov 24 19:48:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:48:33.555+0000 7f1a6ab55640 -1 osd.1 0 waiting for initial osdmap
Nov 24 19:48:33 compute-0 systemd[1]: libpod-77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739.scope: Deactivated successfully.
Nov 24 19:48:33 compute-0 ceph-mon[75677]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/534970478' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/534970478' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:33 compute-0 ceph-mon[75677]: osdmap e14: 3 total, 1 up, 3 in
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:33 compute-0 podman[90663]: 2025-11-24 19:48:33.56651271 +0000 UTC m=+1.565443398 container died 77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739 (image=quay.io/ceph/ceph:v18, name=wizardly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 check_osdmap_features require_osd_release unknown -> reef
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 set_numa_affinity not setting numa affinity
Nov 24 19:48:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:48:33.588+0000 7f1a65966640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
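osd.1 skips NUMA pinning because it cannot tie the (empty) public interface to a NUMA node, typically a sign that no public_network is configured for the cluster. On a small single-socket VM this costs nothing; if pinning were wanted, defining the public network would be the first step, e.g.:

    # hedged: the /24 is inferred from the 192.168.122.100 addresses in this log
    ceph config set global public_network 192.168.122.0/24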
Nov 24 19:48:33 compute-0 ceph-osd[89640]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 24 19:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fc58ca914ddea80a8c9cb80cff61093ac1627a5275d0ffe5bdb3176a9a6d1bd-merged.mount: Deactivated successfully.
Nov 24 19:48:33 compute-0 podman[90663]: 2025-11-24 19:48:33.630971933 +0000 UTC m=+1.629902621 container remove 77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739 (image=quay.io/ceph/ceph:v18, name=wizardly_moser, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:33 compute-0 systemd[1]: libpod-conmon-77063d9a0ea44289d6272a1feb044f9df13582db14d47ed134d67b335c7e0739.scope: Deactivated successfully.
Nov 24 19:48:33 compute-0 sudo[90646]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:33 compute-0 sudo[91068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsscqfnoapvinekxtyczkekuoqafcxas ; /usr/bin/python3'
Nov 24 19:48:33 compute-0 sudo[91068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:33 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) close
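
The bdev lines above recur on every open of this OSD's block device and appear
benign on this setup: F_SET_FILE_RW_HINT returns EINVAL because the loop-backed
device does not support write-lifetime hints, and BlueStore keeps its configured
4 KiB bdev_block_size even though the device reports a 512-byte st_blksize. The
underlying sector sizes can be confirmed with blockdev (readlink resolves the
block symlink to the actual loop device):

    $ dev=$(readlink -f /var/lib/ceph/osd/ceph-2/block)
    $ blockdev --getss "$dev"     # logical sector size (512 here)
    $ blockdev --getpbsz "$dev"   # physical sector size
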
Nov 24 19:48:33 compute-0 podman[91085]: 2025-11-24 19:48:33.942680654 +0000 UTC m=+0.065210616 container create 59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:33 compute-0 python3[91080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
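
This Ansible task shows the pattern used for every ceph CLI call in this
deployment: a throwaway podman container with the host's /etc/ceph mounted in.
Reflowed for readability, the command it ran is:

    $ podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create backups replicated_rule --autoscale-mode on
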
Nov 24 19:48:33 compute-0 systemd[1]: Started libpod-conmon-59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613.scope.
Nov 24 19:48:34 compute-0 podman[91085]: 2025-11-24 19:48:33.916295503 +0000 UTC m=+0.038825505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:34 compute-0 podman[91085]: 2025-11-24 19:48:34.049021371 +0000 UTC m=+0.171551383 container init 59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 19:48:34 compute-0 podman[91085]: 2025-11-24 19:48:34.06372937 +0000 UTC m=+0.186259322 container start 59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:48:34 compute-0 silly_jackson[91109]: 167 167
Nov 24 19:48:34 compute-0 systemd[1]: libpod-59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613.scope: Deactivated successfully.
Nov 24 19:48:34 compute-0 podman[91101]: 2025-11-24 19:48:34.070305468 +0000 UTC m=+0.079155144 container create d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43 (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 19:48:34 compute-0 podman[91085]: 2025-11-24 19:48:34.0765493 +0000 UTC m=+0.199079252 container attach 59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:48:34 compute-0 podman[91085]: 2025-11-24 19:48:34.076936957 +0000 UTC m=+0.199466939 container died 59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
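
osd.2 is coming up with the mclock_scheduler; the bandwidth figures it logs are
derived from the OSD's configured IOPS capacity. The active profile and capacity
can be inspected per OSD (osd.2 taken from this log; a reachable cluster and
admin keyring are assumed):

    $ ceph config get osd.2 osd_mclock_profile
    $ ceph config show osd.2 osd_mclock_max_capacity_iops_hdd
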
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32512c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs mount
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs mount shared_bdev_used = 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
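
What follows is RocksDB's startup banner. BlueStore embeds RocksDB on top of
BlueFS (the db / db.slow / db.wal paths set just above), and RocksDB dumps its
full effective option set to the log on every open. The baseline option string
Ceph feeds it is itself a config value:

    $ ceph config get osd bluestore_rocksdb_options
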
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Git sha 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DB SUMMARY
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DB Session ID:  9Q0ZTHA2XAOP4LXN3PLF
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                     Options.env: 0x557d324e3c70
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                Options.info_log: 0x557d316e08a0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.write_buffer_manager: 0x557d325ec460
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.row_cache: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                              Options.wal_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.wal_compression: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_background_jobs: 4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Compression algorithms supported:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kZSTD supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
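
RocksDB recovers its state from MANIFEST-000032 here. The same key-value store
can be inspected offline with ceph-kvstore-tool, which ships with Ceph, but only
while the OSD is stopped (path taken from this log):

    $ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-2 list
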
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
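
The option block above is now repeated near-verbatim for each additional RocksDB
column family (m-0 and m-1 follow below): BlueStore shards its keyspace across
column families that all share the same tuning. The sharding layout is itself
configurable (option name believed current for Reef):

    $ ceph config get osd bluestore_rocksdb_cfs
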
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
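[Editor's note: each block like the one above is RocksDB echoing the effective options for one BlueStore column family. The compaction-related values (16 MiB memtables merged six at a time, LZ4 on all levels, level-0 triggers 8/20/36, 64 MiB target files, 1 GiB level base with a multiplier of 8) can be expressed through the stock RocksDB C++ API as in the minimal sketch below; the database path and the use of a standalone main() are illustrative, not taken from this log.]

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options opts;
      opts.create_if_missing = true;
      // Memtable settings mirrored from the dump: 16 MiB buffers,
      // merge 6 before flushing, allow up to 64 in memory.
      opts.write_buffer_size = 16 * 1024 * 1024;
      opts.min_write_buffer_number_to_merge = 6;
      opts.max_write_buffer_number = 64;
      opts.compression = rocksdb::kLZ4Compression;
      // Level-style compaction with the triggers shown in the log.
      opts.compaction_style = rocksdb::kCompactionStyleLevel;
      opts.compaction_pri = rocksdb::kMinOverlappingRatio;
      opts.level0_file_num_compaction_trigger = 8;
      opts.level0_slowdown_writes_trigger = 20;
      opts.level0_stop_writes_trigger = 36;
      opts.target_file_size_base = 64ull << 20;    // 67108864
      opts.max_bytes_for_level_base = 1ull << 30;  // 1073741824
      opts.max_bytes_for_level_multiplier = 8;
      opts.num_levels = 7;
      rocksdb::DB* db = nullptr;
      // "/tmp/rocksdb-sketch" is a hypothetical path for illustration.
      rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/rocksdb-sketch", &db);
      if (!s.ok()) return 1;
      delete db;
      return 0;
    }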
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
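[Editor's note: the indented "table_factory options" stanza inside each dump describes the BlockBasedTable format: 4 KiB blocks, index/filter blocks kept in the block cache with the top-level index pinned, bloom filters with whole-key filtering, SST format_version 5. BinnedLRUCache is Ceph's own sharded cache implementation, so the sketch below substitutes stock NewLRUCache as the closest public equivalent; the function name MakeTableOptions and the bloom bits-per-key value are illustrative assumptions.]

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      // Mirrors the table_factory dump above.
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      t.whole_key_filtering = true;
      // The log only says "filter_policy: bloomfilter"; 10 bits/key is
      // RocksDB's customary default, assumed here.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stand-in for Ceph's BinnedLRUCache: same capacity (~461 MiB)
      // and shard count (2^4) as logged.
      t.block_cache = rocksdb::NewLRUCache(483183820, 4);
      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }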
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a6e07a1ab131667d75dc6489d47b8a041de90566cd53bda11955038602244fa-merged.mount: Deactivated successfully.
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
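[Editor's note: the "table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0)" line in each dump refers to a RocksDB utility that flags an SST file for compaction once any sliding window of entries is dominated by tombstones, which keeps delete-heavy OSD workloads from accumulating dead keys. A sketch of wiring it up, assuming the stock utilities header; the function name WithDeletionTrigger is illustrative.]

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    rocksdb::ColumnFamilyOptions WithDeletionTrigger() {
      rocksdb::ColumnFamilyOptions cf;
      // Matches the logged collector: mark a file for compaction once any
      // window of 32768 consecutive entries contains 16384 deletions
      // (the ratio-based trigger is disabled at 0).
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
      return cf;
    }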
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
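[Editor's note: the "Options for column family [m-2]", "[p-0]", "[p-1]", ... headers repeat because BlueStore shards its keyspace into per-prefix column families, each opened with its own option set inside one database. A minimal sketch of opening a database that way; the family names come from this log, while the path and the shared cf_opts are illustrative simplifications.]

    #include <rocksdb/db.h>
    #include <vector>

    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;
      // In the real OSD each family gets its own tuned options; one
      // shared set is used here to keep the sketch short.
      rocksdb::ColumnFamilyOptions cf_opts;
      std::vector<rocksdb::ColumnFamilyDescriptor> families = {
          {rocksdb::kDefaultColumnFamilyName, cf_opts},
          {"m-2", cf_opts},
          {"p-0", cf_opts},
          {"p-1", cf_opts},
          {"p-2", cf_opts},
      };
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(
          db_opts, "/tmp/rocksdb-cf-sketch", families, &handles, &db);
      if (!s.ok()) return 1;
      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
      return 0;
    }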
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e02c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
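[Editor's note: with max_bytes_for_level_base = 1073741824 and max_bytes_for_level_multiplier = 8.000000 (and level_compaction_dynamic_level_bytes off, as logged), the nominal level capacities grow geometrically: L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, L4 = 512 GiB, L5 = 4 TiB, L6 = 32 TiB. L0 is governed by the file-count triggers instead. A toy check of that arithmetic:]

    #include <cstdint>
    #include <cstdio>

    int main() {
      // From the dump: base 1 GiB, multiplier 8, levels L1..L6.
      uint64_t bytes = 1ull << 30;
      for (int level = 1; level <= 6; ++level) {
        std::printf("L%d capacity: %llu GiB\n", level,
                    (unsigned long long)(bytes >> 30));
        bytes *= 8;
      }
      return 0;
    }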
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e0240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e0240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316e0240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
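The per-column-family dumps above repeat the same tuning for [O-0], [O-1] and [O-2] until RocksDB stops and logs "(skipping printing options)" for the remaining families. When comparing these settings across OSDs it is easier to parse them than to read them; a minimal sketch, assuming the journal has been saved to a plain text file (the filename below is hypothetical) and that the "Options.key: value" layout stays as printed here:

    #!/usr/bin/env python3
    # Sketch: collect "Options.<key>: <value>" pairs per RocksDB column family
    # from a saved journal. DB-wide options printed outside a CF header get
    # attributed to the most recently seen family, and the nested table_factory
    # sub-options (no "Options." prefix) are ignored; good enough for diffing.
    import re
    from collections import defaultdict

    CF_RE  = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT_RE = re.compile(r"Options\.([\w.\[\]]+)\s*:\s*(.*)$")

    def parse(path):
        per_cf, cf = defaultdict(dict), "(db-wide)"
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                hit = CF_RE.search(line)
                if hit:
                    cf = hit.group(1)
                    continue
                hit = OPT_RE.search(line)
                if hit:
                    per_cf[cf][hit.group(1)] = hit.group(2).strip()
        return per_cf

    if __name__ == "__main__":
        for cf, opts in parse("compute-0-messages.txt").items():  # hypothetical file
            print(cf, opts.get("write_buffer_size"), opts.get("compression"))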
Nov 24 19:48:34 compute-0 podman[91101]: 2025-11-24 19:48:34.031986252 +0000 UTC m=+0.040835928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:34 compute-0 systemd[1]: Started libpod-conmon-d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43.scope.
Nov 24 19:48:34 compute-0 podman[91085]: 2025-11-24 19:48:34.132328831 +0000 UTC m=+0.254858783 container remove 59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3478918e-cf8c-491e-bc9a-1be40cc946b0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714137533, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714137965, "job": 1, "event": "recovery_finished"}
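The two EVENT_LOG_v1 lines bracketing the WAL replay carry a JSON payload after the marker, which makes them the easiest part of this dump to consume programmatically. A minimal sketch, with the sample line copied verbatim from above:

    # Sketch: decode the JSON payload of a rocksdb EVENT_LOG_v1 line.
    import json

    line = ('Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 '
            '{"time_micros": 1764013714137533, "job": 1, "event": "recovery_started", '
            '"wal_files": [31]}')
    event = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    print(event["event"], event.get("wal_files"))  # -> recovery_started [31]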
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
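The option string echoed by _open_db uses the same comma-separated key=value form that Ceph accepts in its bluestore_rocksdb_options setting, so it can be round-tripped mechanically. A minimal sketch over the exact string from the line above (treating it as a config value is the only assumption):

    # Sketch: split the _open_db option string into a dict.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["write_buffer_size"] == "16777216"      # 16 MiB memtables, as dumped above
    assert opts["compaction_readahead_size"] == "2MB"   # RocksDB itself logs this as 2097152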
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: freelist init
Nov 24 19:48:34 compute-0 ceph-osd[90884]: freelist _read_cfg
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
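The allocator line above reads more naturally in decimal; the hex fields check out against the bdev "open size" reported a few lines below. Pure arithmetic, no assumptions beyond the logged values:

    # Sketch: sanity-check the _init_alloc figures.
    capacity = 0x4ffc00000        # 21470642176 bytes, matches the bdev open size below
    free     = 0x4ffbfd000
    block    = 0x1000             # 4 KiB min_alloc_size, as logged above
    GiB = 1 << 30
    print(round(capacity / GiB, 3))   # 19.996 -> the "20 GiB" in the log
    print(capacity - free)            # 12288 bytes = 3 blocks already in use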
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs umount
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 19:48:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:34 compute-0 systemd[1]: libpod-conmon-59f1ed9f51348f51096518d2ecde942d7a560868fa2016f686af09d4148e0613.scope: Deactivated successfully.
Nov 24 19:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b339db3a526695e02e9f4e254d5abe123e038207982504d00618a31e8f03af1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b339db3a526695e02e9f4e254d5abe123e038207982504d00618a31e8f03af1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:34 compute-0 podman[91101]: 2025-11-24 19:48:34.185342277 +0000 UTC m=+0.194191953 container init d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43 (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 19:48:34 compute-0 podman[91101]: 2025-11-24 19:48:34.196098992 +0000 UTC m=+0.204948688 container start d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43 (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:34 compute-0 podman[91101]: 2025-11-24 19:48:34.200392453 +0000 UTC m=+0.209242149 container attach d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43 (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 24 19:48:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v40: 3 pgs: 1 active+clean, 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 24 19:48:34 compute-0 podman[91338]: 2025-11-24 19:48:34.359178246 +0000 UTC m=+0.066866563 container create 6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bdev(0x557d32513400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs mount
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluefs mount shared_bdev_used = 4718592
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: RocksDB version: 7.9.2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Git sha 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DB SUMMARY
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DB Session ID:  9Q0ZTHA2XAOP4LXN3PLE
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: CURRENT file:  CURRENT
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.error_if_exists: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.create_if_missing: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                     Options.env: 0x557d32694310
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                Options.info_log: 0x557d316d7280
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                              Options.statistics: (nil)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.use_fsync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                              Options.db_log_dir: 
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.write_buffer_manager: 0x557d325ec6e0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.unordered_write: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.row_cache: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                              Options.wal_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.two_write_queues: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.wal_compression: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.atomic_flush: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_background_jobs: 4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_background_compactions: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_subcompactions: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.max_open_files: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Compression algorithms supported:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kZSTD supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kXpressCompression supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kBZip2Compression supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kLZ4Compression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kZlibCompression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         kSnappyCompression supported: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
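Almost every knob in the dump above maps one-to-one onto a field of rocksdb::ColumnFamilyOptions or rocksdb::BlockBasedTableOptions. The sketch below rebuilds the logged configuration with stock RocksDB types so the values are easier to read; it is illustrative only. Assumptions: Ceph actually derives these settings from its bluestore_rocksdb_options string and supplies its own BinnedLRUCache, for which the standard NewLRUCache stands in here, and the log records only "bloomfilter" without a bits-per-key value, so the customary 10 bits/key is assumed.

    #include <memory>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;          // 16777216: 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;  // flush merges ~96 MiB at once
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;      // 67108864
      cf.max_bytes_for_level_base = 1ull << 30; // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.ttl = 2592000;                         // 30 days
      cf.force_consistency_checks = true;

      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.format_version = 5;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

The small 16 MiB memtables combined with min_write_buffer_number_to_merge = 6 mean a flush only begins once about six buffers have filled and are merged into a single L0 file, trading a little memory for fewer, better-deduplicated L0 files.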
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
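Because level_compaction_dynamic_level_bytes is 0 in these dumps, per-level capacity follows RocksDB's static geometric rule: L1's target is max_bytes_for_level_base (1 GiB) and each deeper level is the previous one times max_bytes_for_level_multiplier (8), the addtl factors all being 1. A quick standalone sanity check of what those targets work out to:

    #include <cstdio>

    // Per-level target sizes implied by the dump: base 1 GiB, multiplier 8,
    // additional per-level multipliers all 1, dynamic level bytes disabled.
    int main() {
      const double kMultiplier = 8.0;
      double target = 1024.0;  // L1 target in MiB (max_bytes_for_level_base)
      for (int level = 1; level < 7; ++level) {  // num_levels = 7
        std::printf("L%d target: %.0f MiB\n", level, target);
        target *= kMultiplier;
      }
      return 0;
    }

That puts L6's target around 32 TiB, far beyond what a single OSD's omap shard will ever hold, so in practice only the first few levels are populated.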
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316b3c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
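Note the two distinct block caches in these dumps: the p-* families share one BinnedLRUCache instance (pointer 0x557d316cd1f0) with a capacity of 483183820 bytes, i.e. 45% of 1 GiB, while the O-* families that follow share a second instance (0x557d316cd090) of 536870912 bytes, i.e. 512 MiB. BinnedLRUCache is Ceph's own sharded LRU implementation; assuming it behaves like RocksDB's sharded caches, num_shard_bits = 4 splits each capacity evenly across 16 shards. The split between the two caches reflects BlueStore's cache sizing at startup and can be retuned at runtime, so treat the figures as a snapshot. Back-of-envelope per-shard sizes under the even-split assumption:

    #include <cstdint>
    #include <cstdio>

    // Rough shard sizing for the two block caches seen in the dump.
    int main() {
      const uint64_t caps[] = {483183820 /* p-* */, 536870912 /* O-* */};
      const int kNumShardBits = 4;  // 2^4 = 16 shards per cache
      for (uint64_t cap : caps) {
        uint64_t shards = 1ull << kNumShardBits;
        std::printf("capacity %llu B -> %llu shards of ~%.1f MiB\n",
                    (unsigned long long)cap, (unsigned long long)shards,
                    cap / (double)shards / (1 << 20));
      }
      return 0;
    }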
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316d7840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
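The Options.table_properties_collectors line in each dump shows Ceph registering RocksDB's upstream CompactOnDeletionCollector, which marks an SST file as needing compaction when any sliding window of 32768 consecutive entries contains at least 16384 deletes (the ratio-based trigger is disabled at 0); this keeps tombstone-heavy omap workloads from degrading scans. A minimal sketch of the equivalent registration with the public API; the function and field names are upstream RocksDB, only the helper name is invented:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Attach the deletion-triggered compaction heuristic seen in the log:
    // window of 32768 entries, trigger at 16384 deletes, ratio disabled.
    void AddDeletionCollector(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }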
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316d7840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 podman[91338]: 2025-11-24 19:48:34.329622743 +0000 UTC m=+0.037311110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:           Options.merge_operator: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557d316d7840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557d316cd090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 19:48:34 compute-0 systemd[1]: Started libpod-conmon-6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5.scope.
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.compression: LZ4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.num_levels: 7
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.bloom_locality: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                               Options.ttl: 2592000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                       Options.enable_blob_files: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                           Options.min_blob_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3478918e-cf8c-491e-bc9a-1be40cc946b0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714419447, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714424409, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013714, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3478918e-cf8c-491e-bc9a-1be40cc946b0", "db_session_id": "9Q0ZTHA2XAOP4LXN3PLE", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714428012, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013714, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3478918e-cf8c-491e-bc9a-1be40cc946b0", "db_session_id": "9Q0ZTHA2XAOP4LXN3PLE", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714432412, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013714, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3478918e-cf8c-491e-bc9a-1be40cc946b0", "db_session_id": "9Q0ZTHA2XAOP4LXN3PLE", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764013714434327, "job": 1, "event": "recovery_finished"}
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 19:48:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c88b267d9d7aaf3f6f17f6f43a7ac74ce7659b2ad97bcef0a0e0a190ad7a260/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c88b267d9d7aaf3f6f17f6f43a7ac74ce7659b2ad97bcef0a0e0a190ad7a260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c88b267d9d7aaf3f6f17f6f43a7ac74ce7659b2ad97bcef0a0e0a190ad7a260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c88b267d9d7aaf3f6f17f6f43a7ac74ce7659b2ad97bcef0a0e0a190ad7a260/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557d3183bc00
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: DB pointer 0x557d325d5a00
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 24 19:48:34 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 19:48:34 compute-0 ceph-osd[90884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 19:48:34 compute-0 ceph-osd[90884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 19:48:34 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/326699308; not ready for session (expect reconnect)
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:34 compute-0 ceph-osd[90884]: _get_class not permitted to load lua
Nov 24 19:48:34 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 19:48:34 compute-0 ceph-osd[90884]: _get_class not permitted to load sdk
Nov 24 19:48:34 compute-0 ceph-osd[90884]: _get_class not permitted to load test_remote_reads
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 load_pgs
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 load_pgs opened 0 pgs
Nov 24 19:48:34 compute-0 ceph-osd[90884]: osd.2 0 log_to_monitors true
Nov 24 19:48:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:48:34.484+0000 7fcc40618740 -1 osd.2 0 log_to_monitors true
Nov 24 19:48:34 compute-0 podman[91338]: 2025-11-24 19:48:34.487685765 +0000 UTC m=+0.195374082 container init 6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 19:48:34 compute-0 podman[91338]: 2025-11-24 19:48:34.498229487 +0000 UTC m=+0.205917774 container start 6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:34 compute-0 podman[91338]: 2025-11-24 19:48:34.501772725 +0000 UTC m=+0.209461012 container attach 6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 24 19:48:34 compute-0 ceph-mon[75677]: OSD bench result of 7606.085651 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 19:48:34 compute-0 ceph-mon[75677]: pgmap v40: 3 pgs: 1 active+clean, 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 24 19:48:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:34 compute-0 ceph-mon[75677]: from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308] boot
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e15 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:34 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:34 compute-0 ceph-osd[89640]: osd.1 15 state: booting -> active
Nov 24 19:48:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[11,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[14,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 19:48:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2623758225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 19:48:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]: {
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "osd_id": 2,
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "type": "bluestore"
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:     },
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "osd_id": 1,
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "type": "bluestore"
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:     },
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "osd_id": 0,
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:         "type": "bluestore"
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]:     }
Nov 24 19:48:35 compute-0 wizardly_zhukovsky[91539]: }
Nov 24 19:48:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2623758225' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0 done with init, starting boot process
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0 start_boot
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 19:48:35 compute-0 ceph-osd[90884]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 24 19:48:35 compute-0 beautiful_blackburn[91251]: pool 'backups' created
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 24 19:48:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:35 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:35 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:35 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=16 pruub=12.957884789s) [] r=-1 lpr=16 pi=[12,16)/1 crt=0'0 mlcod 0'0 active pruub 25.234201431s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:48:35 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=16 pruub=12.957884789s) [] r=-1 lpr=16 pi=[12,16)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.234201431s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:48:35 compute-0 ceph-mon[75677]: from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 19:48:35 compute-0 ceph-mon[75677]: osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308] boot
Nov 24 19:48:35 compute-0 ceph-mon[75677]: osdmap e15: 3 total, 2 up, 3 in
Nov 24 19:48:35 compute-0 ceph-mon[75677]: from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 19:48:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 19:48:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:35 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2623758225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:35 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/682082471; not ready for session (expect reconnect)
Nov 24 19:48:35 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[14,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:35 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:35 compute-0 systemd[1]: libpod-d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43.scope: Deactivated successfully.
Nov 24 19:48:35 compute-0 conmon[91251]: conmon d8c6f47871c68fbda8e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43.scope/container/memory.events
Nov 24 19:48:35 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[11,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:35 compute-0 systemd[1]: libpod-6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5.scope: Deactivated successfully.
Nov 24 19:48:35 compute-0 systemd[1]: libpod-6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5.scope: Consumed 1.136s CPU time.
Nov 24 19:48:35 compute-0 podman[91338]: 2025-11-24 19:48:35.629738287 +0000 UTC m=+1.337426604 container died 6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c88b267d9d7aaf3f6f17f6f43a7ac74ce7659b2ad97bcef0a0e0a190ad7a260-merged.mount: Deactivated successfully.
Nov 24 19:48:35 compute-0 podman[91629]: 2025-11-24 19:48:35.732024628 +0000 UTC m=+0.082331896 container died d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43 (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:48:35 compute-0 podman[91338]: 2025-11-24 19:48:35.78175377 +0000 UTC m=+1.489442087 container remove 6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 19:48:35 compute-0 systemd[1]: libpod-conmon-6aaf87fae6551739982038fc20e02bc59fb10dc9e5465b02c53466c2062bffc5.scope: Deactivated successfully.
Nov 24 19:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b339db3a526695e02e9f4e254d5abe123e038207982504d00618a31e8f03af1d-merged.mount: Deactivated successfully.
Nov 24 19:48:35 compute-0 sudo[90975]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:35 compute-0 podman[91629]: 2025-11-24 19:48:35.86136453 +0000 UTC m=+0.211671808 container remove d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43 (image=quay.io/ceph/ceph:v18, name=beautiful_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:35 compute-0 systemd[1]: libpod-conmon-d8c6f47871c68fbda8e82445ca4802afd8b2035fa5070329ee33c7db57795d43.scope: Deactivated successfully.
Nov 24 19:48:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:35 compute-0 sudo[91068]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:35 compute-0 sudo[91655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:35 compute-0 sudo[91655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:35 compute-0 sudo[91655]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:36 compute-0 sudo[91721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfikcarxdbikwflsmohpwlutszlevakl ; /usr/bin/python3'
Nov 24 19:48:36 compute-0 sudo[91721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:36 compute-0 sudo[91686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:48:36 compute-0 sudo[91686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:36 compute-0 sudo[91686]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:36 compute-0 sudo[91731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:36 compute-0 sudo[91731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:36 compute-0 sudo[91731]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:36 compute-0 python3[91728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:36 compute-0 sudo[91756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v43: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 19:48:36 compute-0 sudo[91756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:36 compute-0 sudo[91756]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:36 compute-0 podman[91780]: 2025-11-24 19:48:36.379823458 +0000 UTC m=+0.080109360 container create 37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868 (image=quay.io/ceph/ceph:v18, name=suspicious_goodall, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:48:36 compute-0 sudo[91782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:36 compute-0 sudo[91782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:36 compute-0 sudo[91782]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:36 compute-0 systemd[1]: Started libpod-conmon-37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868.scope.
Nov 24 19:48:36 compute-0 podman[91780]: 2025-11-24 19:48:36.344516221 +0000 UTC m=+0.044802113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6327b0303dfdec7a1b83a9810df594de80a7b7d8714f685b81bf9020a3d11d3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6327b0303dfdec7a1b83a9810df594de80a7b7d8714f685b81bf9020a3d11d3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:36 compute-0 sudo[91819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:48:36 compute-0 sudo[91819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:36 compute-0 podman[91780]: 2025-11-24 19:48:36.490421314 +0000 UTC m=+0.190707276 container init 37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868 (image=quay.io/ceph/ceph:v18, name=suspicious_goodall, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:48:36 compute-0 podman[91780]: 2025-11-24 19:48:36.49994338 +0000 UTC m=+0.200229272 container start 37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868 (image=quay.io/ceph/ceph:v18, name=suspicious_goodall, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 19:48:36 compute-0 podman[91780]: 2025-11-24 19:48:36.513164136 +0000 UTC m=+0.213450038 container attach 37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868 (image=quay.io/ceph/ceph:v18, name=suspicious_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:48:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 24 19:48:36 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/682082471; not ready for session (expect reconnect)
Nov 24 19:48:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:36 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 24 19:48:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 24 19:48:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:36 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:36 compute-0 ceph-mon[75677]: from='osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 19:48:36 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2623758225' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:36 compute-0 ceph-mon[75677]: osdmap e16: 3 total, 2 up, 3 in
Nov 24 19:48:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:36 compute-0 ceph-mon[75677]: pgmap v43: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 19:48:36 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 19:48:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791888064' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:37 compute-0 podman[91937]: 2025-11-24 19:48:37.130063161 +0000 UTC m=+0.093335716 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:48:37 compute-0 podman[91937]: 2025-11-24 19:48:37.263020892 +0000 UTC m=+0.226293397 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:48:37 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/682082471; not ready for session (expect reconnect)
Nov 24 19:48:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:37 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 24 19:48:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791888064' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 24 19:48:37 compute-0 suspicious_goodall[91834]: pool 'images' created
Nov 24 19:48:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 24 19:48:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:37 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:37 compute-0 ceph-mon[75677]: purged_snaps scrub starts
Nov 24 19:48:37 compute-0 ceph-mon[75677]: purged_snaps scrub ok
Nov 24 19:48:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:37 compute-0 ceph-mon[75677]: osdmap e17: 3 total, 2 up, 3 in
Nov 24 19:48:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:37 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1791888064' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:37 compute-0 systemd[1]: libpod-37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868.scope: Deactivated successfully.
Nov 24 19:48:37 compute-0 podman[91780]: 2025-11-24 19:48:37.667179974 +0000 UTC m=+1.367465876 container died 37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868 (image=quay.io/ceph/ceph:v18, name=suspicious_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6327b0303dfdec7a1b83a9810df594de80a7b7d8714f685b81bf9020a3d11d3d-merged.mount: Deactivated successfully.
Nov 24 19:48:37 compute-0 podman[91780]: 2025-11-24 19:48:37.785974314 +0000 UTC m=+1.486260206 container remove 37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868 (image=quay.io/ceph/ceph:v18, name=suspicious_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:37 compute-0 systemd[1]: libpod-conmon-37823f3c0c91edc0cce8f29aa2c3a8615466dde3701c6eee183733f5a7c90868.scope: Deactivated successfully.
Nov 24 19:48:37 compute-0 sudo[91721]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:37 compute-0 sudo[91819]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:37 compute-0 sudo[92100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkitrrmeocbwyscxmzptnhzmmivlbwwb ; /usr/bin/python3'
Nov 24 19:48:38 compute-0 sudo[92100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:38 compute-0 sudo[92103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:38 compute-0 sudo[92103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:38 compute-0 sudo[92103]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:38 compute-0 python3[92102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
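The Ansible task wraps the ceph CLI in a throwaway podman container so the host needs no ceph packages installed. Stripped of the container plumbing, the effective client call is the following sketch; fsid, config paths, and pool name are taken verbatim from the log line above:

    # Run inside quay.io/ceph/ceph:v18 by the podman invocation above:
    ceph --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 \
         -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
         osd pool create cephfs.cephfs.meta replicated_rule --autoscale-mode on

The same pattern repeats further down for cephfs.cephfs.data.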
Nov 24 19:48:38 compute-0 sudo[92128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:38 compute-0 sudo[92128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:38 compute-0 sudo[92128]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:38 compute-0 podman[92145]: 2025-11-24 19:48:38.278716031 +0000 UTC m=+0.077958954 container create 81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b (image=quay.io/ceph/ceph:v18, name=flamboyant_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v46: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 19:48:38 compute-0 systemd[1]: Started libpod-conmon-81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b.scope.
Nov 24 19:48:38 compute-0 sudo[92165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:38 compute-0 podman[92145]: 2025-11-24 19:48:38.242873756 +0000 UTC m=+0.042116749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:38 compute-0 sudo[92165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:38 compute-0 sudo[92165]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e98d8d9fdd428f2fc399a3d2d959ce0789bbb4cb9827b36eb869892eb766e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e98d8d9fdd428f2fc399a3d2d959ce0789bbb4cb9827b36eb869892eb766e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:38 compute-0 podman[92145]: 2025-11-24 19:48:38.390931954 +0000 UTC m=+0.190174967 container init 81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b (image=quay.io/ceph/ceph:v18, name=flamboyant_meitner, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 19:48:38 compute-0 podman[92145]: 2025-11-24 19:48:38.404071779 +0000 UTC m=+0.203314732 container start 81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b (image=quay.io/ceph/ceph:v18, name=flamboyant_meitner, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:38 compute-0 podman[92145]: 2025-11-24 19:48:38.413854458 +0000 UTC m=+0.213097451 container attach 81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b (image=quay.io/ceph/ceph:v18, name=flamboyant_meitner, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 19:48:38 compute-0 sudo[92197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- inventory --format=json-pretty --filter-for-batch
Nov 24 19:48:38 compute-0 sudo[92197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:38 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/682082471; not ready for session (expect reconnect)
Nov 24 19:48:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:38 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:38 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1791888064' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:38 compute-0 ceph-mon[75677]: osdmap e18: 3 total, 2 up, 3 in
Nov 24 19:48:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:38 compute-0 ceph-mon[75677]: pgmap v46: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 19:48:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.208 iops: 9013.133 elapsed_sec: 0.333
Nov 24 19:48:38 compute-0 ceph-osd[90884]: log_channel(cluster) log [WRN] : OSD bench result of 9013.132693 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
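The mclock scheduler benchmarks each OSD at startup and discards results outside its sanity window (50-500 IOPS here), so osd.2 keeps the 315 IOPS default despite measuring roughly 9013 IOPS. A hedged sketch of the override the warning recommends; the fio job name and the 9000 figure are illustrative, and the _hdd suffix is a guess based on the 500-IOPS ceiling (use _ssd for flash):

    # Destructive probe: run only on a device that does not yet hold OSD data.
    fio --name=iops-probe --filename=/dev/ceph_vg2/ceph_lv2 --ioengine=libaio \
        --direct=1 --rw=randwrite --bs=4k --iodepth=32 --runtime=30 --time_based
    # Pin the measured capacity so mclock stops falling back to the default:
    ceph config set osd.2 osd_mclock_max_capacity_iops_hdd 9000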
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 0 waiting for initial osdmap
Nov 24 19:48:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:48:38.670+0000 7fcc3c598640 -1 osd.2 0 waiting for initial osdmap
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 check_osdmap_features require_osd_release unknown -> reef
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 set_numa_affinity not setting numa affinity
Nov 24 19:48:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:48:38.701+0000 7fcc37bc0640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 19:48:38 compute-0 ceph-osd[90884]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 24 19:48:38 compute-0 podman[92282]: 2025-11-24 19:48:38.883887145 +0000 UTC m=+0.057451210 container create 1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 19:48:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3210206879' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:38 compute-0 systemd[1]: Started libpod-conmon-1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b.scope.
Nov 24 19:48:38 compute-0 podman[92282]: 2025-11-24 19:48:38.856353825 +0000 UTC m=+0.029917970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:38 compute-0 podman[92282]: 2025-11-24 19:48:38.98150313 +0000 UTC m=+0.155067225 container init 1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:38 compute-0 podman[92282]: 2025-11-24 19:48:38.989597772 +0000 UTC m=+0.163161827 container start 1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:38 compute-0 podman[92282]: 2025-11-24 19:48:38.993713569 +0000 UTC m=+0.167277654 container attach 1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 19:48:38 compute-0 heuristic_lalande[92303]: 167 167
Nov 24 19:48:38 compute-0 systemd[1]: libpod-1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b.scope: Deactivated successfully.
Nov 24 19:48:38 compute-0 podman[92282]: 2025-11-24 19:48:38.995829024 +0000 UTC m=+0.169393109 container died 1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-eee16cb0e1a3f67ab8352e4008e656c142d8a598bee21342278a28b8c6b48d6b-merged.mount: Deactivated successfully.
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
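POOL_APP_NOT_ENABLED is expected at this stage: the freshly created pools carry no application tag yet. A hedged cleanup, assuming 'images' will back RBD; the cephfs.* pools are tagged automatically when the filesystem is created, so that step is shown only as a comment:

    ceph osd pool application enable images rbd
    # cephfs.cephfs.meta / cephfs.cephfs.data get their 'cephfs' tag via:
    #   ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data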
Nov 24 19:48:39 compute-0 podman[92282]: 2025-11-24 19:48:39.055289064 +0000 UTC m=+0.228853159 container remove 1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:48:39 compute-0 systemd[1]: libpod-conmon-1c64eff90b442038bbd5a79a6512d8c1b5de0dcd93cb5624dc47393ee0d34d3b.scope: Deactivated successfully.
Nov 24 19:48:39 compute-0 podman[92330]: 2025-11-24 19:48:39.297818986 +0000 UTC m=+0.067789349 container create bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_roentgen, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 19:48:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] creating main.db for devicehealth
Nov 24 19:48:39 compute-0 systemd[1]: Started libpod-conmon-bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33.scope.
Nov 24 19:48:39 compute-0 podman[92330]: 2025-11-24 19:48:39.268392035 +0000 UTC m=+0.038362458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67fc4f10105d53379d62740713dc0d348ecc1dd5fa18e97dc92b0ff6803f25b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67fc4f10105d53379d62740713dc0d348ecc1dd5fa18e97dc92b0ff6803f25b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67fc4f10105d53379d62740713dc0d348ecc1dd5fa18e97dc92b0ff6803f25b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67fc4f10105d53379d62740713dc0d348ecc1dd5fa18e97dc92b0ff6803f25b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:39 compute-0 podman[92330]: 2025-11-24 19:48:39.4076834 +0000 UTC m=+0.177653773 container init bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_roentgen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:39 compute-0 podman[92330]: 2025-11-24 19:48:39.425791885 +0000 UTC m=+0.195762238 container start bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_roentgen, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:39 compute-0 podman[92330]: 2025-11-24 19:48:39.429506797 +0000 UTC m=+0.199477200 container attach bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 19:48:39 compute-0 ceph-mgr[75975]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 19:48:39 compute-0 sudo[92363]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 24 19:48:39 compute-0 sudo[92363]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 19:48:39 compute-0 sudo[92363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 24 19:48:39 compute-0 sudo[92363]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
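The devicehealth scrape path is visible here: the mgr issues a 'smart' command over the daemon's admin socket, which sudos to smartctl; /dev/vda is a virtio disk with no SMART data, which is why the JSON parse above failed on empty output. The same probe by hand, as a sketch:

    # -x: all device info; --json=o: JSON plus the original smartctl text output.
    sudo smartctl -x --json=o /dev/vda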
Nov 24 19:48:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 19:48:39 compute-0 ceph-mgr[75975]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/682082471; not ready for session (expect reconnect)
Nov 24 19:48:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:39 compute-0 ceph-mgr[75975]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 19:48:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 24 19:48:39 compute-0 ceph-mon[75677]: OSD bench result of 9013.132693 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 19:48:39 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3210206879' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:39 compute-0 ceph-mon[75677]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:48:39 compute-0 ceph-mon[75677]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 19:48:39 compute-0 ceph-mon[75677]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 19:48:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 19:48:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:39 compute-0 ceph-osd[90884]: osd.2 18 tick checking mon for new map
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3210206879' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 24 19:48:39 compute-0 flamboyant_meitner[92193]: pool 'cephfs.cephfs.meta' created
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471] boot
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 24 19:48:39 compute-0 ceph-osd[90884]: osd.2 19 state: booting -> active
Nov 24 19:48:39 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 19:48:39 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:39 compute-0 systemd[1]: libpod-81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b.scope: Deactivated successfully.
Nov 24 19:48:39 compute-0 podman[92145]: 2025-11-24 19:48:39.721855081 +0000 UTC m=+1.521098034 container died 81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b (image=quay.io/ceph/ceph:v18, name=flamboyant_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e98d8d9fdd428f2fc399a3d2d959ce0789bbb4cb9827b36eb869892eb766e7-merged.mount: Deactivated successfully.
Nov 24 19:48:39 compute-0 podman[92145]: 2025-11-24 19:48:39.785514791 +0000 UTC m=+1.584757724 container remove 81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b (image=quay.io/ceph/ceph:v18, name=flamboyant_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 19:48:39 compute-0 systemd[1]: libpod-conmon-81fdfdfc7ba80503e5060c1377f96aec6114746bf86e106a80c55a8c8170a00b.scope: Deactivated successfully.
Nov 24 19:48:39 compute-0 sudo[92100]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:39 compute-0 sudo[92404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbwdcpmnhivijlmzvwrpepeqxnvczosp ; /usr/bin/python3'
Nov 24 19:48:39 compute-0 sudo[92404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:40 compute-0 python3[92406]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:40 compute-0 podman[92411]: 2025-11-24 19:48:40.250704069 +0000 UTC m=+0.072235621 container create 0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a (image=quay.io/ceph/ceph:v18, name=intelligent_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:48:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v48: 6 pgs: 2 unknown, 2 creating+peering, 2 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 19:48:40 compute-0 systemd[1]: Started libpod-conmon-0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a.scope.
Nov 24 19:48:40 compute-0 podman[92411]: 2025-11-24 19:48:40.21956443 +0000 UTC m=+0.041096052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e93a20b6e8efb636844d9fc9190d6140d7880f51a10d49e64a8e8ebde35454/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e93a20b6e8efb636844d9fc9190d6140d7880f51a10d49e64a8e8ebde35454/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:40 compute-0 podman[92411]: 2025-11-24 19:48:40.347090202 +0000 UTC m=+0.168621834 container init 0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a (image=quay.io/ceph/ceph:v18, name=intelligent_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 19:48:40 compute-0 podman[92411]: 2025-11-24 19:48:40.358818574 +0000 UTC m=+0.180350146 container start 0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a (image=quay.io/ceph/ceph:v18, name=intelligent_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:40 compute-0 podman[92411]: 2025-11-24 19:48:40.364381845 +0000 UTC m=+0.185913427 container attach 0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a (image=quay.io/ceph/ceph:v18, name=intelligent_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:40 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:40 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 19 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19 pruub=8.161258698s) [2] r=-1 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.234201431s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:48:40 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 19 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19 pruub=8.161213875s) [2] r=-1 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.234201431s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:48:40 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19) [2] r=0 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 24 19:48:40 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ofslrn(active, since 76s)
Nov 24 19:48:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 24 19:48:40 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3210206879' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:40 compute-0 ceph-mon[75677]: osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471] boot
Nov 24 19:48:40 compute-0 ceph-mon[75677]: osdmap e19: 3 total, 3 up, 3 in
Nov 24 19:48:40 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 19:48:40 compute-0 ceph-mon[75677]: pgmap v48: 6 pgs: 2 unknown, 2 creating+peering, 2 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 19:48:40 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 24 19:48:40 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:40 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19) [2] r=0 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:40 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 19:48:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/944495331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]: [
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:     {
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "available": false,
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "ceph_device": false,
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "lsm_data": {},
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "lvs": [],
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "path": "/dev/sr0",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "rejected_reasons": [
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "Insufficient space (<5GB)",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "Has a FileSystem"
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         ],
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         "sys_api": {
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "actuators": null,
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "device_nodes": "sr0",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "devname": "sr0",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "human_readable_size": "482.00 KB",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "id_bus": "ata",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "model": "QEMU DVD-ROM",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "nr_requests": "2",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "parent": "/dev/sr0",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "partitions": {},
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "path": "/dev/sr0",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "removable": "1",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "rev": "2.5+",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "ro": "0",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "rotational": "1",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "sas_address": "",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "sas_device_handle": "",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "scheduler_mode": "mq-deadline",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "sectors": 0,
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "sectorsize": "2048",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "size": 493568.0,
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "support_discard": "2048",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "type": "disk",
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:             "vendor": "QEMU"
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:         }
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]:     }
Nov 24 19:48:41 compute-0 mystifying_roentgen[92347]: ]
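This JSON is the result of the ceph-volume inventory launched by the cephadm call above: only /dev/sr0 is reported, and it is rejected for insufficient space and an existing filesystem, so the OSDs ride on the explicitly supplied LVs instead. Reproducing the listing on the host, as a sketch:

    # --filter-for-batch drops devices ceph-volume cannot use for new OSDs.
    sudo cephadm ceph-volume -- inventory --format=json-pretty --filter-for-batch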
Nov 24 19:48:41 compute-0 systemd[1]: libpod-bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33.scope: Deactivated successfully.
Nov 24 19:48:41 compute-0 systemd[1]: libpod-bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33.scope: Consumed 1.707s CPU time.
Nov 24 19:48:41 compute-0 podman[92330]: 2025-11-24 19:48:41.086205814 +0000 UTC m=+1.856176177 container died bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_roentgen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 19:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67fc4f10105d53379d62740713dc0d348ecc1dd5fa18e97dc92b0ff6803f25b-merged.mount: Deactivated successfully.
Nov 24 19:48:41 compute-0 podman[92330]: 2025-11-24 19:48:41.157847595 +0000 UTC m=+1.927817958 container remove bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:48:41 compute-0 systemd[1]: libpod-conmon-bcdaee572d91c04ddc0c7305bf5ee1c7220e1e94303f5065bb70e2f49b205f33.scope: Deactivated successfully.
Nov 24 19:48:41 compute-0 sudo[92197]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43687k
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43687k
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
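cephadm's autotuner divides the host's memory budget across its local OSDs; on this small VM the result (44736375 bytes, about 43 MiB) falls under osd_memory_target's hard floor of 939524096 bytes (896 MiB), so the change is refused and the default stays in effect. A hedged workaround for small test nodes:

    # Stop the autotuner from retrying, or raise the value to the floor:
    ceph config set osd osd_memory_target_autotune false
    ceph config set osd osd_memory_target 939524096   # 896 MiB minimum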
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 42dbcfef-dc79-41a0-bc66-b57abf04d6ed does not exist
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 207f3d58-c836-4101-aeef-4494eb944ab1 does not exist
Nov 24 19:48:41 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2b672b6c-f694-4212-bcab-4b6f13e0bd76 does not exist
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:41 compute-0 sudo[94425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:41 compute-0 sudo[94425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:41 compute-0 sudo[94425]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:41 compute-0 sudo[94450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:41 compute-0 sudo[94450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:41 compute-0 sudo[94450]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:41 compute-0 sudo[94475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:41 compute-0 sudo[94475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:41 compute-0 sudo[94475]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:41 compute-0 sudo[94500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:48:41 compute-0 sudo[94500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
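The batch call hands three pre-created LVs straight to ceph-volume (lvm batch --no-auto ... --yes --no-systemd), driven by the OSD service whose id appears in CEPH_VOLUME_OSDSPEC_AFFINITY. A sketch of a spec that would produce this placement; the filename and exact spec layout are assumptions, while the service id, host, and LV paths come from the log:

    cat > osd-spec.yml <<'EOF'
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2
    EOF
    ceph orch apply -i osd-spec.yml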
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mgrmap e9: compute-0.ofslrn(active, since 76s)
Nov 24 19:48:41 compute-0 ceph-mon[75677]: osdmap e20: 3 total, 3 up, 3 in
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/944495331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: Adjusting osd_memory_target on compute-0 to 43687k
Nov 24 19:48:41 compute-0 ceph-mon[75677]: Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/944495331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 24 19:48:41 compute-0 intelligent_zhukovsky[92430]: pool 'cephfs.cephfs.data' created
Nov 24 19:48:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 24 19:48:41 compute-0 systemd[1]: libpod-0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a.scope: Deactivated successfully.
Nov 24 19:48:41 compute-0 podman[94526]: 2025-11-24 19:48:41.805906979 +0000 UTC m=+0.030888456 container died 0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a (image=quay.io/ceph/ceph:v18, name=intelligent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e93a20b6e8efb636844d9fc9190d6140d7880f51a10d49e64a8e8ebde35454-merged.mount: Deactivated successfully.
Nov 24 19:48:41 compute-0 podman[94526]: 2025-11-24 19:48:41.862238768 +0000 UTC m=+0.087220215 container remove 0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a (image=quay.io/ceph/ceph:v18, name=intelligent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:41 compute-0 systemd[1]: libpod-conmon-0b3b380c9fd488eb6cdf08b8693af38ce7fd191d45f712a957186c724b16555a.scope: Deactivated successfully.
Nov 24 19:48:41 compute-0 sudo[92404]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:41 compute-0 sshd-session[92289]: Connection closed by authenticating user root 27.79.44.141 port 39340 [preauth]
Nov 24 19:48:42 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.107387863 +0000 UTC m=+0.064638138 container create a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:48:42 compute-0 sudo[94614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlppgceemsoirheysdyxmupgsjuvipcb ; /usr/bin/python3'
Nov 24 19:48:42 compute-0 sudo[94614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:42 compute-0 systemd[1]: Started libpod-conmon-a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8.scope.
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.081090193 +0000 UTC m=+0.038340518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.214731606 +0000 UTC m=+0.171981921 container init a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.225685365 +0000 UTC m=+0.182935640 container start a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:42 compute-0 stoic_liskov[94619]: 167 167
Nov 24 19:48:42 compute-0 systemd[1]: libpod-a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8.scope: Deactivated successfully.
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.235473334 +0000 UTC m=+0.192723609 container attach a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.236717155 +0000 UTC m=+0.193967430 container died a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-43cff03b8055d49781749058a4f385588aa4f498e9107193094cae24ddb3f612-merged.mount: Deactivated successfully.
Nov 24 19:48:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v51: 7 pgs: 1 creating+peering, 1 peering, 2 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:42 compute-0 python3[94616]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:42 compute-0 podman[94577]: 2025-11-24 19:48:42.291640551 +0000 UTC m=+0.248890816 container remove a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:48:42 compute-0 systemd[1]: libpod-conmon-a027b85816d08e593406fccb7ac3ef58cbd02d361ac253a519a00f434a33dcb8.scope: Deactivated successfully.
Nov 24 19:48:42 compute-0 podman[94636]: 2025-11-24 19:48:42.37171333 +0000 UTC m=+0.057573762 container create 4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc (image=quay.io/ceph/ceph:v18, name=wizardly_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:42 compute-0 systemd[1]: Started libpod-conmon-4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc.scope.
Nov 24 19:48:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f66dd2e93fd76fdeafafae7440991fb0d67b5d5e9231f63f7ee08dd51ef5f28/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f66dd2e93fd76fdeafafae7440991fb0d67b5d5e9231f63f7ee08dd51ef5f28/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 podman[94636]: 2025-11-24 19:48:42.354000321 +0000 UTC m=+0.039860743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:42 compute-0 podman[94636]: 2025-11-24 19:48:42.473573523 +0000 UTC m=+0.159433965 container init 4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc (image=quay.io/ceph/ceph:v18, name=wizardly_keller, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:48:42 compute-0 podman[94636]: 2025-11-24 19:48:42.486179559 +0000 UTC m=+0.172039961 container start 4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc (image=quay.io/ceph/ceph:v18, name=wizardly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:42 compute-0 podman[94636]: 2025-11-24 19:48:42.489864889 +0000 UTC m=+0.175725321 container attach 4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc (image=quay.io/ceph/ceph:v18, name=wizardly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:48:42 compute-0 podman[94657]: 2025-11-24 19:48:42.500354291 +0000 UTC m=+0.076364168 container create 432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:42 compute-0 systemd[1]: Started libpod-conmon-432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9.scope.
Nov 24 19:48:42 compute-0 podman[94657]: 2025-11-24 19:48:42.474248274 +0000 UTC m=+0.050258191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c766e054829014ee93d4ff5b47175309046b4d7884189b3468295f2b52650cd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c766e054829014ee93d4ff5b47175309046b4d7884189b3468295f2b52650cd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c766e054829014ee93d4ff5b47175309046b4d7884189b3468295f2b52650cd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c766e054829014ee93d4ff5b47175309046b4d7884189b3468295f2b52650cd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c766e054829014ee93d4ff5b47175309046b4d7884189b3468295f2b52650cd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:42 compute-0 podman[94657]: 2025-11-24 19:48:42.62890565 +0000 UTC m=+0.204915567 container init 432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:42 compute-0 podman[94657]: 2025-11-24 19:48:42.641693019 +0000 UTC m=+0.217702876 container start 432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 19:48:42 compute-0 podman[94657]: 2025-11-24 19:48:42.645627803 +0000 UTC m=+0.221637721 container attach 432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:48:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 24 19:48:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 24 19:48:42 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/944495331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 19:48:42 compute-0 ceph-mon[75677]: osdmap e21: 3 total, 3 up, 3 in
Nov 24 19:48:42 compute-0 ceph-mon[75677]: pgmap v51: 7 pgs: 1 creating+peering, 1 peering, 2 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 24 19:48:42 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:48:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 24 19:48:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3167125319' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 19:48:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 24 19:48:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3167125319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 19:48:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 24 19:48:43 compute-0 wizardly_keller[94659]: enabled application 'rbd' on pool 'vms'
Nov 24 19:48:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 24 19:48:43 compute-0 ceph-mon[75677]: osdmap e22: 3 total, 3 up, 3 in
Nov 24 19:48:43 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3167125319' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 19:48:43 compute-0 dazzling_curie[94677]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:48:43 compute-0 dazzling_curie[94677]: --> relative data size: 1.0
Nov 24 19:48:43 compute-0 dazzling_curie[94677]: --> All data devices are unavailable
Nov 24 19:48:43 compute-0 systemd[1]: libpod-4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc.scope: Deactivated successfully.
Nov 24 19:48:43 compute-0 podman[94636]: 2025-11-24 19:48:43.790098135 +0000 UTC m=+1.475958557 container died 4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc (image=quay.io/ceph/ceph:v18, name=wizardly_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f66dd2e93fd76fdeafafae7440991fb0d67b5d5e9231f63f7ee08dd51ef5f28-merged.mount: Deactivated successfully.
Nov 24 19:48:43 compute-0 systemd[1]: libpod-432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9.scope: Deactivated successfully.
Nov 24 19:48:43 compute-0 systemd[1]: libpod-432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9.scope: Consumed 1.143s CPU time.
Nov 24 19:48:43 compute-0 conmon[94677]: conmon 432038a6c5aae1add905 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9.scope/container/memory.events
Nov 24 19:48:43 compute-0 podman[94657]: 2025-11-24 19:48:43.834973998 +0000 UTC m=+1.410983885 container died 432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:43 compute-0 podman[94636]: 2025-11-24 19:48:43.851740421 +0000 UTC m=+1.537600853 container remove 4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc (image=quay.io/ceph/ceph:v18, name=wizardly_keller, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:43 compute-0 systemd[1]: libpod-conmon-4bb8cc3c8f283c8ac2d54a4a154588bbb428a2e859f72128a01c1fa1c76872cc.scope: Deactivated successfully.
Nov 24 19:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c766e054829014ee93d4ff5b47175309046b4d7884189b3468295f2b52650cd7-merged.mount: Deactivated successfully.
Nov 24 19:48:43 compute-0 sudo[94614]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:43 compute-0 podman[94657]: 2025-11-24 19:48:43.923643516 +0000 UTC m=+1.499653353 container remove 432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_curie, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:43 compute-0 systemd[1]: libpod-conmon-432038a6c5aae1add9050ef1b784bdddbe4f9b831c1851313f5e171e43b6dfb9.scope: Deactivated successfully.
Nov 24 19:48:43 compute-0 sudo[94500]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:44 compute-0 sudo[94782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyvrsomypkkkosifhzdazetloqahxjbp ; /usr/bin/python3'
Nov 24 19:48:44 compute-0 sudo[94782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:44 compute-0 sudo[94764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:44 compute-0 sudo[94764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:44 compute-0 sudo[94764]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:44 compute-0 sudo[94802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:44 compute-0 sudo[94802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:44 compute-0 sudo[94802]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:44 compute-0 python3[94799]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:44 compute-0 sudo[94827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:44 compute-0 sudo[94827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:44 compute-0 sudo[94827]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 creating+peering, 1 peering, 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:44 compute-0 podman[94850]: 2025-11-24 19:48:44.296205741 +0000 UTC m=+0.065429639 container create 15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a (image=quay.io/ceph/ceph:v18, name=peaceful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:44 compute-0 systemd[1]: Started libpod-conmon-15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a.scope.
Nov 24 19:48:44 compute-0 sudo[94861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:48:44 compute-0 sudo[94861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:44 compute-0 podman[94850]: 2025-11-24 19:48:44.270674674 +0000 UTC m=+0.039898612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e43e78957c4d14f32d966cfd0a688b3c1702ae1a9a113055b1fbfbf2b73814/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e43e78957c4d14f32d966cfd0a688b3c1702ae1a9a113055b1fbfbf2b73814/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:44 compute-0 podman[94850]: 2025-11-24 19:48:44.40452571 +0000 UTC m=+0.173749648 container init 15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a (image=quay.io/ceph/ceph:v18, name=peaceful_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:44 compute-0 podman[94850]: 2025-11-24 19:48:44.416998664 +0000 UTC m=+0.186222552 container start 15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a (image=quay.io/ceph/ceph:v18, name=peaceful_payne, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 19:48:44 compute-0 podman[94850]: 2025-11-24 19:48:44.421837113 +0000 UTC m=+0.191061061 container attach 15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a (image=quay.io/ceph/ceph:v18, name=peaceful_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 19:48:44 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:48:44 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3167125319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 19:48:44 compute-0 ceph-mon[75677]: osdmap e23: 3 total, 3 up, 3 in
Nov 24 19:48:44 compute-0 ceph-mon[75677]: pgmap v54: 7 pgs: 1 creating+peering, 1 peering, 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.805207274 +0000 UTC m=+0.062466981 container create f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_heisenberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 24 19:48:44 compute-0 systemd[1]: Started libpod-conmon-f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c.scope.
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.77923306 +0000 UTC m=+0.036492787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.903055613 +0000 UTC m=+0.160315400 container init f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.912659109 +0000 UTC m=+0.169918826 container start f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.916202527 +0000 UTC m=+0.173462304 container attach f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:44 compute-0 jolly_heisenberg[94972]: 167 167
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.918752958 +0000 UTC m=+0.176012695 container died f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_heisenberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:44 compute-0 systemd[1]: libpod-f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c.scope: Deactivated successfully.
Nov 24 19:48:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2a63a934aa73cab32c02ba4e5ee3905bce8e0d75d4e8436116535cd5dc1c706-merged.mount: Deactivated successfully.
Nov 24 19:48:44 compute-0 podman[94937]: 2025-11-24 19:48:44.971317447 +0000 UTC m=+0.228577184 container remove f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:48:44 compute-0 systemd[1]: libpod-conmon-f7e99e4b763c1663860568ff5785bd8cb38efaf08c41b5ddc4d3475fe4102f2c.scope: Deactivated successfully.
Nov 24 19:48:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 24 19:48:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/821213709' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 19:48:45 compute-0 podman[94998]: 2025-11-24 19:48:45.19069746 +0000 UTC m=+0.068647862 container create 7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:48:45 compute-0 systemd[1]: Started libpod-conmon-7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b.scope.
Nov 24 19:48:45 compute-0 podman[94998]: 2025-11-24 19:48:45.161474263 +0000 UTC m=+0.039424735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2aa4a73500a43f7202e52125d7c7bcc8c5ef1225bf13c3a062211135188b0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2aa4a73500a43f7202e52125d7c7bcc8c5ef1225bf13c3a062211135188b0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2aa4a73500a43f7202e52125d7c7bcc8c5ef1225bf13c3a062211135188b0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2aa4a73500a43f7202e52125d7c7bcc8c5ef1225bf13c3a062211135188b0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:45 compute-0 podman[94998]: 2025-11-24 19:48:45.289483973 +0000 UTC m=+0.167434385 container init 7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 19:48:45 compute-0 podman[94998]: 2025-11-24 19:48:45.303686556 +0000 UTC m=+0.181636968 container start 7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:45 compute-0 podman[94998]: 2025-11-24 19:48:45.30762853 +0000 UTC m=+0.185578942 container attach 7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:48:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 24 19:48:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/821213709' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 19:48:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 24 19:48:45 compute-0 peaceful_payne[94892]: enabled application 'rbd' on pool 'volumes'
Nov 24 19:48:45 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 24 19:48:45 compute-0 ceph-mon[75677]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:48:45 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/821213709' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 19:48:45 compute-0 systemd[1]: libpod-15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a.scope: Deactivated successfully.
Nov 24 19:48:45 compute-0 podman[94850]: 2025-11-24 19:48:45.823735689 +0000 UTC m=+1.592959607 container died 15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a (image=quay.io/ceph/ceph:v18, name=peaceful_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e43e78957c4d14f32d966cfd0a688b3c1702ae1a9a113055b1fbfbf2b73814-merged.mount: Deactivated successfully.
Nov 24 19:48:45 compute-0 podman[94850]: 2025-11-24 19:48:45.887247576 +0000 UTC m=+1.656471474 container remove 15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a (image=quay.io/ceph/ceph:v18, name=peaceful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:45 compute-0 systemd[1]: libpod-conmon-15ff8a0d15afa70458eb32a24586a69645361ea4318efae45790611911bda43a.scope: Deactivated successfully.
Nov 24 19:48:45 compute-0 sudo[94782]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:46 compute-0 nifty_bohr[95014]: {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:     "0": [
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:         {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "devices": [
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "/dev/loop3"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             ],
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_name": "ceph_lv0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_size": "21470642176",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "name": "ceph_lv0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "tags": {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.crush_device_class": "",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.encrypted": "0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osd_id": "0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.type": "block",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.vdo": "0"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             },
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "type": "block",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "vg_name": "ceph_vg0"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:         }
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:     ],
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:     "1": [
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:         {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "devices": [
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "/dev/loop4"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             ],
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_name": "ceph_lv1",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_size": "21470642176",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "name": "ceph_lv1",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "tags": {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.crush_device_class": "",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.encrypted": "0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osd_id": "1",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.type": "block",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.vdo": "0"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             },
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "type": "block",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "vg_name": "ceph_vg1"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:         }
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:     ],
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:     "2": [
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:         {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "devices": [
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "/dev/loop5"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             ],
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_name": "ceph_lv2",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_size": "21470642176",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "name": "ceph_lv2",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "tags": {
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.crush_device_class": "",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.encrypted": "0",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osd_id": "2",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.type": "block",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:                 "ceph.vdo": "0"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             },
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "type": "block",
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:             "vg_name": "ceph_vg2"
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:         }
Nov 24 19:48:46 compute-0 nifty_bohr[95014]:     ]
Nov 24 19:48:46 compute-0 nifty_bohr[95014]: }
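[Editor's note] The JSON closing above is the tail of a per-OSD map in the shape produced by `ceph-volume lvm list --format json`: top-level keys are OSD ids ("0", "1", "2"), each holding a list of logical-volume records whose ceph.* tags carry the cluster fsid, the osd_fsid, and the drive-group affinity. A minimal sketch of consuming that shape, assuming the container's stdout was saved to a file named osd_lvm_list.json (the file name is hypothetical; in the log the JSON went straight to stdout):

    import json

    # Load the map shown above: {"0": [{...}], "1": [{...}], "2": [{...}]}
    with open("osd_lvm_list.json") as f:
        lvm_list = json.load(f)

    for osd_id, volumes in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:  # one record per backing logical volume
            tags = vol.get("tags", {})
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")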
Nov 24 19:48:46 compute-0 sudo[95058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inndplfggtjoxrihgmobgmjtktwqhgde ; /usr/bin/python3'
Nov 24 19:48:46 compute-0 systemd[1]: libpod-7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b.scope: Deactivated successfully.
Nov 24 19:48:46 compute-0 podman[94998]: 2025-11-24 19:48:46.070769923 +0000 UTC m=+0.948720325 container died 7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 19:48:46 compute-0 sudo[95058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd2aa4a73500a43f7202e52125d7c7bcc8c5ef1225bf13c3a062211135188b0b-merged.mount: Deactivated successfully.
Nov 24 19:48:46 compute-0 podman[94998]: 2025-11-24 19:48:46.144250454 +0000 UTC m=+1.022200866 container remove 7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:46 compute-0 systemd[1]: libpod-conmon-7222ec2303c0aaa9b67f6e01c672d58b2eaf60fc59cc87b47110a0dc1d2a719b.scope: Deactivated successfully.
Nov 24 19:48:46 compute-0 sudo[94861]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:46 compute-0 python3[95061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:46 compute-0 sudo[95073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:46 compute-0 sudo[95073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:46 compute-0 sudo[95073]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:46 compute-0 podman[95091]: 2025-11-24 19:48:46.336369831 +0000 UTC m=+0.072091878 container create 28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78 (image=quay.io/ceph/ceph:v18, name=interesting_haibt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 24 19:48:46 compute-0 systemd[1]: Started libpod-conmon-28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78.scope.
Nov 24 19:48:46 compute-0 sudo[95108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:46 compute-0 sudo[95108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:46 compute-0 sudo[95108]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:46 compute-0 podman[95091]: 2025-11-24 19:48:46.310885815 +0000 UTC m=+0.046607912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfa459c178ba345b649bda811bdef6eff30681c3346cca0a3d8ba6f69f23a7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfa459c178ba345b649bda811bdef6eff30681c3346cca0a3d8ba6f69f23a7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:46 compute-0 podman[95091]: 2025-11-24 19:48:46.435367158 +0000 UTC m=+0.171089255 container init 28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78 (image=quay.io/ceph/ceph:v18, name=interesting_haibt, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:48:46 compute-0 podman[95091]: 2025-11-24 19:48:46.446788715 +0000 UTC m=+0.182510762 container start 28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78 (image=quay.io/ceph/ceph:v18, name=interesting_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:46 compute-0 podman[95091]: 2025-11-24 19:48:46.451106495 +0000 UTC m=+0.186828602 container attach 28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78 (image=quay.io/ceph/ceph:v18, name=interesting_haibt, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:48:46 compute-0 sudo[95141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:46 compute-0 sudo[95141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:46 compute-0 sudo[95141]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:46 compute-0 sudo[95167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
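[Editor's note] The sudo line above shows cephadm wrapping `ceph-volume ... raw list --format json` in the pinned ceph container image. A sketch of capturing the same inventory programmatically, with the image digest and fsid copied from the log (run as root; the `--timeout` flag from the logged call is omitted here, and the whole snippet is illustrative rather than the job's actual tooling):

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Mirrors the logged invocation: cephadm shells into the container
    # and relays ceph-volume's JSON on stdout.
    proc = subprocess.run(
        ["cephadm", "--image", IMAGE, "ceph-volume",
         "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True)

    inventory = json.loads(proc.stdout)  # same shape as the map logged below
    print(sorted(inventory))             # osd_uuid keys, one per OSD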
Nov 24 19:48:46 compute-0 sudo[95167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:46 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/821213709' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 19:48:46 compute-0 ceph-mon[75677]: osdmap e24: 3 total, 3 up, 3 in
Nov 24 19:48:46 compute-0 ceph-mon[75677]: pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 24 19:48:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3352164533' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.029814227 +0000 UTC m=+0.066113291 container create 8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:47 compute-0 systemd[1]: Started libpod-conmon-8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2.scope.
Nov 24 19:48:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.002487981 +0000 UTC m=+0.038787055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.11514235 +0000 UTC m=+0.151441474 container init 8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.124620766 +0000 UTC m=+0.160919830 container start 8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.128925096 +0000 UTC m=+0.165224160 container attach 8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:47 compute-0 reverent_pare[95267]: 167 167
Nov 24 19:48:47 compute-0 systemd[1]: libpod-8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2.scope: Deactivated successfully.
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.131733682 +0000 UTC m=+0.168032756 container died 8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 24 19:48:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d91ed3e64689032e4a03afe08a41671ae473c236dd6715c19773dcd2b02b25-merged.mount: Deactivated successfully.
Nov 24 19:48:47 compute-0 podman[95250]: 2025-11-24 19:48:47.182551521 +0000 UTC m=+0.218850585 container remove 8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:47 compute-0 systemd[1]: libpod-conmon-8358a3eaec4e85c39650e704ead907d37e01e88243fe5d5c42b5c37d7e9a57d2.scope: Deactivated successfully.
Nov 24 19:48:47 compute-0 podman[95292]: 2025-11-24 19:48:47.406195864 +0000 UTC m=+0.071035571 container create 992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cohen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:47 compute-0 systemd[1]: Started libpod-conmon-992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b.scope.
Nov 24 19:48:47 compute-0 podman[95292]: 2025-11-24 19:48:47.37778813 +0000 UTC m=+0.042627877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd87817239e83877e6976fa83f1bd895edd48fcd1b5feffdf396565bc6b1607/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd87817239e83877e6976fa83f1bd895edd48fcd1b5feffdf396565bc6b1607/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd87817239e83877e6976fa83f1bd895edd48fcd1b5feffdf396565bc6b1607/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd87817239e83877e6976fa83f1bd895edd48fcd1b5feffdf396565bc6b1607/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:47 compute-0 podman[95292]: 2025-11-24 19:48:47.515103013 +0000 UTC m=+0.179942780 container init 992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:47 compute-0 podman[95292]: 2025-11-24 19:48:47.529855424 +0000 UTC m=+0.194695121 container start 992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:47 compute-0 podman[95292]: 2025-11-24 19:48:47.534271296 +0000 UTC m=+0.199111053 container attach 992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cohen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 19:48:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 24 19:48:47 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3352164533' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 19:48:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3352164533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 19:48:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 24 19:48:47 compute-0 interesting_haibt[95137]: enabled application 'rbd' on pool 'backups'
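[Editor's note] The round-trip above (mon handle_command, audit dispatch, audit finished, then the container's confirmation line) is the monitor answering an `osd pool application enable` request sent by the podman-wrapped `ceph` CLI. A sketch of the same request through the python-rados binding, assuming python3-rados is available and reusing the conf/keyring paths from the log; treat the mon_command usage as illustrative, not as what the job actually ran:

    import json
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    try:
        # Same command body the mon logs as dispatch/finished above.
        cmd = json.dumps({"prefix": "osd pool application enable",
                          "pool": "backups", "app": "rbd"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outs)  # ret == 0 on success
    finally:
        cluster.shutdown()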
Nov 24 19:48:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 24 19:48:47 compute-0 systemd[1]: libpod-28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78.scope: Deactivated successfully.
Nov 24 19:48:47 compute-0 podman[95091]: 2025-11-24 19:48:47.865492196 +0000 UTC m=+1.601214243 container died 28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78 (image=quay.io/ceph/ceph:v18, name=interesting_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfa459c178ba345b649bda811bdef6eff30681c3346cca0a3d8ba6f69f23a7e-merged.mount: Deactivated successfully.
Nov 24 19:48:47 compute-0 podman[95091]: 2025-11-24 19:48:47.922378634 +0000 UTC m=+1.658100681 container remove 28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78 (image=quay.io/ceph/ceph:v18, name=interesting_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:47 compute-0 systemd[1]: libpod-conmon-28d5110467bb9603356c536edafc2a27cbac3d2f6f17ed02ff465ed7ec5dab78.scope: Deactivated successfully.
Nov 24 19:48:47 compute-0 sudo[95058]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:48 compute-0 sshd-session[76781]: Connection closed by authenticating user root 27.79.44.141 port 59496 [preauth]
Nov 24 19:48:48 compute-0 sudo[95348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpflozcnvsnygnvxlqppnzzptgwiixhu ; /usr/bin/python3'
Nov 24 19:48:48 compute-0 sudo[95348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:48 compute-0 python3[95350]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:48 compute-0 podman[95359]: 2025-11-24 19:48:48.390766755 +0000 UTC m=+0.072668748 container create 2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f (image=quay.io/ceph/ceph:v18, name=hungry_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:48:48 compute-0 systemd[1]: Started libpod-conmon-2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f.scope.
Nov 24 19:48:48 compute-0 podman[95359]: 2025-11-24 19:48:48.360187036 +0000 UTC m=+0.042089089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69f4110f5548bb689161f55e8555feb722dd559c3f3b71801ece728d3fc20f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69f4110f5548bb689161f55e8555feb722dd559c3f3b71801ece728d3fc20f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:48 compute-0 podman[95359]: 2025-11-24 19:48:48.496949298 +0000 UTC m=+0.178851351 container init 2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f (image=quay.io/ceph/ceph:v18, name=hungry_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:48 compute-0 podman[95359]: 2025-11-24 19:48:48.508374366 +0000 UTC m=+0.190276369 container start 2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f (image=quay.io/ceph/ceph:v18, name=hungry_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:48:48 compute-0 podman[95359]: 2025-11-24 19:48:48.512407521 +0000 UTC m=+0.194309504 container attach 2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f (image=quay.io/ceph/ceph:v18, name=hungry_shamir, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]: {
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "osd_id": 2,
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "type": "bluestore"
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:     },
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "osd_id": 1,
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "type": "bluestore"
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:     },
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "osd_id": 0,
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:         "type": "bluestore"
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]:     }
Nov 24 19:48:48 compute-0 ecstatic_cohen[95308]: }
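[Editor's note] This raw-list map is keyed by osd_uuid and, unlike the lvm listing earlier in the log, resolves each OSD to its device-mapper node. A short consistency check over such a capture (raw_list.json is a hypothetical file holding the container's stdout; the expected fsid is copied from the log):

    import json

    EXPECTED_FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"

    with open("raw_list.json") as f:
        raw = json.load(f)

    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        # Each entry should self-reference its key and carry the cluster fsid.
        assert info["osd_uuid"] == osd_uuid
        assert info["ceph_fsid"] == EXPECTED_FSID, \
            f"osd.{info['osd_id']}: fsid mismatch"
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")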
Nov 24 19:48:48 compute-0 systemd[1]: libpod-992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b.scope: Deactivated successfully.
Nov 24 19:48:48 compute-0 systemd[1]: libpod-992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b.scope: Consumed 1.104s CPU time.
Nov 24 19:48:48 compute-0 podman[95398]: 2025-11-24 19:48:48.692709676 +0000 UTC m=+0.038304916 container died 992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cohen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cd87817239e83877e6976fa83f1bd895edd48fcd1b5feffdf396565bc6b1607-merged.mount: Deactivated successfully.
Nov 24 19:48:48 compute-0 podman[95398]: 2025-11-24 19:48:48.787319281 +0000 UTC m=+0.132914521 container remove 992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cohen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:48 compute-0 systemd[1]: libpod-conmon-992e52cf109e0d127697fc9a20062b3a0cb7fefa753e9467f15a0e1ba687221b.scope: Deactivated successfully.
Nov 24 19:48:48 compute-0 sudo[95167]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:48 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3352164533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 19:48:48 compute-0 ceph-mon[75677]: osdmap e25: 3 total, 3 up, 3 in
Nov 24 19:48:48 compute-0 ceph-mon[75677]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:48 compute-0 sudo[95432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:48 compute-0 sudo[95432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:48 compute-0 sudo[95432]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:49 compute-0 sudo[95457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:48:49 compute-0 sudo[95457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:49 compute-0 sudo[95457]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 19:48:49 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:49 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 19:48:49 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1849665822' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 19:48:49 compute-0 sudo[95483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:49 compute-0 sudo[95483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:49 compute-0 sudo[95483]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:49 compute-0 sudo[95508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:49 compute-0 sudo[95508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:49 compute-0 sudo[95508]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:49 compute-0 sudo[95533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:49 compute-0 sudo[95533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:49 compute-0 sudo[95533]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:49 compute-0 sudo[95558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:48:49 compute-0 sudo[95558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.741266792 +0000 UTC m=+0.074095912 container create ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 19:48:49 compute-0 systemd[1]: Started libpod-conmon-ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246.scope.
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.709486602 +0000 UTC m=+0.042315772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.841780833 +0000 UTC m=+0.174610003 container init ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.8514081 +0000 UTC m=+0.184237210 container start ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.856776868 +0000 UTC m=+0.189605978 container attach ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 19:48:49 compute-0 recursing_neumann[95616]: 167 167
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 systemd[1]: libpod-ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246.scope: Deactivated successfully.
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:49 compute-0 ceph-mon[75677]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 19:48:49 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1849665822' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.859843648 +0000 UTC m=+0.192672768 container died ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a89d502f060657370b3ca2453f10befb24b03ce7dcce079f02582555714a07c-merged.mount: Deactivated successfully.
Nov 24 19:48:49 compute-0 podman[95599]: 2025-11-24 19:48:49.910053028 +0000 UTC m=+0.242882138 container remove ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 19:48:49 compute-0 systemd[1]: libpod-conmon-ca69ec3ea0cc91faf3b9603bb9750ad3d11b42fef297ad25fda3f2b57065a246.scope: Deactivated successfully.
Nov 24 19:48:49 compute-0 sudo[95558]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:49 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ofslrn (unknown last config time)...
Nov 24 19:48:49 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ofslrn (unknown last config time)...
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ofslrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ofslrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:50 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ofslrn on compute-0
Nov 24 19:48:50 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ofslrn on compute-0
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1849665822' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 24 19:48:50 compute-0 hungry_shamir[95383]: enabled application 'rbd' on pool 'images'
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 24 19:48:50 compute-0 systemd[1]: libpod-2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f.scope: Deactivated successfully.
Nov 24 19:48:50 compute-0 podman[95359]: 2025-11-24 19:48:50.088981731 +0000 UTC m=+1.770883734 container died 2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f (image=quay.io/ceph/ceph:v18, name=hungry_shamir, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:48:50 compute-0 sudo[95634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:50 compute-0 sudo[95634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:50 compute-0 sudo[95634]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f69f4110f5548bb689161f55e8555feb722dd559c3f3b71801ece728d3fc20f7-merged.mount: Deactivated successfully.
Nov 24 19:48:50 compute-0 podman[95359]: 2025-11-24 19:48:50.154617982 +0000 UTC m=+1.836519995 container remove 2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f (image=quay.io/ceph/ceph:v18, name=hungry_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:50 compute-0 systemd[1]: libpod-conmon-2c2261b84876ed6fcfa4c53e38f736cd853e44042fde68ab7e6dda54b1b78d0f.scope: Deactivated successfully.
Nov 24 19:48:50 compute-0 sudo[95348]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:50 compute-0 sudo[95667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:50 compute-0 sudo[95667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:50 compute-0 sudo[95667]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:50 compute-0 sudo[95699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:50 compute-0 sudo[95699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:50 compute-0 sudo[95699]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:50 compute-0 sudo[95767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsjmwsktyatplkvdemhbsrizlhlhejp ; /usr/bin/python3'
Nov 24 19:48:50 compute-0 sudo[95767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:50 compute-0 sudo[95731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:48:50 compute-0 sudo[95731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:50 compute-0 python3[95772]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:50 compute-0 podman[95777]: 2025-11-24 19:48:50.615751114 +0000 UTC m=+0.064898461 container create 21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087 (image=quay.io/ceph/ceph:v18, name=xenodochial_beaver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 19:48:50 compute-0 systemd[1]: Started libpod-conmon-21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087.scope.
Nov 24 19:48:50 compute-0 podman[95777]: 2025-11-24 19:48:50.592343211 +0000 UTC m=+0.041490588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d597cf421d0bbd7635d53bec072a11dd40febc89bac0a5a0dbc6d39feea202/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d597cf421d0bbd7635d53bec072a11dd40febc89bac0a5a0dbc6d39feea202/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.709656407 +0000 UTC m=+0.069923373 container create dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:48:50 compute-0 podman[95777]: 2025-11-24 19:48:50.71651934 +0000 UTC m=+0.165666767 container init 21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087 (image=quay.io/ceph/ceph:v18, name=xenodochial_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:50 compute-0 podman[95777]: 2025-11-24 19:48:50.727464639 +0000 UTC m=+0.176612016 container start 21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087 (image=quay.io/ceph/ceph:v18, name=xenodochial_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:50 compute-0 podman[95777]: 2025-11-24 19:48:50.732294497 +0000 UTC m=+0.181441874 container attach 21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087 (image=quay.io/ceph/ceph:v18, name=xenodochial_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:50 compute-0 systemd[1]: Started libpod-conmon-dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992.scope.
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.679968953 +0000 UTC m=+0.040235959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.802031356 +0000 UTC m=+0.162298322 container init dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.813947771 +0000 UTC m=+0.174214727 container start dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:48:50 compute-0 cranky_agnesi[95826]: 167 167
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.818277781 +0000 UTC m=+0.178544797 container attach dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:48:50 compute-0 systemd[1]: libpod-dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992.scope: Deactivated successfully.
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.819135005 +0000 UTC m=+0.179401971 container died dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 19:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6e99ca7606dcf68c72507b718c0deff46f549691b1adda83bb94af5cba0d26-merged.mount: Deactivated successfully.
Nov 24 19:48:50 compute-0 podman[95802]: 2025-11-24 19:48:50.865357761 +0000 UTC m=+0.225624727 container remove dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:50 compute-0 systemd[1]: libpod-conmon-dc2a76aeb1bb51dcc1654cd1e3610b6250bf470b3a14c8abb8ce544650ab1992.scope: Deactivated successfully.
Nov 24 19:48:50 compute-0 sudo[95731]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:50 compute-0 ceph-mon[75677]: Reconfiguring mgr.compute-0.ofslrn (unknown last config time)...
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ofslrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:50 compute-0 ceph-mon[75677]: Reconfiguring daemon mgr.compute-0.ofslrn on compute-0
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1849665822' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 19:48:50 compute-0 ceph-mon[75677]: osdmap e26: 3 total, 3 up, 3 in
Nov 24 19:48:50 compute-0 ceph-mon[75677]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:51 compute-0 sudo[95845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:51 compute-0 sudo[95845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:51 compute-0 sudo[95845]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:48:51 compute-0 sudo[95872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:51 compute-0 sudo[95872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:51 compute-0 sudo[95872]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:51 compute-0 sudo[95914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:51 compute-0 sudo[95914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:51 compute-0 sudo[95914]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:51 compute-0 sudo[95939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:48:51 compute-0 sudo[95939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 24 19:48:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539301919' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 19:48:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:51 compute-0 podman[96035]: 2025-11-24 19:48:51.927200123 +0000 UTC m=+0.086150858 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:51 compute-0 ceph-mon[75677]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:48:51 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3539301919' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 19:48:52 compute-0 podman[96035]: 2025-11-24 19:48:52.044202184 +0000 UTC m=+0.203152849 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 24 19:48:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3539301919' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 19:48:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 24 19:48:52 compute-0 xenodochial_beaver[95817]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 24 19:48:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 24 19:48:52 compute-0 systemd[1]: libpod-21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087.scope: Deactivated successfully.
Nov 24 19:48:52 compute-0 podman[95777]: 2025-11-24 19:48:52.098510721 +0000 UTC m=+1.547658098 container died 21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087 (image=quay.io/ceph/ceph:v18, name=xenodochial_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d597cf421d0bbd7635d53bec072a11dd40febc89bac0a5a0dbc6d39feea202-merged.mount: Deactivated successfully.
Nov 24 19:48:52 compute-0 podman[95777]: 2025-11-24 19:48:52.152692995 +0000 UTC m=+1.601840362 container remove 21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087 (image=quay.io/ceph/ceph:v18, name=xenodochial_beaver, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:48:52 compute-0 systemd[1]: libpod-conmon-21ac29c5d2506eafb3f60a966b2ace261d7d08a43b47de8b90cd645428a7c087.scope: Deactivated successfully.
Nov 24 19:48:52 compute-0 sudo[95767]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:52 compute-0 sudo[96142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzkuvcgqlrejqaevjtzsinhtrwqobov ; /usr/bin/python3'
Nov 24 19:48:52 compute-0 sudo[96142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:52 compute-0 python3[96147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:48:52 compute-0 podman[96177]: 2025-11-24 19:48:52.614691371 +0000 UTC m=+0.076312897 container create fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6 (image=quay.io/ceph/ceph:v18, name=infallible_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:48:52 compute-0 systemd[1]: Started libpod-conmon-fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6.scope.
Nov 24 19:48:52 compute-0 podman[96177]: 2025-11-24 19:48:52.586082394 +0000 UTC m=+0.047703930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc1757b2eb857ff8445bda9e9ba2c4fda2d29adb35145229cb451befb1c2c79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc1757b2eb857ff8445bda9e9ba2c4fda2d29adb35145229cb451befb1c2c79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:52 compute-0 sudo[95939]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:52 compute-0 podman[96177]: 2025-11-24 19:48:52.725163856 +0000 UTC m=+0.186785382 container init fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6 (image=quay.io/ceph/ceph:v18, name=infallible_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:48:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:48:52 compute-0 podman[96177]: 2025-11-24 19:48:52.735866211 +0000 UTC m=+0.197487707 container start fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6 (image=quay.io/ceph/ceph:v18, name=infallible_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 19:48:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:48:52 compute-0 podman[96177]: 2025-11-24 19:48:52.741391681 +0000 UTC m=+0.203013177 container attach fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6 (image=quay.io/ceph/ceph:v18, name=infallible_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:48:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:52 compute-0 sudo[96216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:52 compute-0 sudo[96216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:52 compute-0 sudo[96216]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:52 compute-0 sudo[96241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:52 compute-0 sudo[96241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:52 compute-0 sudo[96241]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:53 compute-0 sudo[96266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:53 compute-0 sudo[96266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:53 compute-0 sudo[96266]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:53 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3539301919' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 19:48:53 compute-0 ceph-mon[75677]: osdmap e27: 3 total, 3 up, 3 in
Nov 24 19:48:53 compute-0 ceph-mon[75677]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:53 compute-0 sudo[96293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:48:53 compute-0 sudo[96293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/399185542' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 19:48:53 compute-0 sudo[96293]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8ea2a3f3-7d29-4aa4-b1ca-63b48369c87f does not exist
Nov 24 19:48:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f5c4ec7f-756e-44d6-b0f8-3e3a16e53a76 does not exist
Nov 24 19:48:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ee0a9304-3122-40eb-afe5-e5ff998c2b92 does not exist
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:48:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:48:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:53 compute-0 sudo[96365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:53 compute-0 sudo[96365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:53 compute-0 sudo[96365]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:53 compute-0 sudo[96390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:53 compute-0 sudo[96390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:53 compute-0 sudo[96390]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:53 compute-0 sudo[96415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:53 compute-0 sudo[96415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:54 compute-0 sudo[96415]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:54 compute-0 sudo[96440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:48:54 compute-0 sudo[96440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/399185542' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:48:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:48:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/399185542' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 19:48:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 24 19:48:54 compute-0 infallible_carver[96212]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 24 19:48:54 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 24 19:48:54 compute-0 systemd[1]: libpod-fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6.scope: Deactivated successfully.
Nov 24 19:48:54 compute-0 podman[96177]: 2025-11-24 19:48:54.137204808 +0000 UTC m=+1.598826324 container died fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6 (image=quay.io/ceph/ceph:v18, name=infallible_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cc1757b2eb857ff8445bda9e9ba2c4fda2d29adb35145229cb451befb1c2c79-merged.mount: Deactivated successfully.
Nov 24 19:48:54 compute-0 podman[96177]: 2025-11-24 19:48:54.194901859 +0000 UTC m=+1.656523375 container remove fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6 (image=quay.io/ceph/ceph:v18, name=infallible_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:54 compute-0 systemd[1]: libpod-conmon-fbe50c5c1bec64acf04a0a1abd26373573a5bcce8f3db35486b00a27bd7448f6.scope: Deactivated successfully.
Nov 24 19:48:54 compute-0 sudo[96142]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.582175515 +0000 UTC m=+0.060517540 container create 0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sutherland, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:54 compute-0 systemd[1]: Started libpod-conmon-0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1.scope.
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.561467226 +0000 UTC m=+0.039809261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.682003785 +0000 UTC m=+0.160345870 container init 0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.691680563 +0000 UTC m=+0.170022588 container start 0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.696177656 +0000 UTC m=+0.174519681 container attach 0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sutherland, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:54 compute-0 bold_sutherland[96535]: 167 167
Nov 24 19:48:54 compute-0 systemd[1]: libpod-0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1.scope: Deactivated successfully.
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.700140022 +0000 UTC m=+0.178482057 container died 0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sutherland, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 19:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-adbe178f07c0b718e58fdbb5ee9f7e4a9d0e1dc0885fa58a16b42ed7ece0757e-merged.mount: Deactivated successfully.
Nov 24 19:48:54 compute-0 podman[96519]: 2025-11-24 19:48:54.75270687 +0000 UTC m=+0.231048895 container remove 0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:48:54 compute-0 systemd[1]: libpod-conmon-0a18c88227bc5f2065117712a081b82c8c5ba5b880e035ca8403a845d93b3dc1.scope: Deactivated successfully.
Nov 24 19:48:54 compute-0 podman[96603]: 2025-11-24 19:48:54.999231947 +0000 UTC m=+0.073566674 container create d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:55 compute-0 systemd[1]: Started libpod-conmon-d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467.scope.
Nov 24 19:48:55 compute-0 podman[96603]: 2025-11-24 19:48:54.966343369 +0000 UTC m=+0.040678156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c594353b19e09dff1137572656a6fcdad23c3f56d97e9abd6d5af3f26602c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c594353b19e09dff1137572656a6fcdad23c3f56d97e9abd6d5af3f26602c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c594353b19e09dff1137572656a6fcdad23c3f56d97e9abd6d5af3f26602c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c594353b19e09dff1137572656a6fcdad23c3f56d97e9abd6d5af3f26602c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c594353b19e09dff1137572656a6fcdad23c3f56d97e9abd6d5af3f26602c16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 19:48:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 19:48:55 compute-0 podman[96603]: 2025-11-24 19:48:55.110842069 +0000 UTC m=+0.185176786 container init d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:48:55 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/399185542' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 19:48:55 compute-0 ceph-mon[75677]: osdmap e28: 3 total, 3 up, 3 in
Nov 24 19:48:55 compute-0 ceph-mon[75677]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:55 compute-0 podman[96603]: 2025-11-24 19:48:55.124113196 +0000 UTC m=+0.198447913 container start d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:55 compute-0 podman[96603]: 2025-11-24 19:48:55.128286194 +0000 UTC m=+0.202620931 container attach d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 19:48:55 compute-0 python3[96651]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:48:55 compute-0 python3[96727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013734.8462906-37739-191800450850632/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:48:56 compute-0 sudo[96841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqebljaqogjxzafkkkbmgmypaznqasm ; /usr/bin/python3'
Nov 24 19:48:56 compute-0 sudo[96841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:56 compute-0 ceph-mon[75677]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 19:48:56 compute-0 ceph-mon[75677]: Cluster is now healthy
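[The health transition logged above follows from tagging the new CephFS data pool; POOL_APP_NOT_ENABLED clears once every pool carries an application tag. A minimal sketch of the commands behind these entries, assuming the default admin keyring; pool and app names are taken from the audit line above:

    # tag the pool so the POOL_APP_NOT_ENABLED health check clears
    ceph osd pool application enable cephfs.cephfs.data cephfs
    # confirm the cluster is back to HEALTH_OK
    ceph -s
]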
Nov 24 19:48:56 compute-0 python3[96845]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:48:56 compute-0 mystifying_dewdney[96652]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:48:56 compute-0 mystifying_dewdney[96652]: --> relative data size: 1.0
Nov 24 19:48:56 compute-0 mystifying_dewdney[96652]: --> All data devices are unavailable
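[The three mystifying_dewdney lines above are ceph-volume's batch planner reporting that every candidate device is an already-consumed LV, so no new OSDs are created. A sketch of reproducing that dry run by hand, assuming the cephadm binary is installed; the device arguments are illustrative, taken from the LV listing later in this log:

    # dry-run the OSD batch plan; LVs that already back OSDs report as unavailable
    sudo cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
        lvm batch --report /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2
]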
Nov 24 19:48:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:56 compute-0 systemd[1]: libpod-d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467.scope: Deactivated successfully.
Nov 24 19:48:56 compute-0 systemd[1]: libpod-d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467.scope: Consumed 1.107s CPU time.
Nov 24 19:48:56 compute-0 sudo[96841]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:56 compute-0 podman[96856]: 2025-11-24 19:48:56.346364398 +0000 UTC m=+0.030378977 container died d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c594353b19e09dff1137572656a6fcdad23c3f56d97e9abd6d5af3f26602c16-merged.mount: Deactivated successfully.
Nov 24 19:48:56 compute-0 podman[96856]: 2025-11-24 19:48:56.392818907 +0000 UTC m=+0.076833436 container remove d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:48:56 compute-0 systemd[1]: libpod-conmon-d651b60b2ef8e5f4bc811828fd80d6222c3d4c9e2782e71fa55278cfb1fee467.scope: Deactivated successfully.
Nov 24 19:48:56 compute-0 sudo[96440]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:56 compute-0 sudo[96920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:56 compute-0 sudo[96961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysifxpkrmjskwrhpxjwwrlduqktedrea ; /usr/bin/python3'
Nov 24 19:48:56 compute-0 sudo[96920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:56 compute-0 sudo[96961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:56 compute-0 sudo[96920]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:56 compute-0 sudo[96967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:56 compute-0 sudo[96967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:56 compute-0 sudo[96967]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:56 compute-0 python3[96965]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013735.9104717-37753-29996137692408/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d51e3c44bbee7c2d1d1f1875de42f8a02c3d2189 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:48:56 compute-0 sudo[96992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:56 compute-0 sudo[96992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:56 compute-0 sudo[96992]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:56 compute-0 sudo[96961]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:56 compute-0 sudo[97017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:48:56 compute-0 sudo[97017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:48:56 compute-0 sudo[97090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbtcwejhzhritusbdfhzdrpohlsdvmti ; /usr/bin/python3'
Nov 24 19:48:56 compute-0 sudo[97090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:57 compute-0 python3[97098]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
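[The podman invocation above is the usual pattern for a one-shot ceph CLI call from a container image: override the entrypoint, share host networking, and bind-mount the config and keyring. A minimal sketch of the same pattern, assuming /etc/ceph is populated as in this run:

    # one-shot "ceph -s" from the reef image; --rm discards the container afterwards
    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 -s
]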
Nov 24 19:48:57 compute-0 ceph-mon[75677]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:57 compute-0 podman[97130]: 2025-11-24 19:48:57.137680592 +0000 UTC m=+0.062825747 container create ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4 (image=quay.io/ceph/ceph:v18, name=wizardly_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.183045303 +0000 UTC m=+0.074088991 container create bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:48:57 compute-0 systemd[1]: Started libpod-conmon-ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4.scope.
Nov 24 19:48:57 compute-0 podman[97130]: 2025-11-24 19:48:57.107207244 +0000 UTC m=+0.032352469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03374f2988ee2254a747b13b1cd3a3b3aca262c99a1cd3abdae101c757e2701f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03374f2988ee2254a747b13b1cd3a3b3aca262c99a1cd3abdae101c757e2701f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03374f2988ee2254a747b13b1cd3a3b3aca262c99a1cd3abdae101c757e2701f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 systemd[1]: Started libpod-conmon-bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936.scope.
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.154975545 +0000 UTC m=+0.046019283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:57 compute-0 podman[97130]: 2025-11-24 19:48:57.248871689 +0000 UTC m=+0.174016864 container init ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4 (image=quay.io/ceph/ceph:v18, name=wizardly_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:57 compute-0 podman[97130]: 2025-11-24 19:48:57.263231813 +0000 UTC m=+0.188376978 container start ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4 (image=quay.io/ceph/ceph:v18, name=wizardly_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:48:57 compute-0 podman[97130]: 2025-11-24 19:48:57.267616984 +0000 UTC m=+0.192762189 container attach ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4 (image=quay.io/ceph/ceph:v18, name=wizardly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.271871184 +0000 UTC m=+0.162914922 container init bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.280729428 +0000 UTC m=+0.171773116 container start bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.28507387 +0000 UTC m=+0.176117618 container attach bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:57 compute-0 eloquent_grothendieck[97163]: 167 167
Nov 24 19:48:57 compute-0 systemd[1]: libpod-bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936.scope: Deactivated successfully.
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.287828885 +0000 UTC m=+0.178872583 container died bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c108bf06451e2fee424ad6be25ca4f0e2f6e15cbf583928fefef72304722e88a-merged.mount: Deactivated successfully.
Nov 24 19:48:57 compute-0 podman[97136]: 2025-11-24 19:48:57.344693723 +0000 UTC m=+0.235737411 container remove bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:57 compute-0 systemd[1]: libpod-conmon-bed44e9d1be994dabdfd62e5bca8ffe6b9ed8d30acd6c044a53c804aefcde936.scope: Deactivated successfully.
Nov 24 19:48:57 compute-0 podman[97189]: 2025-11-24 19:48:57.596394034 +0000 UTC m=+0.070411020 container create fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_knuth, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:57 compute-0 systemd[1]: Started libpod-conmon-fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430.scope.
Nov 24 19:48:57 compute-0 podman[97189]: 2025-11-24 19:48:57.568227514 +0000 UTC m=+0.042244560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a346db84ff8ea9aeafa1b265f396c3236dace0e83e9cc67b49deacac445e59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a346db84ff8ea9aeafa1b265f396c3236dace0e83e9cc67b49deacac445e59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a346db84ff8ea9aeafa1b265f396c3236dace0e83e9cc67b49deacac445e59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a346db84ff8ea9aeafa1b265f396c3236dace0e83e9cc67b49deacac445e59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:57 compute-0 podman[97189]: 2025-11-24 19:48:57.696974507 +0000 UTC m=+0.170991493 container init fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_knuth, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 19:48:57 compute-0 podman[97189]: 2025-11-24 19:48:57.71248743 +0000 UTC m=+0.186504416 container start fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 19:48:57 compute-0 podman[97189]: 2025-11-24 19:48:57.716836341 +0000 UTC m=+0.190853367 container attach fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_knuth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 19:48:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3330710173' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 19:48:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3330710173' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 19:48:57 compute-0 wizardly_satoshi[97158]: 
Nov 24 19:48:57 compute-0 wizardly_satoshi[97158]: [global]
Nov 24 19:48:57 compute-0 wizardly_satoshi[97158]:         fsid = 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:48:57 compute-0 wizardly_satoshi[97158]:         mon_host = 192.168.122.100
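[The wizardly_satoshi output above appears to be what assimilate-conf hands back: the bootstrap remainder (fsid, mon_host) that must stay in a local file because clients need it before they can reach the monitors' config database. A sketch of capturing that remainder to a minimal conf, assuming write access to /etc/ceph; the output file name is illustrative:

    # store everything movable centrally, write the irreducible remainder locally
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.minimal
]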
Nov 24 19:48:57 compute-0 systemd[1]: libpod-ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4.scope: Deactivated successfully.
Nov 24 19:48:57 compute-0 conmon[97158]: conmon ba441ede7ef66d07a5e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4.scope/container/memory.events
Nov 24 19:48:57 compute-0 podman[97231]: 2025-11-24 19:48:57.859988779 +0000 UTC m=+0.029792718 container died ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4 (image=quay.io/ceph/ceph:v18, name=wizardly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-03374f2988ee2254a747b13b1cd3a3b3aca262c99a1cd3abdae101c757e2701f-merged.mount: Deactivated successfully.
Nov 24 19:48:57 compute-0 podman[97231]: 2025-11-24 19:48:57.912431595 +0000 UTC m=+0.082235464 container remove ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4 (image=quay.io/ceph/ceph:v18, name=wizardly_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 19:48:57 compute-0 systemd[1]: libpod-conmon-ba441ede7ef66d07a5e584c20cb5422774dc5374784587d4d7e3668632763fa4.scope: Deactivated successfully.
Nov 24 19:48:57 compute-0 sudo[97090]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:58 compute-0 sudo[97269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjskchqnnrkubcurdzzlomwevjseffcr ; /usr/bin/python3'
Nov 24 19:48:58 compute-0 sudo[97269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:58 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3330710173' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 19:48:58 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3330710173' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 19:48:58 compute-0 python3[97271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
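[config-key is the monitors' generic key/value store; the value set above disables SSLv2/SSLv3/TLSv1.0/TLSv1.1 for whatever consumer the deployment tooling points at it (presumably the RGW templating, given the surrounding tasks). A one-line sketch for verifying the stored value, assuming the same admin credentials:

    # read back the protocol-restriction string written above
    ceph config-key get ssl_option
]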
Nov 24 19:48:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:58 compute-0 podman[97272]: 2025-11-24 19:48:58.372163875 +0000 UTC m=+0.075049269 container create 73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce (image=quay.io/ceph/ceph:v18, name=interesting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:58 compute-0 systemd[1]: Started libpod-conmon-73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce.scope.
Nov 24 19:48:58 compute-0 podman[97272]: 2025-11-24 19:48:58.341681605 +0000 UTC m=+0.044567059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20dcd078afe4d7b27c293e9020ffbfb36f4b784bcfc0c5d3a0730f43a3bfc475/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20dcd078afe4d7b27c293e9020ffbfb36f4b784bcfc0c5d3a0730f43a3bfc475/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20dcd078afe4d7b27c293e9020ffbfb36f4b784bcfc0c5d3a0730f43a3bfc475/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:58 compute-0 podman[97272]: 2025-11-24 19:48:58.478039581 +0000 UTC m=+0.180925015 container init 73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce (image=quay.io/ceph/ceph:v18, name=interesting_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:58 compute-0 podman[97272]: 2025-11-24 19:48:58.488612526 +0000 UTC m=+0.191497910 container start 73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce (image=quay.io/ceph/ceph:v18, name=interesting_hofstadter, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:58 compute-0 podman[97272]: 2025-11-24 19:48:58.493114347 +0000 UTC m=+0.195999741 container attach 73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce (image=quay.io/ceph/ceph:v18, name=interesting_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]: {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:     "0": [
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:         {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "devices": [
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "/dev/loop3"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             ],
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_name": "ceph_lv0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_size": "21470642176",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "name": "ceph_lv0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "tags": {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.crush_device_class": "",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.encrypted": "0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osd_id": "0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.type": "block",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.vdo": "0"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             },
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "type": "block",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "vg_name": "ceph_vg0"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:         }
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:     ],
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:     "1": [
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:         {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "devices": [
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "/dev/loop4"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             ],
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_name": "ceph_lv1",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_size": "21470642176",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "name": "ceph_lv1",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "tags": {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.crush_device_class": "",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.encrypted": "0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osd_id": "1",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.type": "block",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.vdo": "0"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             },
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "type": "block",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "vg_name": "ceph_vg1"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:         }
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:     ],
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:     "2": [
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:         {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "devices": [
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "/dev/loop5"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             ],
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_name": "ceph_lv2",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_size": "21470642176",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "name": "ceph_lv2",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "tags": {
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.cluster_name": "ceph",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.crush_device_class": "",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.encrypted": "0",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osd_id": "2",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.type": "block",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:                 "ceph.vdo": "0"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             },
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "type": "block",
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:             "vg_name": "ceph_vg2"
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:         }
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]:     ]
Nov 24 19:48:58 compute-0 eloquent_knuth[97224]: }
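[The eloquent_knuth JSON above maps OSD ids 0-2 to their backing logical volumes on /dev/loop3-5. A short jq sketch for flattening such output to one "osd.<id> <lv_path>" pair per entry; lvm_list.json is a hypothetical saved copy of the dump:

    # flatten a `ceph-volume lvm list --format json` map to id/path pairs
    jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path)"' lvm_list.json
]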
Nov 24 19:48:58 compute-0 systemd[1]: libpod-fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430.scope: Deactivated successfully.
Nov 24 19:48:58 compute-0 podman[97189]: 2025-11-24 19:48:58.641749844 +0000 UTC m=+1.115766820 container died fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a346db84ff8ea9aeafa1b265f396c3236dace0e83e9cc67b49deacac445e59-merged.mount: Deactivated successfully.
Nov 24 19:48:58 compute-0 podman[97189]: 2025-11-24 19:48:58.70332467 +0000 UTC m=+1.177341626 container remove fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:58 compute-0 systemd[1]: libpod-conmon-fcc8013c7133835afae1c03837216a2bf8f12bd99838d36564f79b795ef0d430.scope: Deactivated successfully.
Nov 24 19:48:58 compute-0 sudo[97017]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:58 compute-0 sudo[97310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:58 compute-0 sudo[97310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:58 compute-0 sudo[97310]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:58 compute-0 sudo[97354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:48:58 compute-0 sudo[97354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:58 compute-0 sudo[97354]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:59 compute-0 sudo[97379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:48:59 compute-0 sudo[97379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:48:59 compute-0 sudo[97379]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:59 compute-0 sudo[97404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:48:59 compute-0 sudo[97404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
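[Unlike the `lvm list` call above, `raw list` enumerates OSDs prepared directly on raw block devices, so on this all-LVM host it should come back empty. The unwrapped equivalent, assuming ceph-volume is available on the host:

    # raw-mode OSDs only; expect an empty map on a node whose OSDs all sit on LVs
    sudo ceph-volume raw list --format json
]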
Nov 24 19:48:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 24 19:48:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2524522783' entity='client.admin' 
Nov 24 19:48:59 compute-0 interesting_hofstadter[97288]: set ssl_option
Nov 24 19:48:59 compute-0 ceph-mon[75677]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:48:59 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2524522783' entity='client.admin' 
Nov 24 19:48:59 compute-0 systemd[1]: libpod-73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce.scope: Deactivated successfully.
Nov 24 19:48:59 compute-0 podman[97272]: 2025-11-24 19:48:59.15183679 +0000 UTC m=+0.854722214 container died 73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce (image=quay.io/ceph/ceph:v18, name=interesting_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-20dcd078afe4d7b27c293e9020ffbfb36f4b784bcfc0c5d3a0730f43a3bfc475-merged.mount: Deactivated successfully.
Nov 24 19:48:59 compute-0 podman[97272]: 2025-11-24 19:48:59.211280598 +0000 UTC m=+0.914165992 container remove 73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce (image=quay.io/ceph/ceph:v18, name=interesting_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 24 19:48:59 compute-0 systemd[1]: libpod-conmon-73335dc12f2990ffd5c4336eaebe439c1b10911e6b0cff6ed71eb5d541d4e4ce.scope: Deactivated successfully.
Nov 24 19:48:59 compute-0 sudo[97269]: pam_unix(sudo:session): session closed for user root
Nov 24 19:48:59 compute-0 sudo[97487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riajoyhqiqaosjllhmbnftxmxofgnhnj ; /usr/bin/python3'
Nov 24 19:48:59 compute-0 sudo[97487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:48:59 compute-0 python3[97494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
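[`orch apply --in-file` hands cephadm's orchestrator the service spec templated into /tmp/ceph_rgw.yml earlier in this log (an RGW spec, judging by the ceph_rgw.yml.j2 source name). A sketch of the same step plus a follow-up check, assuming the spec is bind-mounted into the container as in the command above:

    # apply the spec through the orchestrator module, then confirm the rgw service is scheduled
    ceph orch apply -i /home/ceph_spec.yaml
    ceph orch ls rgw
]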
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.575623735 +0000 UTC m=+0.066201610 container create 160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 24 19:48:59 compute-0 podman[97516]: 2025-11-24 19:48:59.614924172 +0000 UTC m=+0.076392015 container create 06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2 (image=quay.io/ceph/ceph:v18, name=inspiring_goldstine, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 19:48:59 compute-0 systemd[1]: Started libpod-conmon-160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d.scope.
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.546150133 +0000 UTC m=+0.036728048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:59 compute-0 systemd[1]: Started libpod-conmon-06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2.scope.
Nov 24 19:48:59 compute-0 podman[97516]: 2025-11-24 19:48:59.584842183 +0000 UTC m=+0.046310066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.671976547 +0000 UTC m=+0.162554402 container init 160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.682458008 +0000 UTC m=+0.173035833 container start 160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_payne, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.685951773 +0000 UTC m=+0.176529598 container attach 160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:48:59 compute-0 competent_payne[97539]: 167 167
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.690067023 +0000 UTC m=+0.180644878 container died 160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 19:48:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:59 compute-0 systemd[1]: libpod-160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d.scope: Deactivated successfully.
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc512da8a7196f166b29c2ef80462009bac0918bd50d21a9cfa27b397998c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc512da8a7196f166b29c2ef80462009bac0918bd50d21a9cfa27b397998c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc512da8a7196f166b29c2ef80462009bac0918bd50d21a9cfa27b397998c4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:59 compute-0 podman[97516]: 2025-11-24 19:48:59.724851398 +0000 UTC m=+0.186319291 container init 06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2 (image=quay.io/ceph/ceph:v18, name=inspiring_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7bae0f234dacbe6bac6f429b3f0b6c7ecbe74a191d153a8120ecf326752bb3a-merged.mount: Deactivated successfully.
Nov 24 19:48:59 compute-0 podman[97516]: 2025-11-24 19:48:59.734091087 +0000 UTC m=+0.195558930 container start 06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2 (image=quay.io/ceph/ceph:v18, name=inspiring_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:48:59 compute-0 podman[97509]: 2025-11-24 19:48:59.745185475 +0000 UTC m=+0.235763290 container remove 160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_payne, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:48:59 compute-0 podman[97516]: 2025-11-24 19:48:59.756828098 +0000 UTC m=+0.218295951 container attach 06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2 (image=quay.io/ceph/ceph:v18, name=inspiring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:48:59 compute-0 systemd[1]: libpod-conmon-160a3a2d643b1edfdecb7f2efef0b11686e4317aab5deb6efcf210e193b5f23d.scope: Deactivated successfully.
Nov 24 19:48:59 compute-0 podman[97568]: 2025-11-24 19:48:59.915455343 +0000 UTC m=+0.046555602 container create 4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:48:59 compute-0 systemd[1]: Started libpod-conmon-4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4.scope.
Nov 24 19:48:59 compute-0 podman[97568]: 2025-11-24 19:48:59.894289404 +0000 UTC m=+0.025389743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:48:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2129818faed68faa9978b7517f8e89152bbcd31d0273d64dd2a4dff6ce28e23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2129818faed68faa9978b7517f8e89152bbcd31d0273d64dd2a4dff6ce28e23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2129818faed68faa9978b7517f8e89152bbcd31d0273d64dd2a4dff6ce28e23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2129818faed68faa9978b7517f8e89152bbcd31d0273d64dd2a4dff6ce28e23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:00 compute-0 podman[97568]: 2025-11-24 19:49:00.016404888 +0000 UTC m=+0.147505177 container init 4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:00 compute-0 podman[97568]: 2025-11-24 19:49:00.031371991 +0000 UTC m=+0.162472250 container start 4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:00 compute-0 podman[97568]: 2025-11-24 19:49:00.041777391 +0000 UTC m=+0.172877680 container attach 4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:00 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:49:00 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 24 19:49:00 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 24 19:49:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 19:49:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:00 compute-0 inspiring_goldstine[97544]: Scheduled rgw.rgw update...
Nov 24 19:49:00 compute-0 systemd[1]: libpod-06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2.scope: Deactivated successfully.
Nov 24 19:49:00 compute-0 conmon[97544]: conmon 06f79e4e4aca3926aa2a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2.scope/container/memory.events
Nov 24 19:49:00 compute-0 podman[97516]: 2025-11-24 19:49:00.323539927 +0000 UTC m=+0.785007760 container died 06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2 (image=quay.io/ceph/ceph:v18, name=inspiring_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-decc512da8a7196f166b29c2ef80462009bac0918bd50d21a9cfa27b397998c4-merged.mount: Deactivated successfully.
Nov 24 19:49:00 compute-0 podman[97516]: 2025-11-24 19:49:00.377409935 +0000 UTC m=+0.838877778 container remove 06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2 (image=quay.io/ceph/ceph:v18, name=inspiring_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:00 compute-0 systemd[1]: libpod-conmon-06f79e4e4aca3926aa2a6a50e8df11df311dfd914972ee26d04c780bfa484bd2.scope: Deactivated successfully.
Nov 24 19:49:00 compute-0 sudo[97487]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]: {
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "osd_id": 2,
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "type": "bluestore"
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:     },
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "osd_id": 1,
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "type": "bluestore"
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:     },
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "osd_id": 0,
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:         "type": "bluestore"
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]:     }
Nov 24 19:49:01 compute-0 hopeful_proskuriakova[97584]: }
Nov 24 19:49:01 compute-0 systemd[1]: libpod-4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4.scope: Deactivated successfully.
Nov 24 19:49:01 compute-0 podman[97568]: 2025-11-24 19:49:01.16116464 +0000 UTC m=+1.292264979 container died 4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:49:01 compute-0 systemd[1]: libpod-4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4.scope: Consumed 1.134s CPU time.
Nov 24 19:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2129818faed68faa9978b7517f8e89152bbcd31d0273d64dd2a4dff6ce28e23-merged.mount: Deactivated successfully.
Nov 24 19:49:01 compute-0 podman[97568]: 2025-11-24 19:49:01.246905476 +0000 UTC m=+1.378005775 container remove 4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:01 compute-0 systemd[1]: libpod-conmon-4c4a9a01e9b37f0b76e33e7773386f63766c5fb2b0c7fe3fd9eada80bb4bc9c4.scope: Deactivated successfully.
Nov 24 19:49:01 compute-0 sudo[97404]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:01 compute-0 ceph-mon[75677]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:01 compute-0 ceph-mon[75677]: from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:49:01 compute-0 ceph-mon[75677]: Saving service rgw.rgw spec with placement compute-0
Nov 24 19:49:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:01 compute-0 sudo[97705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:01 compute-0 sudo[97705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:01 compute-0 sudo[97705]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 sudo[97759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:49:01 compute-0 sudo[97759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:01 compute-0 sudo[97759]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 sudo[97790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:01 compute-0 sudo[97790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:01 compute-0 sudo[97790]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 python3[97770]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:49:01 compute-0 sudo[97815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:01 compute-0 sudo[97815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:01 compute-0 sudo[97815]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:01 compute-0 sudo[97861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:01 compute-0 sudo[97861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:01 compute-0 sudo[97861]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:01 compute-0 sudo[97908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:49:01 compute-0 sudo[97908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:02 compute-0 python3[97960]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013741.274154-37794-60653840415086/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:49:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:02 compute-0 sudo[98074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvcrjmbnocvhokwncirkwvizhfkfaiho ; /usr/bin/python3'
Nov 24 19:49:02 compute-0 sudo[98074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:02 compute-0 podman[98080]: 2025-11-24 19:49:02.516803802 +0000 UTC m=+0.086639871 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:49:02 compute-0 python3[98081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:02 compute-0 podman[98080]: 2025-11-24 19:49:02.638012071 +0000 UTC m=+0.207848080 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 19:49:02 compute-0 podman[98102]: 2025-11-24 19:49:02.669890449 +0000 UTC m=+0.069232363 container create f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918 (image=quay.io/ceph/ceph:v18, name=recursing_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 19:49:02 compute-0 systemd[1]: Started libpod-conmon-f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918.scope.
Nov 24 19:49:02 compute-0 podman[98102]: 2025-11-24 19:49:02.641162295 +0000 UTC m=+0.040504249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2ba4c8ef981837bed2577b1f8578e44c848e0691d2df127b4bd937696d56/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2ba4c8ef981837bed2577b1f8578e44c848e0691d2df127b4bd937696d56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2ba4c8ef981837bed2577b1f8578e44c848e0691d2df127b4bd937696d56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:02 compute-0 podman[98102]: 2025-11-24 19:49:02.778065147 +0000 UTC m=+0.177407091 container init f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918 (image=quay.io/ceph/ceph:v18, name=recursing_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 19:49:02 compute-0 podman[98102]: 2025-11-24 19:49:02.790124722 +0000 UTC m=+0.189466596 container start f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918 (image=quay.io/ceph/ceph:v18, name=recursing_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 19:49:02 compute-0 podman[98102]: 2025-11-24 19:49:02.794024196 +0000 UTC m=+0.193366060 container attach f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918 (image=quay.io/ceph/ceph:v18, name=recursing_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:49:03 compute-0 sudo[97908]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev aa327d0c-42f5-478e-be2e-69439e3edb09 does not exist
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 93d2fa93-4f86-4549-982b-0bcb63a89022 does not exist
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev afe62cd2-c7ef-446b-a5a0-e7f0386a4d64 does not exist
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 19:49:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75673]: 2025-11-24T19:49:03.405+0000 7fba94928640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e2 new map
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T19:49:03.407355+0000
                                           modified        2025-11-24T19:49:03.407384+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 24 19:49:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 19:49:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:03 compute-0 ceph-mgr[75975]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 19:49:03 compute-0 sudo[98234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:03 compute-0 sudo[98234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:03 compute-0 sudo[98234]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:03 compute-0 systemd[1]: libpod-f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918.scope: Deactivated successfully.
Nov 24 19:49:03 compute-0 podman[98102]: 2025-11-24 19:49:03.452971735 +0000 UTC m=+0.852313639 container died f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918 (image=quay.io/ceph/ceph:v18, name=recursing_goldwasser, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 19:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-995e2ba4c8ef981837bed2577b1f8578e44c848e0691d2df127b4bd937696d56-merged.mount: Deactivated successfully.
Nov 24 19:49:03 compute-0 podman[98102]: 2025-11-24 19:49:03.516546255 +0000 UTC m=+0.915888169 container remove f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918 (image=quay.io/ceph/ceph:v18, name=recursing_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 19:49:03 compute-0 systemd[1]: libpod-conmon-f07431d5bcea51a7476830236bdef6f4be040bd5d742389a23dd73df55aee918.scope: Deactivated successfully.
Nov 24 19:49:03 compute-0 sudo[98262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:03 compute-0 sudo[98262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:03 compute-0 sudo[98074]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:03 compute-0 sudo[98262]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:03 compute-0 sudo[98298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:03 compute-0 sudo[98298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:03 compute-0 sudo[98298]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:03 compute-0 sudo[98367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcziyuqfuifmvikabtkeiucptmyoqtgt ; /usr/bin/python3'
Nov 24 19:49:03 compute-0 sudo[98367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:03 compute-0 sudo[98327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:49:03 compute-0 sudo[98327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:03 compute-0 python3[98371]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:03 compute-0 podman[98383]: 2025-11-24 19:49:03.971348074 +0000 UTC m=+0.058381281 container create 68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33 (image=quay.io/ceph/ceph:v18, name=festive_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 19:49:04 compute-0 systemd[1]: Started libpod-conmon-68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33.scope.
Nov 24 19:49:04 compute-0 podman[98383]: 2025-11-24 19:49:03.953523525 +0000 UTC m=+0.040556712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be971d9501302113484efb631998fa316b4a730a81e7403e68d1602caea852/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be971d9501302113484efb631998fa316b4a730a81e7403e68d1602caea852/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66be971d9501302113484efb631998fa316b4a730a81e7403e68d1602caea852/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 podman[98383]: 2025-11-24 19:49:04.090658892 +0000 UTC m=+0.177692079 container init 68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33 (image=quay.io/ceph/ceph:v18, name=festive_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:04 compute-0 podman[98383]: 2025-11-24 19:49:04.101239757 +0000 UTC m=+0.188272964 container start 68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33 (image=quay.io/ceph/ceph:v18, name=festive_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:49:04 compute-0 podman[98383]: 2025-11-24 19:49:04.104933836 +0000 UTC m=+0.191966993 container attach 68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33 (image=quay.io/ceph/ceph:v18, name=festive_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.281891304 +0000 UTC m=+0.061931046 container create d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:04 compute-0 systemd[1]: Started libpod-conmon-d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421.scope.
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mon[75677]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 19:49:04 compute-0 ceph-mon[75677]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 19:49:04 compute-0 ceph-mon[75677]: osdmap e29: 3 total, 3 up, 3 in
Nov 24 19:49:04 compute-0 ceph-mon[75677]: fsmap cephfs:0
Nov 24 19:49:04 compute-0 ceph-mon[75677]: Saving service mds.cephfs spec with placement compute-0
Nov 24 19:49:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.257825048 +0000 UTC m=+0.037864870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.374111204 +0000 UTC m=+0.154151036 container init d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.379030937 +0000 UTC m=+0.159070719 container start d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:04 compute-0 suspicious_fermi[98450]: 167 167
Nov 24 19:49:04 compute-0 systemd[1]: libpod-d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421.scope: Deactivated successfully.
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.388121391 +0000 UTC m=+0.168161223 container attach d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.388641975 +0000 UTC m=+0.168681757 container died d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cad148427bc99189230ae9249d66f8688069bae224c9cfc874b95df69183d19c-merged.mount: Deactivated successfully.
Nov 24 19:49:04 compute-0 podman[98434]: 2025-11-24 19:49:04.440028377 +0000 UTC m=+0.220068159 container remove d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:04 compute-0 systemd[1]: libpod-conmon-d02d3d0adfea34a0050b54d8f6ed5379bd248b3efcc4306bcecf1f2cf78d4421.scope: Deactivated successfully.
Nov 24 19:49:04 compute-0 podman[98494]: 2025-11-24 19:49:04.672689469 +0000 UTC m=+0.066546526 container create 8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:49:04 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:49:04 compute-0 ceph-mgr[75975]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 24 19:49:04 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 24 19:49:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 19:49:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:04 compute-0 festive_driscoll[98416]: Scheduled mds.cephfs update...
Nov 24 19:49:04 compute-0 systemd[1]: Started libpod-conmon-8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9.scope.
Nov 24 19:49:04 compute-0 podman[98494]: 2025-11-24 19:49:04.645374793 +0000 UTC m=+0.039231850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:04 compute-0 systemd[1]: libpod-68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33.scope: Deactivated successfully.
Nov 24 19:49:04 compute-0 podman[98383]: 2025-11-24 19:49:04.746087048 +0000 UTC m=+0.833120255 container died 68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33 (image=quay.io/ceph/ceph:v18, name=festive_driscoll, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f059049a50de5a3dccc2a19907ae98c3f62d943cb7eaf45a541fd834c53e44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f059049a50de5a3dccc2a19907ae98c3f62d943cb7eaf45a541fd834c53e44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f059049a50de5a3dccc2a19907ae98c3f62d943cb7eaf45a541fd834c53e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f059049a50de5a3dccc2a19907ae98c3f62d943cb7eaf45a541fd834c53e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f059049a50de5a3dccc2a19907ae98c3f62d943cb7eaf45a541fd834c53e44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:04 compute-0 podman[98494]: 2025-11-24 19:49:04.790737027 +0000 UTC m=+0.184594144 container init 8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-66be971d9501302113484efb631998fa316b4a730a81e7403e68d1602caea852-merged.mount: Deactivated successfully.
Nov 24 19:49:04 compute-0 podman[98494]: 2025-11-24 19:49:04.806649845 +0000 UTC m=+0.200506912 container start 8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 19:49:04 compute-0 podman[98494]: 2025-11-24 19:49:04.811379953 +0000 UTC m=+0.205237080 container attach 8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gould, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:04 compute-0 podman[98383]: 2025-11-24 19:49:04.836926443 +0000 UTC m=+0.923959650 container remove 68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33 (image=quay.io/ceph/ceph:v18, name=festive_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 19:49:04 compute-0 systemd[1]: libpod-conmon-68d2718262b830a4393f1b4d23e2d6772cc1d94eefd243e87dad1cf9608cda33.scope: Deactivated successfully.
Nov 24 19:49:04 compute-0 sudo[98367]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:05 compute-0 ceph-mon[75677]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:05 compute-0 sudo[98610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viemshhmuncaxlztayaygunrphthttmi ; /usr/bin/python3'
Nov 24 19:49:05 compute-0 sudo[98610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:05 compute-0 python3[98614]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 19:49:05 compute-0 sudo[98610]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:05 compute-0 practical_gould[98512]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:49:05 compute-0 practical_gould[98512]: --> relative data size: 1.0
Nov 24 19:49:05 compute-0 practical_gould[98512]: --> All data devices are unavailable
Nov 24 19:49:05 compute-0 systemd[1]: libpod-8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9.scope: Deactivated successfully.
Nov 24 19:49:05 compute-0 systemd[1]: libpod-8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9.scope: Consumed 1.057s CPU time.
Nov 24 19:49:05 compute-0 podman[98494]: 2025-11-24 19:49:05.929262613 +0000 UTC m=+1.323119670 container died 8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-83f059049a50de5a3dccc2a19907ae98c3f62d943cb7eaf45a541fd834c53e44-merged.mount: Deactivated successfully.
Nov 24 19:49:05 compute-0 sudo[98708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgulnruxpcwvdrntwftjyiqjxlnkwhdg ; /usr/bin/python3'
Nov 24 19:49:05 compute-0 sudo[98708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:05 compute-0 podman[98494]: 2025-11-24 19:49:05.997295934 +0000 UTC m=+1.391152971 container remove 8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gould, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:06 compute-0 systemd[1]: libpod-conmon-8fd18a4d1db5b86d13cc59a8b66431a114b49abf472bf992fe2477d219227ea9.scope: Deactivated successfully.
Nov 24 19:49:06 compute-0 sudo[98327]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:06 compute-0 sudo[98715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:06 compute-0 sudo[98715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:06 compute-0 sudo[98715]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:06 compute-0 python3[98714]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764013745.3421283-37824-182935334163290/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=fa21d6f168c8a77ce51e23081d832e1507915a8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:49:06 compute-0 sudo[98708]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:06 compute-0 sudo[98740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:06 compute-0 sudo[98740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:06 compute-0 sudo[98740]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:06 compute-0 sudo[98789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:06 compute-0 sudo[98789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:06 compute-0 sudo[98789]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:06 compute-0 sudo[98814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:49:06 compute-0 sudo[98814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:06 compute-0 ceph-mon[75677]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 19:49:06 compute-0 ceph-mon[75677]: Saving service mds.cephfs spec with placement compute-0
Nov 24 19:49:06 compute-0 sudo[98864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjlwblyybvunvoojjhzqafqcxqobftd ; /usr/bin/python3'
Nov 24 19:49:06 compute-0 sudo[98864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:06 compute-0 python3[98872]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:06 compute-0 podman[98900]: 2025-11-24 19:49:06.812771671 +0000 UTC m=+0.051635449 container create 041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c (image=quay.io/ceph/ceph:v18, name=bold_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:06 compute-0 systemd[1]: Started libpod-conmon-041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c.scope.
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.872981906 +0000 UTC m=+0.066950378 container create 50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:06 compute-0 podman[98900]: 2025-11-24 19:49:06.787101736 +0000 UTC m=+0.025965614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbe3ce7191b1996bf14178181940108962cb9727d5c44ac6c7a63729e1316f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbe3ce7191b1996bf14178181940108962cb9727d5c44ac6c7a63729e1316f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:06 compute-0 systemd[1]: Started libpod-conmon-50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664.scope.
Nov 24 19:49:06 compute-0 podman[98900]: 2025-11-24 19:49:06.910538773 +0000 UTC m=+0.149402641 container init 041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c (image=quay.io/ceph/ceph:v18, name=bold_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:06 compute-0 podman[98900]: 2025-11-24 19:49:06.917543632 +0000 UTC m=+0.156407400 container start 041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c (image=quay.io/ceph/ceph:v18, name=bold_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 19:49:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:06 compute-0 podman[98900]: 2025-11-24 19:49:06.92191328 +0000 UTC m=+0.160777138 container attach 041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c (image=quay.io/ceph/ceph:v18, name=bold_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.929206278 +0000 UTC m=+0.123174790 container init 50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.937121046 +0000 UTC m=+0.131089558 container start 50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.94078775 +0000 UTC m=+0.134756232 container attach 50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 19:49:06 compute-0 trusting_robinson[98936]: 167 167
Nov 24 19:49:06 compute-0 systemd[1]: libpod-50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664.scope: Deactivated successfully.
Nov 24 19:49:06 compute-0 conmon[98936]: conmon 50f9751e2ab6dcff0c5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664.scope/container/memory.events
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.944363283 +0000 UTC m=+0.138331795 container died 50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.853386793 +0000 UTC m=+0.047355345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-01977d86bca374e9b9bfba49bbeea376ea91e4d5e9a20a319f049f0c1a77e12f-merged.mount: Deactivated successfully.
Nov 24 19:49:06 compute-0 podman[98914]: 2025-11-24 19:49:06.997934921 +0000 UTC m=+0.191903433 container remove 50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:07 compute-0 systemd[1]: libpod-conmon-50f9751e2ab6dcff0c5d1512fe5f6c6e2fcd36e79070b3b8785f0f61ff12a664.scope: Deactivated successfully.
Nov 24 19:49:07 compute-0 podman[98962]: 2025-11-24 19:49:07.224072105 +0000 UTC m=+0.066308458 container create a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 19:49:07 compute-0 systemd[1]: Started libpod-conmon-a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8.scope.
Nov 24 19:49:07 compute-0 podman[98962]: 2025-11-24 19:49:07.196228672 +0000 UTC m=+0.038465065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feefffe57fdad3f65b7d4634094dac7a2657c3a77a89e5343274702d1bb12e67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feefffe57fdad3f65b7d4634094dac7a2657c3a77a89e5343274702d1bb12e67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feefffe57fdad3f65b7d4634094dac7a2657c3a77a89e5343274702d1bb12e67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feefffe57fdad3f65b7d4634094dac7a2657c3a77a89e5343274702d1bb12e67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:07 compute-0 podman[98962]: 2025-11-24 19:49:07.334381741 +0000 UTC m=+0.176618154 container init a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:07 compute-0 podman[98962]: 2025-11-24 19:49:07.348864954 +0000 UTC m=+0.191101297 container start a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:07 compute-0 podman[98962]: 2025-11-24 19:49:07.353015404 +0000 UTC m=+0.195251807 container attach a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 19:49:07 compute-0 ceph-mon[75677]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 24 19:49:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1181907633' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 19:49:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1181907633' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 19:49:07 compute-0 systemd[1]: libpod-041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c.scope: Deactivated successfully.
Nov 24 19:49:07 compute-0 podman[98900]: 2025-11-24 19:49:07.55843716 +0000 UTC m=+0.797300978 container died 041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c (image=quay.io/ceph/ceph:v18, name=bold_cerf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcbe3ce7191b1996bf14178181940108962cb9727d5c44ac6c7a63729e1316f9-merged.mount: Deactivated successfully.
Nov 24 19:49:07 compute-0 podman[98900]: 2025-11-24 19:49:07.612424721 +0000 UTC m=+0.851288529 container remove 041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c (image=quay.io/ceph/ceph:v18, name=bold_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:07 compute-0 systemd[1]: libpod-conmon-041aa58f12ca694f638020d0763aaff49ba71639a876824fc140b8140d2dd84c.scope: Deactivated successfully.
Nov 24 19:49:07 compute-0 sudo[98864]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:08 compute-0 great_mclean[98997]: {
Nov 24 19:49:08 compute-0 great_mclean[98997]:     "0": [
Nov 24 19:49:08 compute-0 great_mclean[98997]:         {
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "devices": [
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "/dev/loop3"
Nov 24 19:49:08 compute-0 great_mclean[98997]:             ],
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_name": "ceph_lv0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_size": "21470642176",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "name": "ceph_lv0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "tags": {
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.crush_device_class": "",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.encrypted": "0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osd_id": "0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.type": "block",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.vdo": "0"
Nov 24 19:49:08 compute-0 great_mclean[98997]:             },
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "type": "block",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "vg_name": "ceph_vg0"
Nov 24 19:49:08 compute-0 great_mclean[98997]:         }
Nov 24 19:49:08 compute-0 great_mclean[98997]:     ],
Nov 24 19:49:08 compute-0 great_mclean[98997]:     "1": [
Nov 24 19:49:08 compute-0 great_mclean[98997]:         {
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "devices": [
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "/dev/loop4"
Nov 24 19:49:08 compute-0 great_mclean[98997]:             ],
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_name": "ceph_lv1",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_size": "21470642176",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "name": "ceph_lv1",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "tags": {
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.crush_device_class": "",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.encrypted": "0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osd_id": "1",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.type": "block",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.vdo": "0"
Nov 24 19:49:08 compute-0 great_mclean[98997]:             },
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "type": "block",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "vg_name": "ceph_vg1"
Nov 24 19:49:08 compute-0 great_mclean[98997]:         }
Nov 24 19:49:08 compute-0 great_mclean[98997]:     ],
Nov 24 19:49:08 compute-0 great_mclean[98997]:     "2": [
Nov 24 19:49:08 compute-0 great_mclean[98997]:         {
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "devices": [
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "/dev/loop5"
Nov 24 19:49:08 compute-0 great_mclean[98997]:             ],
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_name": "ceph_lv2",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_size": "21470642176",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "name": "ceph_lv2",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "tags": {
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.crush_device_class": "",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.encrypted": "0",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osd_id": "2",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.type": "block",
Nov 24 19:49:08 compute-0 great_mclean[98997]:                 "ceph.vdo": "0"
Nov 24 19:49:08 compute-0 great_mclean[98997]:             },
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "type": "block",
Nov 24 19:49:08 compute-0 great_mclean[98997]:             "vg_name": "ceph_vg2"
Nov 24 19:49:08 compute-0 great_mclean[98997]:         }
Nov 24 19:49:08 compute-0 great_mclean[98997]:     ]
Nov 24 19:49:08 compute-0 great_mclean[98997]: }
Nov 24 19:49:08 compute-0 systemd[1]: libpod-a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8.scope: Deactivated successfully.
Nov 24 19:49:08 compute-0 podman[98962]: 2025-11-24 19:49:08.180541958 +0000 UTC m=+1.022778291 container died a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:49:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-feefffe57fdad3f65b7d4634094dac7a2657c3a77a89e5343274702d1bb12e67-merged.mount: Deactivated successfully.
Nov 24 19:49:08 compute-0 sudo[99050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlcaznondiilaaglyquvltthmssnehez ; /usr/bin/python3'
Nov 24 19:49:08 compute-0 sudo[99050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:08 compute-0 podman[98962]: 2025-11-24 19:49:08.248482586 +0000 UTC m=+1.090718899 container remove a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 19:49:08 compute-0 systemd[1]: libpod-conmon-a104dfe622792d431ef6ede081749dc43adcdc2812c53cb564668007aca461d8.scope: Deactivated successfully.
Nov 24 19:49:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:08 compute-0 sudo[98814]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:08 compute-0 sudo[99059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:08 compute-0 sudo[99059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:08 compute-0 sudo[99059]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:08 compute-0 python3[99058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:08 compute-0 sudo[99084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:08 compute-0 sudo[99084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:08 compute-0 sudo[99084]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:08 compute-0 podman[99103]: 2025-11-24 19:49:08.447841101 +0000 UTC m=+0.042716298 container create 2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5 (image=quay.io/ceph/ceph:v18, name=mystifying_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1181907633' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 19:49:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1181907633' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 19:49:08 compute-0 systemd[1]: Started libpod-conmon-2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5.scope.
Nov 24 19:49:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba01731660adf7e497478b0990b498e27bcfabe425e4ebc399e2b042d5587306/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba01731660adf7e497478b0990b498e27bcfabe425e4ebc399e2b042d5587306/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:08 compute-0 sudo[99124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:08 compute-0 podman[99103]: 2025-11-24 19:49:08.425412899 +0000 UTC m=+0.020288106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:08 compute-0 sudo[99124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:08 compute-0 podman[99103]: 2025-11-24 19:49:08.533814525 +0000 UTC m=+0.128689742 container init 2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5 (image=quay.io/ceph/ceph:v18, name=mystifying_cerf, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:08 compute-0 sudo[99124]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:08 compute-0 podman[99103]: 2025-11-24 19:49:08.547401651 +0000 UTC m=+0.142276868 container start 2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5 (image=quay.io/ceph/ceph:v18, name=mystifying_cerf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:49:08 compute-0 podman[99103]: 2025-11-24 19:49:08.551820399 +0000 UTC m=+0.146695646 container attach 2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5 (image=quay.io/ceph/ceph:v18, name=mystifying_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:49:08 compute-0 sudo[99155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:49:08 compute-0 sudo[99155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.062524458 +0000 UTC m=+0.066457673 container create 0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:09 compute-0 systemd[1]: Started libpod-conmon-0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a.scope.
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.03481069 +0000 UTC m=+0.038743965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.157475613 +0000 UTC m=+0.161408828 container init 0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.167237938 +0000 UTC m=+0.171171143 container start 0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.17115651 +0000 UTC m=+0.175089725 container attach 0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:09 compute-0 priceless_wilbur[99252]: 167 167
Nov 24 19:49:09 compute-0 systemd[1]: libpod-0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a.scope: Deactivated successfully.
Nov 24 19:49:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 19:49:09 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543239208' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.173313298 +0000 UTC m=+0.177246473 container died 0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:09 compute-0 mystifying_cerf[99147]: 
Nov 24 19:49:09 compute-0 mystifying_cerf[99147]: {"fsid":"05e060a3-406b-57f0-89d2-ec35f5b09305","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":147,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":29,"num_osds":3,"num_up_osds":3,"osd_up_since":1764013719,"num_in_osds":3,"osd_in_since":1764013690,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83759104,"bytes_avail":64328167424,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T19:48:26.276831+0000","services":{}},"progress_events":{}}
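[Editor's note] The blob above is the JSON form of `ceph status` as printed by the mystifying_cerf client container: HEALTH_ERR is driven by the MDS_ALL_DOWN check (the CephFS filesystem has no MDS up yet), alongside an MDS_UP_LESS_THAN_MAX warning. A minimal sketch pulling the health checks out of such a payload (the status.json filename is illustrative, assuming the log line's JSON was saved to disk):

    import json

    # Parse a `ceph status --format json` payload like the one logged above.
    with open("status.json") as f:      # hypothetical capture of the log line
        status = json.load(f)

    print(status["health"]["status"])   # HEALTH_ERR
    for name, check in status["health"]["checks"].items():
        print(name, check["severity"], check["summary"]["message"])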
Nov 24 19:49:09 compute-0 systemd[1]: libpod-2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5.scope: Deactivated successfully.
Nov 24 19:49:09 compute-0 podman[99103]: 2025-11-24 19:49:09.198198028 +0000 UTC m=+0.793073245 container died 2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5 (image=quay.io/ceph/ceph:v18, name=mystifying_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f02993c06c4da218b2efbe8be97296d81b7c5b47001ba5dd46f09d141f604b2-merged.mount: Deactivated successfully.
Nov 24 19:49:09 compute-0 podman[99236]: 2025-11-24 19:49:09.226952088 +0000 UTC m=+0.230885253 container remove 0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 19:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba01731660adf7e497478b0990b498e27bcfabe425e4ebc399e2b042d5587306-merged.mount: Deactivated successfully.
Nov 24 19:49:09 compute-0 podman[99103]: 2025-11-24 19:49:09.253366636 +0000 UTC m=+0.848241823 container remove 2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5 (image=quay.io/ceph/ceph:v18, name=mystifying_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 19:49:09 compute-0 systemd[1]: libpod-conmon-2d33de25895f58159835d9ef740442fb983ad74e0adc24a258a9a4de98a720e5.scope: Deactivated successfully.
Nov 24 19:49:09 compute-0 systemd[1]: libpod-conmon-0fa1e85d81b67e267ee5f74dc7a8331474363c52be51fa802858e921da1bcf0a.scope: Deactivated successfully.
Nov 24 19:49:09 compute-0 sudo[99050]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:09 compute-0 podman[99288]: 2025-11-24 19:49:09.465260484 +0000 UTC m=+0.067283639 container create 344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 19:49:09 compute-0 ceph-mon[75677]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/543239208' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:49:09 compute-0 sudo[99325]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkxvqnfjmcioyotmhnnksjaqjldndkxd ; /usr/bin/python3'
Nov 24 19:49:09 compute-0 sudo[99325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:09 compute-0 systemd[1]: Started libpod-conmon-344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c.scope.
Nov 24 19:49:09 compute-0 podman[99288]: 2025-11-24 19:49:09.439750515 +0000 UTC m=+0.041773700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25468fb66f004dfc1328fc1a130fc061e4a3567ffc109af2a2b59c7b18cc50a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25468fb66f004dfc1328fc1a130fc061e4a3567ffc109af2a2b59c7b18cc50a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25468fb66f004dfc1328fc1a130fc061e4a3567ffc109af2a2b59c7b18cc50a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25468fb66f004dfc1328fc1a130fc061e4a3567ffc109af2a2b59c7b18cc50a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:09 compute-0 podman[99288]: 2025-11-24 19:49:09.5564355 +0000 UTC m=+0.158458665 container init 344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 19:49:09 compute-0 podman[99288]: 2025-11-24 19:49:09.569667045 +0000 UTC m=+0.171690200 container start 344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 19:49:09 compute-0 podman[99288]: 2025-11-24 19:49:09.57491637 +0000 UTC m=+0.176939535 container attach 344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:49:09 compute-0 python3[99330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
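[Editor's note] This Ansible command task spins up a throwaway ceph client container instead of relying on a host-installed CLI: host networking plus the bind-mounted /etc/ceph give the containerized `ceph` binary everything it needs to reach the mon. The same one-shot call could be made from Python roughly as follows (a sketch mirroring the logged arguments; it assumes podman and the admin keyring are present on the host, as they are in this log):

    import json
    import subprocess

    # One-shot `ceph mon dump --format json` via a disposable client
    # container, mirroring the podman invocation logged above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "05e060a3-406b-57f0-89d2-ec35f5b09305",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "mon", "dump", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(json.loads(out)["epoch"])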
Nov 24 19:49:09 compute-0 podman[99336]: 2025-11-24 19:49:09.775642507 +0000 UTC m=+0.065852213 container create e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c (image=quay.io/ceph/ceph:v18, name=lucid_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:09 compute-0 systemd[1]: Started libpod-conmon-e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c.scope.
Nov 24 19:49:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa734a57250e14adc71b38723ddeb2b92b00caf7dbfc982c3fef4ece4e104e81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa734a57250e14adc71b38723ddeb2b92b00caf7dbfc982c3fef4ece4e104e81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:09 compute-0 podman[99336]: 2025-11-24 19:49:09.748610351 +0000 UTC m=+0.038820107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:09 compute-0 podman[99336]: 2025-11-24 19:49:09.84916478 +0000 UTC m=+0.139374486 container init e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c (image=quay.io/ceph/ceph:v18, name=lucid_einstein, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 19:49:09 compute-0 podman[99336]: 2025-11-24 19:49:09.856627694 +0000 UTC m=+0.146837390 container start e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c (image=quay.io/ceph/ceph:v18, name=lucid_einstein, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:09 compute-0 podman[99336]: 2025-11-24 19:49:09.862199609 +0000 UTC m=+0.152409355 container attach e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c (image=quay.io/ceph/ceph:v18, name=lucid_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 19:49:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352347513' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 19:49:10 compute-0 lucid_einstein[99351]: 
Nov 24 19:49:10 compute-0 lucid_einstein[99351]: {"epoch":1,"fsid":"05e060a3-406b-57f0-89d2-ec35f5b09305","modified":"2025-11-24T19:46:36.011915Z","created":"2025-11-24T19:46:36.011915Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 24 19:49:10 compute-0 lucid_einstein[99351]: dumped monmap epoch 1
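[Editor's note] The monmap dump above confirms a single monitor, compute-0, answering on both messenger protocols: v2 on 192.168.122.100:3300 and legacy v1 on 6789. A sketch extracting those endpoints from the JSON (monmap.json is an illustrative filename for the logged payload):

    import json

    # List each monitor's v1/v2 endpoints from a monmap JSON dump.
    with open("monmap.json") as f:      # hypothetical capture of the log line
        monmap = json.load(f)

    for mon in monmap["mons"]:
        for ep in mon["public_addrs"]["addrvec"]:
            print(mon["name"], ep["type"], ep["addr"])
    # compute-0 v2 192.168.122.100:3300
    # compute-0 v1 192.168.122.100:6789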
Nov 24 19:49:10 compute-0 systemd[1]: libpod-e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c.scope: Deactivated successfully.
Nov 24 19:49:10 compute-0 podman[99336]: 2025-11-24 19:49:10.463491066 +0000 UTC m=+0.753700732 container died e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c (image=quay.io/ceph/ceph:v18, name=lucid_einstein, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:10 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/352347513' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 19:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa734a57250e14adc71b38723ddeb2b92b00caf7dbfc982c3fef4ece4e104e81-merged.mount: Deactivated successfully.
Nov 24 19:49:10 compute-0 podman[99336]: 2025-11-24 19:49:10.51407438 +0000 UTC m=+0.804284046 container remove e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c (image=quay.io/ceph/ceph:v18, name=lucid_einstein, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:49:10 compute-0 systemd[1]: libpod-conmon-e145e9a765b730def48e498d0d1d17aa4ac4ae6c42ef6944c5334fe9761bdf2c.scope: Deactivated successfully.
Nov 24 19:49:10 compute-0 sudo[99325]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:10 compute-0 lucid_poincare[99331]: {
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "osd_id": 2,
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "type": "bluestore"
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:     },
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "osd_id": 1,
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "type": "bluestore"
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:     },
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "osd_id": 0,
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:         "type": "bluestore"
Nov 24 19:49:10 compute-0 lucid_poincare[99331]:     }
Nov 24 19:49:10 compute-0 lucid_poincare[99331]: }
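[Editor's note] This pretty-printed block is the `ceph-volume raw list --format json` result from the lucid_poincare container: three bluestore OSDs, ids 0-2, each backed by an LVM logical volume and all carrying the cluster fsid. A sketch folding it into an osd_id-to-device map (raw_list.json is an illustrative filename for the output above):

    import json

    # Map osd_id -> device from a `ceph-volume raw list --format json` payload.
    with open("raw_list.json") as f:    # hypothetical capture of the output above
        devices = json.load(f)

    by_osd = {d["osd_id"]: d["device"] for d in devices.values()}
    for osd_id in sorted(by_osd):
        print(f"osd.{osd_id} -> {by_osd[osd_id]}")
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0, and so on.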
Nov 24 19:49:10 compute-0 systemd[1]: libpod-344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c.scope: Deactivated successfully.
Nov 24 19:49:10 compute-0 podman[99288]: 2025-11-24 19:49:10.588176891 +0000 UTC m=+1.190200076 container died 344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 19:49:10 compute-0 systemd[1]: libpod-344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c.scope: Consumed 1.015s CPU time.
Nov 24 19:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25468fb66f004dfc1328fc1a130fc061e4a3567ffc109af2a2b59c7b18cc50a-merged.mount: Deactivated successfully.
Nov 24 19:49:10 compute-0 podman[99288]: 2025-11-24 19:49:10.657333068 +0000 UTC m=+1.259356223 container remove 344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:49:10 compute-0 systemd[1]: libpod-conmon-344e1681e58b97fd4120de98b029b37d8957acd62dbe7fa80fbd38f91fc9983c.scope: Deactivated successfully.
Nov 24 19:49:10 compute-0 sudo[99155]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:10 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev efd66f7c-67df-472c-98f4-ce619b07aa95 (Updating rgw.rgw deployment (+1 -> 1))
Nov 24 19:49:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dgkdrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dgkdrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dgkdrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 19:49:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:10 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dgkdrf on compute-0
Nov 24 19:49:10 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dgkdrf on compute-0
Nov 24 19:49:10 compute-0 sudo[99426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:10 compute-0 sudo[99426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:10 compute-0 sudo[99426]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:10 compute-0 sudo[99451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:10 compute-0 sudo[99451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:10 compute-0 sudo[99451]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:10 compute-0 sudo[99500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrsfrnsbeyxpfgrocxngqcgoammsyzqt ; /usr/bin/python3'
Nov 24 19:49:10 compute-0 sudo[99500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:10 compute-0 sudo[99499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:10 compute-0 sudo[99499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:10 compute-0 sudo[99499]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:11 compute-0 sudo[99527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:49:11 compute-0 sudo[99527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:11 compute-0 python3[99519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:11 compute-0 podman[99552]: 2025-11-24 19:49:11.195700613 +0000 UTC m=+0.068156806 container create 00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8 (image=quay.io/ceph/ceph:v18, name=eloquent_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 19:49:11 compute-0 systemd[1]: Started libpod-conmon-00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8.scope.
Nov 24 19:49:11 compute-0 podman[99552]: 2025-11-24 19:49:11.173554239 +0000 UTC m=+0.046010462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b094c4bf385fc02c14a927d377c4258c8e3296d301e9c36d54018f2b4ea4dcbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b094c4bf385fc02c14a927d377c4258c8e3296d301e9c36d54018f2b4ea4dcbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:11 compute-0 podman[99552]: 2025-11-24 19:49:11.309934292 +0000 UTC m=+0.182390565 container init 00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8 (image=quay.io/ceph/ceph:v18, name=eloquent_morse, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:11 compute-0 podman[99552]: 2025-11-24 19:49:11.316201008 +0000 UTC m=+0.188657191 container start 00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8 (image=quay.io/ceph/ceph:v18, name=eloquent_morse, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:11 compute-0 podman[99552]: 2025-11-24 19:49:11.320000808 +0000 UTC m=+0.192457071 container attach 00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8 (image=quay.io/ceph/ceph:v18, name=eloquent_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.439617584 +0000 UTC m=+0.054534219 container create d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:11 compute-0 systemd[1]: Started libpod-conmon-d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e.scope.
Nov 24 19:49:11 compute-0 ceph-mon[75677]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dgkdrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 19:49:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dgkdrf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 19:49:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.408365175 +0000 UTC m=+0.023281850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.536999165 +0000 UTC m=+0.151915780 container init d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.547296738 +0000 UTC m=+0.162213353 container start d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:11 compute-0 bold_haibt[99627]: 167 167
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.551369415 +0000 UTC m=+0.166286030 container attach d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 19:49:11 compute-0 systemd[1]: libpod-d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e.scope: Deactivated successfully.
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.552560943 +0000 UTC m=+0.167477558 container died d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_haibt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a861cc9b19f1b3f78754047303c502cc97214fe8273a690993a5043484a50c-merged.mount: Deactivated successfully.
Nov 24 19:49:11 compute-0 podman[99611]: 2025-11-24 19:49:11.598727639 +0000 UTC m=+0.213644244 container remove d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_haibt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 19:49:11 compute-0 systemd[1]: libpod-conmon-d92a2b175dd81c698cc541411cb49d1d86fc6b27c0b29a4ecfc4817fac78cb9e.scope: Deactivated successfully.
Nov 24 19:49:11 compute-0 systemd[1]: Reloading.
Nov 24 19:49:11 compute-0 systemd-rc-local-generator[99692]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:49:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:11 compute-0 systemd-sysv-generator[99698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:49:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 24 19:49:11 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3435529717' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 19:49:11 compute-0 eloquent_morse[99583]: [client.openstack]
Nov 24 19:49:11 compute-0 eloquent_morse[99583]:         key = AQD6tSRpAAAAABAAQR/Xi2jttOzDX+chNv0thg==
Nov 24 19:49:11 compute-0 eloquent_morse[99583]:         caps mgr = "allow *"
Nov 24 19:49:11 compute-0 eloquent_morse[99583]:         caps mon = "profile rbd"
Nov 24 19:49:11 compute-0 eloquent_morse[99583]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
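[Editor's note] The eloquent_morse container ran `ceph auth get client.openstack` and printed the keyring stanza above: the secret plus mgr/mon/osd caps scoped to the OpenStack RBD pools. The format is a simple INI-like block; a small parser sketch (keyring.txt is an illustrative filename, and the quoting rules assumed here are just those visible in the output shown):

    # Parse a `ceph auth get` keyring stanza into entity name and fields.
    entity, fields = None, {}
    with open("keyring.txt") as f:      # hypothetical capture of the output above
        for line in f:
            line = line.strip()
            if line.startswith("[") and line.endswith("]"):
                entity = line[1:-1]
            elif "=" in line:
                k, _, v = line.partition("=")
                fields[k.strip()] = v.strip().strip('"')

    print(entity)              # client.openstack
    print(fields["caps osd"])  # profile rbd pool=vms, ...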
Nov 24 19:49:11 compute-0 systemd[1]: Reloading.
Nov 24 19:49:11 compute-0 podman[99552]: 2025-11-24 19:49:11.99141409 +0000 UTC m=+0.863870273 container died 00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8 (image=quay.io/ceph/ceph:v18, name=eloquent_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:12 compute-0 systemd-rc-local-generator[99750]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:49:12 compute-0 systemd-sysv-generator[99753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:49:12 compute-0 systemd[1]: libpod-00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8.scope: Deactivated successfully.
Nov 24 19:49:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b094c4bf385fc02c14a927d377c4258c8e3296d301e9c36d54018f2b4ea4dcbe-merged.mount: Deactivated successfully.
Nov 24 19:49:12 compute-0 podman[99552]: 2025-11-24 19:49:12.262799672 +0000 UTC m=+1.135255895 container remove 00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8 (image=quay.io/ceph/ceph:v18, name=eloquent_morse, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 19:49:12 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.dgkdrf for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:49:12 compute-0 systemd[1]: libpod-conmon-00d7a71260d8c8e54235563efb78052d5b1e76f7d95b3d358f168ed1b860a3b8.scope: Deactivated successfully.
Nov 24 19:49:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:12 compute-0 sudo[99500]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:12 compute-0 ceph-mon[75677]: Deploying daemon rgw.rgw.compute-0.dgkdrf on compute-0
Nov 24 19:49:12 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3435529717' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 19:49:12 compute-0 ceph-mon[75677]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:12 compute-0 podman[99808]: 2025-11-24 19:49:12.602816423 +0000 UTC m=+0.067297209 container create c6648a1feda46ace8247a202ec2a859ecd2034f8450031e0751f9459c5e4940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:12 compute-0 podman[99808]: 2025-11-24 19:49:12.574666221 +0000 UTC m=+0.039147067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb5ff981d828ac9b54a3b14575bb0675bef280399e4db48d48075719448be55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb5ff981d828ac9b54a3b14575bb0675bef280399e4db48d48075719448be55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb5ff981d828ac9b54a3b14575bb0675bef280399e4db48d48075719448be55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fb5ff981d828ac9b54a3b14575bb0675bef280399e4db48d48075719448be55/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dgkdrf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:12 compute-0 podman[99808]: 2025-11-24 19:49:12.684819843 +0000 UTC m=+0.149300659 container init c6648a1feda46ace8247a202ec2a859ecd2034f8450031e0751f9459c5e4940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 19:49:12 compute-0 podman[99808]: 2025-11-24 19:49:12.695517938 +0000 UTC m=+0.159998724 container start c6648a1feda46ace8247a202ec2a859ecd2034f8450031e0751f9459c5e4940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:12 compute-0 bash[99808]: c6648a1feda46ace8247a202ec2a859ecd2034f8450031e0751f9459c5e4940c
Nov 24 19:49:12 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.dgkdrf for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:49:12 compute-0 sudo[99527]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:12 compute-0 radosgw[99827]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:49:12 compute-0 radosgw[99827]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 24 19:49:12 compute-0 radosgw[99827]: framework: beast
Nov 24 19:49:12 compute-0 radosgw[99827]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 24 19:49:12 compute-0 radosgw[99827]: init_numa not setting numa affinity
Nov 24 19:49:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 19:49:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:12 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev efd66f7c-67df-472c-98f4-ce619b07aa95 (Updating rgw.rgw deployment (+1 -> 1))
Nov 24 19:49:12 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event efd66f7c-67df-472c-98f4-ce619b07aa95 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 24 19:49:12 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 24 19:49:12 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 24 19:49:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 19:49:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 19:49:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:12 compute-0 sudo[99889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:12 compute-0 sudo[99889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:12 compute-0 sudo[99889]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:13 compute-0 sudo[99914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:49:13 compute-0 sudo[99914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:13 compute-0 sudo[99914]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:13 compute-0 sudo[99939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:13 compute-0 sudo[99939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:13 compute-0 sudo[99939]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:13 compute-0 sudo[99964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:13 compute-0 sudo[99964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:13 compute-0 sudo[99964]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:13 compute-0 sudo[100008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:13 compute-0 sudo[100008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:13 compute-0 sudo[100008]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:13 compute-0 sudo[100063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:49:13 compute-0 sudo[100063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:13 compute-0 sudo[100213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlijiyzfesotzzhwegujejputbjizcmf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764013753.2564888-37896-192881111850144/async_wrapper.py j923134143227 30 /home/zuul/.ansible/tmp/ansible-tmp-1764013753.2564888-37896-192881111850144/AnsiballZ_command.py _'
Nov 24 19:49:13 compute-0 sudo[100213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:13 compute-0 ceph-mon[75677]: Saving service rgw.rgw spec with placement compute-0
Nov 24 19:49:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 24 19:49:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 24 19:49:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 24 19:49:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 24 19:49:13 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 19:49:13 compute-0 ansible-async_wrapper.py[100225]: Invoked with j923134143227 30 /home/zuul/.ansible/tmp/ansible-tmp-1764013753.2564888-37896-192881111850144/AnsiballZ_command.py _
Nov 24 19:49:13 compute-0 ansible-async_wrapper.py[100254]: Starting module and watcher
Nov 24 19:49:13 compute-0 ansible-async_wrapper.py[100254]: Start watching 100255 (30)
Nov 24 19:49:13 compute-0 ansible-async_wrapper.py[100255]: Start module (100255)
Nov 24 19:49:13 compute-0 ansible-async_wrapper.py[100225]: Return async_wrapper task started.
Nov 24 19:49:13 compute-0 sudo[100213]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:14 compute-0 podman[100263]: 2025-11-24 19:49:14.079409851 +0000 UTC m=+0.083923150 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:49:14 compute-0 python3[100259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.19080361 +0000 UTC m=+0.065662658 container create 494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e (image=quay.io/ceph/ceph:v18, name=elated_mestorf, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:14 compute-0 podman[100263]: 2025-11-24 19:49:14.198516962 +0000 UTC m=+0.203030211 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:14 compute-0 systemd[1]: Started libpod-conmon-494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e.scope.
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.162775892 +0000 UTC m=+0.037634980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v76: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a4282dfc6b0bfa1cbb9c928fc4a249225537721aa78fd0cfe9bce2487a3ea7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a4282dfc6b0bfa1cbb9c928fc4a249225537721aa78fd0cfe9bce2487a3ea7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.325334354 +0000 UTC m=+0.200193462 container init 494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e (image=quay.io/ceph/ceph:v18, name=elated_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.334405709 +0000 UTC m=+0.209264757 container start 494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e (image=quay.io/ceph/ceph:v18, name=elated_mestorf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.340984385 +0000 UTC m=+0.215843423 container attach 494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e (image=quay.io/ceph/ceph:v18, name=elated_mestorf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:14 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 4 completed events
Nov 24 19:49:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:49:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:14 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:14 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:49:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 24 19:49:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 19:49:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 24 19:49:14 compute-0 ceph-mon[75677]: osdmap e30: 3 total, 3 up, 3 in
Nov 24 19:49:14 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 19:49:14 compute-0 ceph-mon[75677]: pgmap v76: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:14 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:14 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 24 19:49:14 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:14 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:14 compute-0 elated_mestorf[100309]: 
Nov 24 19:49:14 compute-0 elated_mestorf[100309]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 19:49:14 compute-0 systemd[1]: libpod-494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e.scope: Deactivated successfully.
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.894064631 +0000 UTC m=+0.768923639 container died 494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e (image=quay.io/ceph/ceph:v18, name=elated_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a4282dfc6b0bfa1cbb9c928fc4a249225537721aa78fd0cfe9bce2487a3ea7-merged.mount: Deactivated successfully.
Nov 24 19:49:14 compute-0 podman[100282]: 2025-11-24 19:49:14.959748989 +0000 UTC m=+0.834607997 container remove 494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e (image=quay.io/ceph/ceph:v18, name=elated_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:14 compute-0 systemd[1]: libpod-conmon-494bdbaaade1ec8362a3f8a95d80740354e51c2a18cdfaf47ae71ed5d877025e.scope: Deactivated successfully.
Nov 24 19:49:14 compute-0 ansible-async_wrapper.py[100255]: Module complete (100255)
Nov 24 19:49:15 compute-0 sudo[100063]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f5ffc9b8-21ec-447a-bc2a-7b1fc3f2f192 does not exist
Nov 24 19:49:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e943539d-de38-42f6-9f3a-c2f18dfcb40b does not exist
Nov 24 19:49:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 08c2d1e0-c7d5-452a-9e94-bd99ad1dddfc does not exist
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:15 compute-0 sudo[100505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqziqfljhbspkhrjxitorcppbugutycx ; /usr/bin/python3'
Nov 24 19:49:15 compute-0 sudo[100505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:15 compute-0 sudo[100504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:15 compute-0 sudo[100504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:15 compute-0 sudo[100504]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:15 compute-0 sudo[100532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:15 compute-0 sudo[100532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:15 compute-0 sudo[100532]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:15 compute-0 python3[100521]: ansible-ansible.legacy.async_status Invoked with jid=j923134143227.100225 mode=status _async_dir=/root/.ansible_async
Nov 24 19:49:15 compute-0 sudo[100505]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:15 compute-0 sudo[100557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:15 compute-0 sudo[100557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:15 compute-0 sudo[100557]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:15 compute-0 sudo[100605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:49:15 compute-0 sudo[100605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:15 compute-0 sudo[100653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztwgftlqwdmbvnwnhuzjczpjoulpgnvl ; /usr/bin/python3'
Nov 24 19:49:15 compute-0 sudo[100653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:15 compute-0 python3[100655]: ansible-ansible.legacy.async_status Invoked with jid=j923134143227.100225 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 19:49:15 compute-0 sudo[100653]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 24 19:49:15 compute-0 ceph-mon[75677]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 19:49:15 compute-0 ceph-mon[75677]: osdmap e31: 3 total, 3 up, 3 in
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 24 19:49:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 24 19:49:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 19:49:15 compute-0 podman[100697]: 2025-11-24 19:49:15.850902166 +0000 UTC m=+0.058932407 container create aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:49:15 compute-0 systemd[1]: Started libpod-conmon-aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32.scope.
Nov 24 19:49:15 compute-0 podman[100697]: 2025-11-24 19:49:15.833401278 +0000 UTC m=+0.041431499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:15 compute-0 podman[100697]: 2025-11-24 19:49:15.960254571 +0000 UTC m=+0.168284852 container init aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:49:15 compute-0 podman[100697]: 2025-11-24 19:49:15.97556462 +0000 UTC m=+0.183594861 container start aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:49:15 compute-0 podman[100697]: 2025-11-24 19:49:15.979523465 +0000 UTC m=+0.187553756 container attach aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:15 compute-0 infallible_benz[100713]: 167 167
Nov 24 19:49:15 compute-0 systemd[1]: libpod-aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32.scope: Deactivated successfully.
Nov 24 19:49:15 compute-0 podman[100697]: 2025-11-24 19:49:15.982187939 +0000 UTC m=+0.190218170 container died aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-76f1711d5984f0e73b2570b04de847711be58542655d9b48e0a25080d0c51bfe-merged.mount: Deactivated successfully.
Nov 24 19:49:16 compute-0 podman[100697]: 2025-11-24 19:49:16.033022411 +0000 UTC m=+0.241052642 container remove aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:16 compute-0 systemd[1]: libpod-conmon-aa921f8351384008e601393c5c504f7da81eda8fe121b2f603ec70ceb846bc32.scope: Deactivated successfully.
Nov 24 19:49:16 compute-0 sudo[100755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlzqyhmzbvmelagvteobnfagdsmxczov ; /usr/bin/python3'
Nov 24 19:49:16 compute-0 sudo[100755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:16 compute-0 python3[100757]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:16 compute-0 podman[100763]: 2025-11-24 19:49:16.268617931 +0000 UTC m=+0.066643429 container create 60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v79: 9 pgs: 1 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Nov 24 19:49:16 compute-0 systemd[1]: Started libpod-conmon-60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd.scope.
Nov 24 19:49:16 compute-0 podman[100763]: 2025-11-24 19:49:16.243378851 +0000 UTC m=+0.041404449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:16 compute-0 podman[100777]: 2025-11-24 19:49:16.337755237 +0000 UTC m=+0.063653735 container create bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362 (image=quay.io/ceph/ceph:v18, name=unruffled_ellis, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f68582c2a80ac85e1abf921f2dabbebb838656fcb60b734e280ea410894d839/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f68582c2a80ac85e1abf921f2dabbebb838656fcb60b734e280ea410894d839/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f68582c2a80ac85e1abf921f2dabbebb838656fcb60b734e280ea410894d839/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f68582c2a80ac85e1abf921f2dabbebb838656fcb60b734e280ea410894d839/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f68582c2a80ac85e1abf921f2dabbebb838656fcb60b734e280ea410894d839/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 podman[100763]: 2025-11-24 19:49:16.377508202 +0000 UTC m=+0.175533750 container init 60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 19:49:16 compute-0 systemd[1]: Started libpod-conmon-bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362.scope.
Nov 24 19:49:16 compute-0 podman[100763]: 2025-11-24 19:49:16.397681124 +0000 UTC m=+0.195706652 container start 60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 19:49:16 compute-0 podman[100763]: 2025-11-24 19:49:16.403399023 +0000 UTC m=+0.201424561 container attach 60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:16 compute-0 podman[100777]: 2025-11-24 19:49:16.31773673 +0000 UTC m=+0.043635248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef850fd179b186055fbaadd21548a15f12e678c73099e5e8c3b4cbfc2cb15b6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef850fd179b186055fbaadd21548a15f12e678c73099e5e8c3b4cbfc2cb15b6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:16 compute-0 podman[100777]: 2025-11-24 19:49:16.434064184 +0000 UTC m=+0.159962692 container init bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362 (image=quay.io/ceph/ceph:v18, name=unruffled_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:16 compute-0 podman[100777]: 2025-11-24 19:49:16.444233153 +0000 UTC m=+0.170131671 container start bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362 (image=quay.io/ceph/ceph:v18, name=unruffled_ellis, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:16 compute-0 podman[100777]: 2025-11-24 19:49:16.447923398 +0000 UTC m=+0.173821916 container attach bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362 (image=quay.io/ceph/ceph:v18, name=unruffled_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 19:49:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:16 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 24 19:49:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 19:49:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 24 19:49:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 24 19:49:16 compute-0 ceph-mon[75677]: osdmap e32: 3 total, 3 up, 3 in
Nov 24 19:49:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 19:49:16 compute-0 ceph-mon[75677]: pgmap v79: 9 pgs: 1 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Nov 24 19:49:16 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:16 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14261 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:16 compute-0 unruffled_ellis[100799]: 
Nov 24 19:49:16 compute-0 unruffled_ellis[100799]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 19:49:16 compute-0 systemd[1]: libpod-bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362.scope: Deactivated successfully.
Nov 24 19:49:17 compute-0 podman[100828]: 2025-11-24 19:49:17.061795379 +0000 UTC m=+0.042114821 container died bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362 (image=quay.io/ceph/ceph:v18, name=unruffled_ellis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef850fd179b186055fbaadd21548a15f12e678c73099e5e8c3b4cbfc2cb15b6b-merged.mount: Deactivated successfully.
Nov 24 19:49:17 compute-0 podman[100828]: 2025-11-24 19:49:17.126384312 +0000 UTC m=+0.106703754 container remove bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362 (image=quay.io/ceph/ceph:v18, name=unruffled_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:17 compute-0 systemd[1]: libpod-conmon-bc2c9c35e0f65e7116e94a9ecc602cffa2326a3119571b7c9efff917227e1362.scope: Deactivated successfully.
Nov 24 19:49:17 compute-0 sudo[100755]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:17 compute-0 optimistic_nobel[100793]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:49:17 compute-0 optimistic_nobel[100793]: --> relative data size: 1.0
Nov 24 19:49:17 compute-0 optimistic_nobel[100793]: --> All data devices are unavailable
Nov 24 19:49:17 compute-0 systemd[1]: libpod-60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd.scope: Deactivated successfully.
Nov 24 19:49:17 compute-0 podman[100763]: 2025-11-24 19:49:17.570894787 +0000 UTC m=+1.368920315 container died 60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:49:17 compute-0 systemd[1]: libpod-60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd.scope: Consumed 1.116s CPU time.
Nov 24 19:49:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f68582c2a80ac85e1abf921f2dabbebb838656fcb60b734e280ea410894d839-merged.mount: Deactivated successfully.
Nov 24 19:49:17 compute-0 podman[100763]: 2025-11-24 19:49:17.723642392 +0000 UTC m=+1.521667900 container remove 60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_nobel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:49:17 compute-0 systemd[1]: libpod-conmon-60c0bfdcc346e403bd5d1761fc49d2b28d2084f571d747489d69890e320ac4dd.scope: Deactivated successfully.
Nov 24 19:49:17 compute-0 sudo[100605]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 24 19:49:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 19:49:17 compute-0 ceph-mon[75677]: osdmap e33: 3 total, 3 up, 3 in
Nov 24 19:49:17 compute-0 ceph-mon[75677]: from='client.14261 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:17 compute-0 sudo[100878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:17 compute-0 sudo[100878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:17 compute-0 sudo[100878]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:17 compute-0 sudo[100925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfgdbzqslkltfrtmcxllfjzxqlldisui ; /usr/bin/python3'
Nov 24 19:49:17 compute-0 sudo[100925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 24 19:49:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 24 19:49:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 24 19:49:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 19:49:17 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:17 compute-0 sudo[100927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:17 compute-0 sudo[100927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:17 compute-0 sudo[100927]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:18 compute-0 sudo[100954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:18 compute-0 sudo[100954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:18 compute-0 python3[100933]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:18 compute-0 sudo[100954]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:18 compute-0 sudo[100980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:49:18 compute-0 sudo[100980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:18 compute-0 podman[100979]: 2025-11-24 19:49:18.214358075 +0000 UTC m=+0.114476987 container create ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7 (image=quay.io/ceph/ceph:v18, name=bold_haslett, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:18 compute-0 podman[100979]: 2025-11-24 19:49:18.142719051 +0000 UTC m=+0.042838013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v82: 10 pgs: 2 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 24 19:49:18 compute-0 systemd[1]: Started libpod-conmon-ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7.scope.
Nov 24 19:49:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3116ea9e3fd206ae67a66a9344b4e54ab833a635b3f2ee2d61e73ee9a9558/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3116ea9e3fd206ae67a66a9344b4e54ab833a635b3f2ee2d61e73ee9a9558/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:18 compute-0 podman[100979]: 2025-11-24 19:49:18.369405412 +0000 UTC m=+0.269524374 container init ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7 (image=quay.io/ceph/ceph:v18, name=bold_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:18 compute-0 podman[100979]: 2025-11-24 19:49:18.380945104 +0000 UTC m=+0.281063996 container start ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7 (image=quay.io/ceph/ceph:v18, name=bold_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:18 compute-0 podman[100979]: 2025-11-24 19:49:18.384840205 +0000 UTC m=+0.284959127 container attach ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7 (image=quay.io/ceph/ceph:v18, name=bold_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.627314962 +0000 UTC m=+0.063562283 container create 36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:49:18 compute-0 systemd[1]: Started libpod-conmon-36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e.scope.
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.600280094 +0000 UTC m=+0.036527465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.718229979 +0000 UTC m=+0.154477290 container init 36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.728570613 +0000 UTC m=+0.164817934 container start 36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.732660052 +0000 UTC m=+0.168907373 container attach 36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:49:18 compute-0 loving_mclean[101081]: 167 167
Nov 24 19:49:18 compute-0 systemd[1]: libpod-36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e.scope: Deactivated successfully.
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.739779025 +0000 UTC m=+0.176026396 container died 36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa329e0b04ebe2917b44ab43beaa2e04e0d94231bfb3f27d69705a9c0308eec-merged.mount: Deactivated successfully.
Nov 24 19:49:18 compute-0 podman[101064]: 2025-11-24 19:49:18.792156096 +0000 UTC m=+0.228403417 container remove 36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:49:18 compute-0 systemd[1]: libpod-conmon-36545b6e0ad647b2dc6a6c44812e643eb503fe6fbb80eb8467ee947662267e1e.scope: Deactivated successfully.
Nov 24 19:49:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 24 19:49:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 19:49:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 24 19:49:18 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 24 19:49:18 compute-0 ceph-mon[75677]: osdmap e34: 3 total, 3 up, 3 in
Nov 24 19:49:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 19:49:18 compute-0 ceph-mon[75677]: pgmap v82: 10 pgs: 2 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 24 19:49:18 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:18 compute-0 ansible-async_wrapper.py[100254]: Done in kid B.
Nov 24 19:49:18 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:18 compute-0 bold_haslett[101019]: 
Nov 24 19:49:18 compute-0 bold_haslett[101019]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
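
The JSON array printed by the bold_haslett container above is the full service-spec export returned by 'ceph orch ls --export -f json' (crash, mds, mgr, mon, osd and rgw specs with their placements). A minimal sketch of how such an export could be summarized offline, assuming the array were saved to a hypothetical file named specs.json:

    # Minimal sketch (assumption: the orch ls --export JSON above is saved
    # as "specs.json"): print one line per service spec with its placement.
    import json

    with open("specs.json") as f:
        specs = json.load(f)

    for spec in specs:
        placement = spec.get("placement", {})
        # Placement is either an explicit host list or a host pattern.
        where = placement.get("hosts") or placement.get("host_pattern")
        print(f"{spec['service_name']:<24} type={spec['service_type']:<6} placement={where}")

For the export above this would yield lines such as "osd.default_drive_group  type=osd  placement=['compute-0']", matching the single-host layout visible in the log.
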
Nov 24 19:49:19 compute-0 systemd[1]: libpod-ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7.scope: Deactivated successfully.
Nov 24 19:49:19 compute-0 podman[101124]: 2025-11-24 19:49:18.968529401 +0000 UTC m=+0.021545266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:19 compute-0 podman[101124]: 2025-11-24 19:49:19.380876878 +0000 UTC m=+0.433892683 container create a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sinoussi, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:19 compute-0 systemd[1]: Started libpod-conmon-a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29.scope.
Nov 24 19:49:19 compute-0 podman[100979]: 2025-11-24 19:49:19.432999531 +0000 UTC m=+1.333118443 container died ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7 (image=quay.io/ceph/ceph:v18, name=bold_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:49:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548b00eb8fa53440c359787f6389d24373b601ceac9030d56a49395ada2f3c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548b00eb8fa53440c359787f6389d24373b601ceac9030d56a49395ada2f3c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548b00eb8fa53440c359787f6389d24373b601ceac9030d56a49395ada2f3c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548b00eb8fa53440c359787f6389d24373b601ceac9030d56a49395ada2f3c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf3116ea9e3fd206ae67a66a9344b4e54ab833a635b3f2ee2d61e73ee9a9558-merged.mount: Deactivated successfully.
Nov 24 19:49:19 compute-0 podman[101124]: 2025-11-24 19:49:19.511522571 +0000 UTC m=+0.564538436 container init a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:49:19 compute-0 podman[101124]: 2025-11-24 19:49:19.524088034 +0000 UTC m=+0.577103849 container start a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:49:19 compute-0 podman[100979]: 2025-11-24 19:49:19.568798345 +0000 UTC m=+1.468917257 container remove ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7 (image=quay.io/ceph/ceph:v18, name=bold_haslett, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:49:19 compute-0 systemd[1]: libpod-conmon-ba2753dcc7e93354cfb6c00f8213abfbfe54c5acd6a075be1da33e484cffcea7.scope: Deactivated successfully.
Nov 24 19:49:19 compute-0 podman[101124]: 2025-11-24 19:49:19.582763403 +0000 UTC m=+0.635779258 container attach a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 19:49:19 compute-0 sudo[100925]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 24 19:49:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 24 19:49:19 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 24 19:49:19 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 24 19:49:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 19:49:19 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3518808033' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 19:49:19 compute-0 ceph-mon[75677]: osdmap e35: 3 total, 3 up, 3 in
Nov 24 19:49:19 compute-0 ceph-mon[75677]: from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]: {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:     "0": [
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:         {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "devices": [
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "/dev/loop3"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             ],
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_name": "ceph_lv0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_size": "21470642176",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "name": "ceph_lv0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "tags": {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.crush_device_class": "",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.encrypted": "0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osd_id": "0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.type": "block",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.vdo": "0"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             },
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "type": "block",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "vg_name": "ceph_vg0"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:         }
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:     ],
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:     "1": [
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:         {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "devices": [
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "/dev/loop4"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             ],
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_name": "ceph_lv1",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_size": "21470642176",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "name": "ceph_lv1",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "tags": {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.crush_device_class": "",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.encrypted": "0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osd_id": "1",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.type": "block",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.vdo": "0"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             },
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "type": "block",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "vg_name": "ceph_vg1"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:         }
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:     ],
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:     "2": [
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:         {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "devices": [
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "/dev/loop5"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             ],
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_name": "ceph_lv2",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_size": "21470642176",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "name": "ceph_lv2",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "tags": {
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.crush_device_class": "",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.encrypted": "0",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osd_id": "2",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.type": "block",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:                 "ceph.vdo": "0"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             },
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "type": "block",
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:             "vg_name": "ceph_vg2"
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:         }
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]:     ]
Nov 24 19:49:20 compute-0 distracted_sinoussi[101154]: }
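
The pretty-printed block above is the 'ceph-volume lvm list --format json' inventory emitted by the distracted_sinoussi container: a JSON object keyed by OSD id ("0", "1", "2"), each mapping to the logical volume, backing loop device, and LVM tags for that OSD. A minimal sketch that flattens it into one line per OSD, assuming the JSON were saved to a hypothetical file named lvm_list.json:

    # Minimal sketch (assumption: the ceph-volume lvm list JSON above is
    # saved as "lvm_list.json"): map each OSD id to its LV and device.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Keys are OSD ids as strings; sort numerically for stable output.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")

Against the output above, the first line would read "osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e encrypted=0".
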
Nov 24 19:49:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 3 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:20 compute-0 systemd[1]: libpod-a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29.scope: Deactivated successfully.
Nov 24 19:49:20 compute-0 podman[101178]: 2025-11-24 19:49:20.368019032 +0000 UTC m=+0.039447817 container died a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sinoussi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-548b00eb8fa53440c359787f6389d24373b601ceac9030d56a49395ada2f3c1b-merged.mount: Deactivated successfully.
Nov 24 19:49:20 compute-0 podman[101178]: 2025-11-24 19:49:20.444998154 +0000 UTC m=+0.116426939 container remove a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:49:20 compute-0 systemd[1]: libpod-conmon-a6ea672ff79de1263890b5ede78f45ea428e7c5df8928dbe5330e25d50d8cf29.scope: Deactivated successfully.
Nov 24 19:49:20 compute-0 sudo[100980]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:20 compute-0 sudo[101216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubwbjrlzakefdrnnevmabnknhjxdtrbz ; /usr/bin/python3'
Nov 24 19:49:20 compute-0 sudo[101216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:20 compute-0 sudo[101219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:20 compute-0 sudo[101219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:20 compute-0 sudo[101219]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:20 compute-0 sudo[101244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:20 compute-0 sudo[101244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:20 compute-0 python3[101218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:20 compute-0 sudo[101244]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:20 compute-0 sudo[101270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:20 compute-0 sudo[101270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:20 compute-0 sudo[101270]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:20 compute-0 podman[101269]: 2025-11-24 19:49:20.789287179 +0000 UTC m=+0.075041522 container create 1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3 (image=quay.io/ceph/ceph:v18, name=hardcore_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 19:49:20 compute-0 systemd[1]: Started libpod-conmon-1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3.scope.
Nov 24 19:49:20 compute-0 podman[101269]: 2025-11-24 19:49:20.7589786 +0000 UTC m=+0.044732993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfb26714a8a5e52e0b8c7029d373ef6293f2ae343794735162e8283ab7c62ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbfb26714a8a5e52e0b8c7029d373ef6293f2ae343794735162e8283ab7c62ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:20 compute-0 sudo[101307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:49:20 compute-0 sudo[101307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:20 compute-0 podman[101269]: 2025-11-24 19:49:20.893786933 +0000 UTC m=+0.179541326 container init 1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3 (image=quay.io/ceph/ceph:v18, name=hardcore_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:20 compute-0 podman[101269]: 2025-11-24 19:49:20.9064427 +0000 UTC m=+0.192197033 container start 1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3 (image=quay.io/ceph/ceph:v18, name=hardcore_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 19:49:20 compute-0 podman[101269]: 2025-11-24 19:49:20.910127895 +0000 UTC m=+0.195882288 container attach 1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3 (image=quay.io/ceph/ceph:v18, name=hardcore_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:49:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 24 19:49:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 19:49:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 24 19:49:20 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 24 19:49:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 24 19:49:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 19:49:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:20 compute-0 ceph-mon[75677]: osdmap e36: 3 total, 3 up, 3 in
Nov 24 19:49:20 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 19:49:20 compute-0 ceph-mon[75677]: pgmap v85: 11 pgs: 3 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:20 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 19:49:20 compute-0 ceph-mon[75677]: osdmap e37: 3 total, 3 up, 3 in
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.328577433 +0000 UTC m=+0.065228604 container create 64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:21 compute-0 systemd[1]: Started libpod-conmon-64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b.scope.
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.301872577 +0000 UTC m=+0.038523798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.421184914 +0000 UTC m=+0.157836105 container init 64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.432650703 +0000 UTC m=+0.169301854 container start 64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:49:21 compute-0 quirky_lichterman[101412]: 167 167
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.43634724 +0000 UTC m=+0.172998421 container attach 64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lichterman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:21 compute-0 systemd[1]: libpod-64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b.scope: Deactivated successfully.
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.438777066 +0000 UTC m=+0.175428247 container died 64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 19:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5418becc647741138cd21fa9feade5f894b8bdb89ed23212a38bd00a7d6c665a-merged.mount: Deactivated successfully.
Nov 24 19:49:21 compute-0 podman[101396]: 2025-11-24 19:49:21.481953678 +0000 UTC m=+0.218604829 container remove 64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 19:49:21 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:21 compute-0 systemd[1]: libpod-conmon-64b414cfe38f7010f5a0d0280a01372fd7485ca60e4f7d9044eafa00b91f1a0b.scope: Deactivated successfully.
Nov 24 19:49:21 compute-0 hardcore_antonelli[101330]: 
Nov 24 19:49:21 compute-0 hardcore_antonelli[101330]: [{"container_id": "82a5b30abd5b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.60%", "created": "2025-11-24T19:47:53.793408Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-24T19:47:53.856311Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T19:49:15.060492Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-11-24T19:47:53.681243Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@crash.compute-0", "version": "18.2.7"}, {"container_id": "68bd91a1ba69", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "29.15%", "created": "2025-11-24T19:46:44.264600Z", "daemon_id": "compute-0.ofslrn", "daemon_name": "mgr.compute-0.ofslrn", "daemon_type": "mgr", "events": ["2025-11-24T19:48:50.955143Z daemon:mgr.compute-0.ofslrn [INFO] \"Reconfigured mgr.compute-0.ofslrn on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T19:49:15.060319Z", "memory_usage": 548405248, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-24T19:46:44.119540Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mgr.compute-0.ofslrn", "version": "18.2.7"}, {"container_id": "ba22cb483d92", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.44%", "created": "2025-11-24T19:46:38.288441Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-24T19:48:49.998454Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T19:49:15.060139Z", "memory_request": 2147483648, "memory_usage": 37444648, "ports": [], "service_name": "mon", "started": "2025-11-24T19:46:41.494462Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@mon.compute-0", "version": "18.2.7"}, {"container_id": "bbba25ec9aab", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.17%", "created": "2025-11-24T19:48:21.838922Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-24T19:48:21.907518Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T19:49:15.060686Z", "memory_request": 4294967296, "memory_usage": 58762199, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T19:48:21.708400Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@osd.0", "version": "18.2.7"}, {"container_id": "392320b68810", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.27%", "created": "2025-11-24T19:48:27.361010Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-24T19:48:27.441686Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T19:49:15.060824Z", "memory_request": 4294967296, "memory_usage": 56182702, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T19:48:27.190204Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@osd.1", "version": "18.2.7"}, {"container_id": "871186a32d76", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.45%", "created": "2025-11-24T19:48:32.960099Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-24T19:48:33.047528Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T19:49:15.060955Z", "memory_request": 4294967296, "memory_usage": 55438213, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T19:48:32.792115Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@osd.2", "version": "18.2.7"}, {"container_id": "c6648a1feda4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "9.29%", "created": "2025-11-24T19:49:12.714925Z", "daemon_id": "rgw.compute-0.dgkdrf", "daemon_name": "rgw.rgw.compute-0.dgkdrf", "daemon_type": "rgw", "events": ["2025-11-24T19:49:12.780710Z daemon:rgw.rgw.compute-0.dgkdrf [INFO] \"Deployed rgw.rgw.compute-0.dgkdrf on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-24T19:49:15.061091Z", "memory_usage": 20751319, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-24T19:49:12.582265Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-05e060a3-406b-57f0-89d2-ec35f5b09305@rgw.rgw.compute-0.dgkdrf", "version": "18.2.7"}]
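
The hardcore_antonelli payload above is cephadm's per-host daemon inventory (the JSON form of `ceph orch ps`): one object per daemon on compute-0 — crash, mgr, mon, osd.0–2, and the just-deployed rgw — each carrying the container id, pinned image digest, cpu_percentage, memory_usage in bytes, bound ports, and the backing systemd unit. A minimal sketch of how such a payload could be summarized per daemon type, assuming it has been saved to a local file (the file name is hypothetical; the field names come straight from the log):

    import json
    from collections import defaultdict

    # Hypothetical input: the orch-ps JSON array from the log, saved as orch_ps.json
    with open("orch_ps.json") as f:
        daemons = json.load(f)

    by_type = defaultdict(list)
    for d in daemons:
        by_type[d["daemon_type"]].append(d)

    for dtype, group in sorted(by_type.items()):
        mem = sum(d.get("memory_usage", 0) for d in group)  # reported in bytes
        print(f"{dtype}: {len(group)} running, {mem / 2**20:.1f} MiB "
              f"({', '.join(d['daemon_name'] for d in group)})")

Run against the payload above, this would report one crash, one mgr, one mon, three osds, and one rgw daemon, all at version 18.2.7.
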
Nov 24 19:49:21 compute-0 systemd[1]: libpod-1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3.scope: Deactivated successfully.
Nov 24 19:49:21 compute-0 podman[101269]: 2025-11-24 19:49:21.525419359 +0000 UTC m=+0.811173702 container died 1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3 (image=quay.io/ceph/ceph:v18, name=hardcore_antonelli, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 19:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbfb26714a8a5e52e0b8c7029d373ef6293f2ae343794735162e8283ab7c62ad-merged.mount: Deactivated successfully.
Nov 24 19:49:21 compute-0 podman[101269]: 2025-11-24 19:49:21.588844007 +0000 UTC m=+0.874598350 container remove 1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3 (image=quay.io/ceph/ceph:v18, name=hardcore_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 19:49:21 compute-0 systemd[1]: libpod-conmon-1b1762fc8e0ea956fd13870df902fe00abd9dff5dcc79b6837f2bc0203e0f9c3.scope: Deactivated successfully.
Nov 24 19:49:21 compute-0 sudo[101216]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:21 compute-0 podman[101450]: 2025-11-24 19:49:21.696736707 +0000 UTC m=+0.053968612 container create e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:49:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:21 compute-0 systemd[1]: Started libpod-conmon-e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707.scope.
Nov 24 19:49:21 compute-0 podman[101450]: 2025-11-24 19:49:21.672074324 +0000 UTC m=+0.029306299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbcbda12e5d7cc44c82ea6fc90f7aa4388d62043f98ab3af5f26b0ea76d9761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbcbda12e5d7cc44c82ea6fc90f7aa4388d62043f98ab3af5f26b0ea76d9761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbcbda12e5d7cc44c82ea6fc90f7aa4388d62043f98ab3af5f26b0ea76d9761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbcbda12e5d7cc44c82ea6fc90f7aa4388d62043f98ab3af5f26b0ea76d9761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:21 compute-0 podman[101450]: 2025-11-24 19:49:21.808734345 +0000 UTC m=+0.165966280 container init e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 19:49:21 compute-0 podman[101450]: 2025-11-24 19:49:21.816650263 +0000 UTC m=+0.173882188 container start e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 19:49:21 compute-0 podman[101450]: 2025-11-24 19:49:21.820651099 +0000 UTC m=+0.177883024 container attach e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 24 19:49:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 19:49:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 24 19:49:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 24 19:49:21 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 19:49:21 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2147305791' entity='client.rgw.rgw.compute-0.dgkdrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 19:49:21 compute-0 ceph-mon[75677]: osdmap e38: 3 total, 3 up, 3 in
Nov 24 19:49:22 compute-0 radosgw[99827]: LDAP not started since no server URIs were provided in the configuration.
Nov 24 19:49:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf[99823]: 2025-11-24T19:49:22.096+0000 7f73d0002940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 24 19:49:22 compute-0 radosgw[99827]: framework: beast
Nov 24 19:49:22 compute-0 radosgw[99827]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 24 19:49:22 compute-0 radosgw[99827]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 24 19:49:22 compute-0 radosgw[99827]: starting handler: beast
Nov 24 19:49:22 compute-0 radosgw[99827]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:49:22 compute-0 radosgw[99827]: mgrc service_daemon_register rgw.14265 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dgkdrf,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=b206323f-01b0-4233-8e7f-bdf8f52b5f3b,zone_name=default,zonegroup_id=cda578d4-d840-44ee-8dd6-1318c3b8d738,zonegroup_name=default}
Nov 24 19:49:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 1 creating+peering, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 24 19:49:22 compute-0 sudo[102044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrcllybjrataubgihjtpznxvzmxlhpf ; /usr/bin/python3'
Nov 24 19:49:22 compute-0 sudo[102044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:22 compute-0 python3[102049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
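
In the ansible task above, everything after the image name is passed as arguments to the binary named by `--entrypoint ceph`, so the throwaway quay.io/ceph/ceph:v18 container effectively runs `ceph --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -s -f json` with the host's /etc/ceph bind-mounted in. The JSON status it prints appears a moment later under the container's generated name, infallible_keldysh.
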
Nov 24 19:49:22 compute-0 podman[102065]: 2025-11-24 19:49:22.806625576 +0000 UTC m=+0.054937012 container create 3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784 (image=quay.io/ceph/ceph:v18, name=infallible_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 19:49:22 compute-0 brave_kepler[101468]: {
Nov 24 19:49:22 compute-0 brave_kepler[101468]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "osd_id": 2,
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "type": "bluestore"
Nov 24 19:49:22 compute-0 brave_kepler[101468]:     },
Nov 24 19:49:22 compute-0 brave_kepler[101468]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "osd_id": 1,
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "type": "bluestore"
Nov 24 19:49:22 compute-0 brave_kepler[101468]:     },
Nov 24 19:49:22 compute-0 brave_kepler[101468]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "osd_id": 0,
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:22 compute-0 brave_kepler[101468]:         "type": "bluestore"
Nov 24 19:49:22 compute-0 brave_kepler[101468]:     }
Nov 24 19:49:22 compute-0 brave_kepler[101468]: }
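
The brave_kepler listing above appears to be ceph-volume's JSON device listing: it ties each bluestore OSD uuid to its backing LVM device (ceph_vg0/1/2 → osd.0/1/2) and confirms all three belong to cluster fsid 05e060a3-406b-57f0-89d2-ec35f5b09305. A small sketch, assuming the JSON has been captured to a file (the file name is hypothetical), that re-keys it from osd_uuid to osd_id → device:

    import json

    # Hypothetical input: the brave_kepler JSON from the log, saved as osd_list.json
    with open("osd_list.json") as f:
        osds = json.load(f)  # keyed by osd_uuid, per the log

    for entry in sorted(osds.values(), key=lambda e: e["osd_id"]):
        print(f'osd.{entry["osd_id"]} ({entry["type"]}) -> {entry["device"]}')
    # Expected from the log: osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0, and so on
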
Nov 24 19:49:22 compute-0 systemd[1]: Started libpod-conmon-3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784.scope.
Nov 24 19:49:22 compute-0 systemd[1]: libpod-e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707.scope: Deactivated successfully.
Nov 24 19:49:22 compute-0 systemd[1]: libpod-e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707.scope: Consumed 1.034s CPU time.
Nov 24 19:49:22 compute-0 podman[101450]: 2025-11-24 19:49:22.848221299 +0000 UTC m=+1.205453224 container died e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 24 19:49:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dedebfe76a8f3ce4bd2fb4cfd4675efb596675394fa0ea9ba1c35c31f4091b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dedebfe76a8f3ce4bd2fb4cfd4675efb596675394fa0ea9ba1c35c31f4091b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:22 compute-0 podman[102065]: 2025-11-24 19:49:22.782759438 +0000 UTC m=+0.031070904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cbcbda12e5d7cc44c82ea6fc90f7aa4388d62043f98ab3af5f26b0ea76d9761-merged.mount: Deactivated successfully.
Nov 24 19:49:22 compute-0 podman[102065]: 2025-11-24 19:49:22.888272764 +0000 UTC m=+0.136584240 container init 3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784 (image=quay.io/ceph/ceph:v18, name=infallible_keldysh, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:49:22 compute-0 podman[102065]: 2025-11-24 19:49:22.897215023 +0000 UTC m=+0.145526459 container start 3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784 (image=quay.io/ceph/ceph:v18, name=infallible_keldysh, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:22 compute-0 podman[102065]: 2025-11-24 19:49:22.909741846 +0000 UTC m=+0.158053292 container attach 3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784 (image=quay.io/ceph/ceph:v18, name=infallible_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 19:49:22 compute-0 podman[101450]: 2025-11-24 19:49:22.918533282 +0000 UTC m=+1.275765177 container remove e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 19:49:22 compute-0 systemd[1]: libpod-conmon-e79222c3e622589e55d69ee30c146b3d813dc5720427c748657c8fe1b26c6707.scope: Deactivated successfully.
Nov 24 19:49:22 compute-0 sudo[101307]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 19:49:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:22 compute-0 ceph-mon[75677]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 19:49:22 compute-0 ceph-mon[75677]: pgmap v88: 11 pgs: 1 creating+peering, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 24 19:49:22 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:22 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a043cf87-db22-4670-af12-88cab20a7237 does not exist
Nov 24 19:49:22 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 344ef792-073c-4ff2-af0a-d54e884aac56 (Updating mds.cephfs deployment (+1 -> 1))
Nov 24 19:49:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jkqrlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 24 19:49:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jkqrlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 19:49:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jkqrlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 19:49:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:22 compute-0 ceph-mgr[75975]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.jkqrlp on compute-0
Nov 24 19:49:22 compute-0 ceph-mgr[75975]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.jkqrlp on compute-0
Nov 24 19:49:23 compute-0 sudo[102101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:23 compute-0 sudo[102101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:23 compute-0 sudo[102101]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:23 compute-0 sudo[102126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:23 compute-0 sudo[102126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:23 compute-0 sudo[102126]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:23 compute-0 sudo[102151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:23 compute-0 sudo[102151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:23 compute-0 sudo[102151]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:23 compute-0 sudo[102186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
Nov 24 19:49:23 compute-0 sudo[102186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
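
The sequence starting at 19:49:23 is cephadm's deploy handshake for the new MDS as recorded here: the mgr connects to the host as ceph-admin, probes the sudo path with /bin/true and `which python3`, then runs the host-local, hash-suffixed cephadm copy under /var/lib/ceph/05e060a3-.../ with `_orch deploy`, which writes the unit files and starts the daemon via systemd (the "Starting Ceph mds.cephfs.compute-0.jkqrlp" unit below). The keyring it ships was minted a second earlier by the `auth get-or-create` command with the mon/osd/mds caps shown above, alongside a minimal ceph.conf from `config generate-minimal-conf`.
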
Nov 24 19:49:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 19:49:23 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975560486' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:49:23 compute-0 infallible_keldysh[102086]: 
Nov 24 19:49:23 compute-0 infallible_keldysh[102086]: {"fsid":"05e060a3-406b-57f0-89d2-ec35f5b09305","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":161,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1764013719,"num_in_osds":3,"osd_in_since":1764013690,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":10},{"state_name":"creating+peering","count":1}],"num_pgs":11,"num_pools":11,"num_objects":16,"data_bytes":460848,"bytes_used":84000768,"bytes_avail":64327925760,"bytes_total":64411926528,"inactive_pgs_ratio":0.090909093618392944,"read_bytes_sec":255,"write_bytes_sec":511,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-11-24T19:49:22.291173+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14265":{"start_epoch":3,"start_stamp":"2025-11-24T19:49:22.208755+0000","gid":14265,"addr":"192.168.122.100:0/2147305791","metadata":{"arch":"x86_64","ceph_release":"reef","ceph_version":"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)","ceph_version_short":"18.2.7","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.dgkdrf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"b206323f-01b0-4233-8e7f-bdf8f52b5f3b","zone_name":"default","zonegroup_id":"cda578d4-d840-44ee-8dd6-1318c3b8d738","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
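
The infallible_keldysh output is the `ceph -s -f json` result requested by the ansible task above: at this instant the cluster is HEALTH_ERR only because the cephfs filesystem exists with no MDS up yet (MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX — the daemon that clears this is deployed seconds later), plus one PG still creating+peering. A short sketch, assuming the status JSON is saved to a file, of pulling out the fields a deployment playbook would typically gate on:

    import json

    # Hypothetical input: the `ceph -s -f json` payload from the log, saved as status.json
    with open("status.json") as f:
        status = json.load(f)

    print("health:", status["health"]["status"])  # HEALTH_ERR at this instant
    for name, check in status["health"]["checks"].items():
        print(f'  {name} [{check["severity"]}]: {check["summary"]["message"]}')

    pgmap = status["pgmap"]
    clean = sum(s["count"] for s in pgmap["pgs_by_state"]
                if s["state_name"] == "active+clean")
    print(f'pgs: {clean}/{pgmap["num_pgs"]} active+clean')  # 10/11 in the log
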
Nov 24 19:49:23 compute-0 systemd[1]: libpod-3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784.scope: Deactivated successfully.
Nov 24 19:49:23 compute-0 podman[102065]: 2025-11-24 19:49:23.538622567 +0000 UTC m=+0.786934083 container died 3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784 (image=quay.io/ceph/ceph:v18, name=infallible_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 19:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-98dedebfe76a8f3ce4bd2fb4cfd4675efb596675394fa0ea9ba1c35c31f4091b-merged.mount: Deactivated successfully.
Nov 24 19:49:23 compute-0 podman[102065]: 2025-11-24 19:49:23.628231234 +0000 UTC m=+0.876542700 container remove 3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784 (image=quay.io/ceph/ceph:v18, name=infallible_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 19:49:23 compute-0 systemd[1]: libpod-conmon-3625e799cb21214d62bda23193feec53f750031e8fda9ad0b53f6a9e29464784.scope: Deactivated successfully.
Nov 24 19:49:23 compute-0 sudo[102044]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:23 compute-0 podman[102272]: 2025-11-24 19:49:23.817115232 +0000 UTC m=+0.085553982 container create 3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bartik, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:49:23 compute-0 podman[102272]: 2025-11-24 19:49:23.772207725 +0000 UTC m=+0.040646515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:23 compute-0 systemd[1]: Started libpod-conmon-3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf.scope.
Nov 24 19:49:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:23 compute-0 podman[102272]: 2025-11-24 19:49:23.969225286 +0000 UTC m=+0.237664076 container init 3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 19:49:23 compute-0 podman[102272]: 2025-11-24 19:49:23.980280673 +0000 UTC m=+0.248719413 container start 3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 19:49:23 compute-0 ceph-mon[75677]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 19:49:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jkqrlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 19:49:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jkqrlp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 19:49:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:23 compute-0 ceph-mon[75677]: Deploying daemon mds.cephfs.compute-0.jkqrlp on compute-0
Nov 24 19:49:23 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1975560486' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 19:49:23 compute-0 strange_bartik[102288]: 167 167
Nov 24 19:49:23 compute-0 systemd[1]: libpod-3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf.scope: Deactivated successfully.
Nov 24 19:49:23 compute-0 podman[102272]: 2025-11-24 19:49:23.990513663 +0000 UTC m=+0.258952453 container attach 3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:49:23 compute-0 podman[102272]: 2025-11-24 19:49:23.991005739 +0000 UTC m=+0.259444489 container died 3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-906bb72f93c2eb2db6865897cd1dc238d6a3f215d269c4edeebe6041125f2347-merged.mount: Deactivated successfully.
Nov 24 19:49:24 compute-0 podman[102272]: 2025-11-24 19:49:24.087178631 +0000 UTC m=+0.355617371 container remove 3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:49:24 compute-0 systemd[1]: libpod-conmon-3d5ffc8ef62b9879688a2c131647de53f8dc0a40365dac2b8a0d09662f80febf.scope: Deactivated successfully.
Nov 24 19:49:24 compute-0 systemd[1]: Reloading.
Nov 24 19:49:24 compute-0 systemd-rc-local-generator[102334]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:49:24 compute-0 systemd-sysv-generator[102340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:49:24
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Some PGs (0.090909) are inactive; try again later
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 1 creating+peering, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 381 B/s wr, 1 op/s
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
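
The pg_autoscaler figures above are internally consistent with pg target = usage fraction × bias × (OSD count × target PGs per OSD): with this cluster's 3 OSDs and a target of 100 PGs per OSD the multiplier is 300, which reproduces every non-zero "pg target" printed (this is an inference from the logged numbers, not something the log states); the result is then quantized toward a power of two, with per-pool floors accounting for the "quantized to 32"/"quantized to 16" values even where the raw target is 0. A quick check against the logged values:

    # Inference from the logged figures: pg_target = usage * bias * (3 OSDs * 100 PGs/OSD)
    cases = [
        (7.185749983720779e-06, 1.0, 0.0021557249951162337),   # pool '.mgr'
        (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),  # pool '.rgw.root'
        (1.2718141564107572e-07, 1.0, 3.815442469232272e-05),  # pool 'default.rgw.log'
    ]
    for usage, bias, logged in cases:
        assert abs(usage * bias * 300 - logged) < 1e-12
    print("all logged pg targets match usage * bias * 300")
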
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:49:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:49:24 compute-0 ceph-mgr[75975]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Nov 24 19:49:24 compute-0 systemd[1]: Reloading.
Nov 24 19:49:24 compute-0 systemd-rc-local-generator[102400]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:49:24 compute-0 systemd-sysv-generator[102403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:49:24 compute-0 sudo[102372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xasqyjjnaehqdpxrmsfvjlnpkhjzjogv ; /usr/bin/python3'
Nov 24 19:49:24 compute-0 sudo[102372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:24 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.jkqrlp for 05e060a3-406b-57f0-89d2-ec35f5b09305...
Nov 24 19:49:24 compute-0 python3[102410]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:24 compute-0 podman[102426]: 2025-11-24 19:49:24.971332629 +0000 UTC m=+0.054029504 container create 84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265 (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:49:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 24 19:49:24 compute-0 ceph-mon[75677]: pgmap v89: 11 pgs: 1 creating+peering, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 381 B/s wr, 1 op/s
Nov 24 19:49:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 24 19:49:25 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev a918086a-4880-44d0-a878-0bcf7dca8980 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:25 compute-0 systemd[1]: Started libpod-conmon-84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265.scope.
Nov 24 19:49:25 compute-0 podman[102426]: 2025-11-24 19:49:24.952308923 +0000 UTC m=+0.035005758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b7a27d1f61909b09c9001f5e6f945e3cb990aea22c4984eaa1b84ee910953d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b7a27d1f61909b09c9001f5e6f945e3cb990aea22c4984eaa1b84ee910953d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:25 compute-0 podman[102426]: 2025-11-24 19:49:25.076207644 +0000 UTC m=+0.158904489 container init 84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265 (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 24 19:49:25 compute-0 podman[102426]: 2025-11-24 19:49:25.088689295 +0000 UTC m=+0.171386160 container start 84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265 (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 19:49:25 compute-0 podman[102426]: 2025-11-24 19:49:25.094220088 +0000 UTC m=+0.176916933 container attach 84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265 (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:49:25 compute-0 podman[102478]: 2025-11-24 19:49:25.155461037 +0000 UTC m=+0.069593142 container create e81b49c023633e7124b73c0bc5222285d91eff27c580b4e1145e992f5a99cf19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mds-cephfs-compute-0-jkqrlp, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:25 compute-0 podman[102478]: 2025-11-24 19:49:25.123264289 +0000 UTC m=+0.037396444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d115103a8359bdf6d38084be0709433b099bb38180fa4d34b398a3d2fcbe4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d115103a8359bdf6d38084be0709433b099bb38180fa4d34b398a3d2fcbe4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d115103a8359bdf6d38084be0709433b099bb38180fa4d34b398a3d2fcbe4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d115103a8359bdf6d38084be0709433b099bb38180fa4d34b398a3d2fcbe4/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.jkqrlp supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:25 compute-0 podman[102478]: 2025-11-24 19:49:25.251668031 +0000 UTC m=+0.165800196 container init e81b49c023633e7124b73c0bc5222285d91eff27c580b4e1145e992f5a99cf19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mds-cephfs-compute-0-jkqrlp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:25 compute-0 podman[102478]: 2025-11-24 19:49:25.258960689 +0000 UTC m=+0.173092794 container start e81b49c023633e7124b73c0bc5222285d91eff27c580b4e1145e992f5a99cf19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mds-cephfs-compute-0-jkqrlp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:25 compute-0 bash[102478]: e81b49c023633e7124b73c0bc5222285d91eff27c580b4e1145e992f5a99cf19
Nov 24 19:49:25 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.jkqrlp for 05e060a3-406b-57f0-89d2-ec35f5b09305.
Nov 24 19:49:25 compute-0 ceph-mds[102499]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 19:49:25 compute-0 ceph-mds[102499]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 24 19:49:25 compute-0 ceph-mds[102499]: main not setting numa affinity
Nov 24 19:49:25 compute-0 ceph-mds[102499]: pidfile_write: ignore empty --pid-file
Nov 24 19:49:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mds-cephfs-compute-0-jkqrlp[102495]: starting mds.cephfs.compute-0.jkqrlp at 
Nov 24 19:49:25 compute-0 sudo[102186]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp Updating MDS map to version 2 from mon.0
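The block above is cephadm deploying the MDS as a podman container under a systemd unit: the image is pulled by digest, the container is created, initialized, and started, the daemon drops privileges to ceph:ceph (167:167), and mds.cephfs.compute-0.jkqrlp registers with mon.0 as it picks up MDS map version 2. A minimal way to confirm the deployment from the host, assuming the admin keyring under /etc/ceph is usable (as the later podman invocations in this log suggest), would be:

    ceph orch ps --daemon-type mds   # cephadm's view of the MDS daemon
    ceph fs status cephfs            # ranks and standbys for the filesystem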
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:25 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 344ef792-073c-4ff2-af0a-d54e884aac56 (Updating mds.cephfs deployment (+1 -> 1))
Nov 24 19:49:25 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 344ef792-073c-4ff2-af0a-d54e884aac56 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:25 compute-0 sudo[102518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:25 compute-0 sudo[102518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:25 compute-0 sudo[102518]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 sudo[102562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:49:25 compute-0 sudo[102562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:25 compute-0 sudo[102562]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 sudo[102587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:25 compute-0 sudo[102587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:25 compute-0 sudo[102587]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 19:49:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598347216' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:49:25 compute-0 strange_matsumoto[102464]: 
Nov 24 19:49:25 compute-0 systemd[1]: libpod-84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265.scope: Deactivated successfully.
Nov 24 19:49:25 compute-0 strange_matsumoto[102464]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dgkdrf","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
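The dump above is a single JSON array of option records, each carrying section, name, value, level, a can_update_at_runtime flag, and an optional mask. A sketch for pulling one record back out of the same dump, assuming jq is installed on the host:

    ceph config dump --format json \
      | jq '.[] | select(.section == "global" and .name == "public_network")'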
Nov 24 19:49:25 compute-0 podman[102426]: 2025-11-24 19:49:25.680428253 +0000 UTC m=+0.763125128 container died 84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265 (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 19:49:25 compute-0 sudo[102612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b7a27d1f61909b09c9001f5e6f945e3cb990aea22c4984eaa1b84ee910953d-merged.mount: Deactivated successfully.
Nov 24 19:49:25 compute-0 sudo[102612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:25 compute-0 sudo[102612]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 podman[102426]: 2025-11-24 19:49:25.741827546 +0000 UTC m=+0.824524421 container remove 84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265 (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:49:25 compute-0 systemd[1]: libpod-conmon-84277c718692918f964d22cf3598e7d376f52579b48d95d4e814274b3d548265.scope: Deactivated successfully.
Nov 24 19:49:25 compute-0 sudo[102372]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 sudo[102649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:25 compute-0 sudo[102649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:25 compute-0 sudo[102649]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:25 compute-0 sudo[102677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:49:25 compute-0 sudo[102677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 24 19:49:26 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 01bc71d6-7957-4b3a-a1f3-2c343caafd8d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e3 new map
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T19:49:03.407355+0000
                                           modified        2025-11-24T19:49:03.407384+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jkqrlp{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] compat {c=[1],r=[1],i=[7ff]}]
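This print_map is the monitor's FSMap at epoch 3: the filesystem holds no ranks yet (in and up are empty), max_mds is 1, and the new daemon appears only under "Standby daemons". The same map can be read back at any time:

    ceph fs dump    # full FSMap, equivalent to the print_map block above
    ceph mds stat   # one-line fsmap summary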
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:26 compute-0 ceph-mon[75677]: osdmap e39: 3 total, 3 up, 3 in
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp Updating MDS map to version 3 from mon.0
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1598347216' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp Monitors have assigned me to become a standby.
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] up:boot
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] as mds.0
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.jkqrlp assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jkqrlp"} v 0) v1
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jkqrlp"}]: dispatch
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e3 all = 0
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e4 new map
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T19:49:03.407355+0000
                                           modified        2025-11-24T19:49:26.031247+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14271}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.jkqrlp{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp Updating MDS map to version 4 from mon.0
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.jkqrlp=up:creating}
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x1
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x100
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x600
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x601
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x602
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x603
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x604
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x605
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x606
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x607
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x608
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.cache creating system inode with ino:0x609
Nov 24 19:49:26 compute-0 ceph-mds[102499]: mds.0.4 creating_done
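The ino values follow the MDS's fixed inode layout: 0x1 is the filesystem root, 0x100 is rank 0's private mdsdir, and 0x600 through 0x609 are rank 0's ten stray directories (0x600 + 10*rank + i), used for unlinked-but-still-open files. A trivial shell sketch reproducing the stray inode numbers logged above:

    # stray dir inodes for rank 0: 0x600 + i, for i in 0..9
    for i in $(seq 0 9); do printf '0x%x\n' $((0x600 + i)); done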
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.jkqrlp is now active in filesystem cephfs as rank 0
Nov 24 19:49:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 6.0 KiB/s wr, 205 op/s
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
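The interleaved "osd pool set" calls are the mgr's pg_autoscaler at work: it first raises pg_num (the target) on volumes, backups, vms, and images from 1 to 32, then commits the split stepwise via pg_num_actual. To watch the same convergence interactively:

    ceph osd pool autoscale-status    # target vs. actual PG counts per pool
    ceph osd pool get volumes pg_num  # the committed value for one pool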
Nov 24 19:49:26 compute-0 podman[102783]: 2025-11-24 19:49:26.510473385 +0000 UTC m=+0.065597025 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:49:26 compute-0 sudo[102827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kazmfegveyovppmxikplntnbtktobdmq ; /usr/bin/python3'
Nov 24 19:49:26 compute-0 sudo[102827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:26 compute-0 podman[102783]: 2025-11-24 19:49:26.615063921 +0000 UTC m=+0.170187531 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:49:26 compute-0 python3[102829]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:26 compute-0 podman[102858]: 2025-11-24 19:49:26.783982764 +0000 UTC m=+0.038278641 container create ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d (image=quay.io/ceph/ceph:v18, name=intelligent_davinci, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:49:26 compute-0 systemd[1]: Started libpod-conmon-ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d.scope.
Nov 24 19:49:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2fce5c8b9147da3a809728a22a0faf858c1a1cda295a96b7c45dd83032549/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe2fce5c8b9147da3a809728a22a0faf858c1a1cda295a96b7c45dd83032549/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:26 compute-0 podman[102858]: 2025-11-24 19:49:26.861994027 +0000 UTC m=+0.116289994 container init ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d (image=quay.io/ceph/ceph:v18, name=intelligent_davinci, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:49:26 compute-0 podman[102858]: 2025-11-24 19:49:26.767697313 +0000 UTC m=+0.021993240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:26 compute-0 podman[102858]: 2025-11-24 19:49:26.873249319 +0000 UTC m=+0.127545226 container start ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d (image=quay.io/ceph/ceph:v18, name=intelligent_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:26 compute-0 podman[102858]: 2025-11-24 19:49:26.878670129 +0000 UTC m=+0.132966036 container attach ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d (image=quay.io/ceph/ceph:v18, name=intelligent_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 24 19:49:27 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 2aa42362-2f5c-4dd6-800a-6cbbf3683a33 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 ceph-mon[75677]: osdmap e40: 3 total, 3 up, 3 in
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mds.? [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] up:boot
Nov 24 19:49:27 compute-0 ceph-mon[75677]: daemon mds.cephfs.compute-0.jkqrlp assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 19:49:27 compute-0 ceph-mon[75677]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 19:49:27 compute-0 ceph-mon[75677]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 19:49:27 compute-0 ceph-mon[75677]: Cluster is now healthy
Nov 24 19:49:27 compute-0 ceph-mon[75677]: fsmap cephfs:0 1 up:standby
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jkqrlp"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: fsmap cephfs:1 {0=cephfs.compute-0.jkqrlp=up:creating}
Nov 24 19:49:27 compute-0 ceph-mon[75677]: daemon mds.cephfs.compute-0.jkqrlp is now active in filesystem cephfs as rank 0
Nov 24 19:49:27 compute-0 ceph-mon[75677]: pgmap v92: 11 pgs: 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 6.0 KiB/s wr, 205 op/s
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 19:49:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:27 compute-0 ceph-mon[75677]: osdmap e41: 3 total, 3 up, 3 in
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e5 new map
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T19:49:03.407355+0000
                                           modified        2025-11-24T19:49:27.037067+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14271}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.jkqrlp{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 24 19:49:27 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp Updating MDS map to version 5 from mon.0
Nov 24 19:49:27 compute-0 ceph-mds[102499]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 24 19:49:27 compute-0 ceph-mds[102499]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 24 19:49:27 compute-0 ceph-mds[102499]: mds.0.4 recovery_done -- successful recovery!
Nov 24 19:49:27 compute-0 ceph-mds[102499]: mds.0.4 active_start
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] up:active
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.jkqrlp=up:active}
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342736126' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 19:49:27 compute-0 intelligent_davinci[102884]: mimic
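intelligent_davinci is the short-lived container started by the ansible task above; its entrypoint ran `ceph osd get-require-min-compat-client` and printed "mimic", meaning the OSDMap still admits clients as old as Mimic. If a deployment later needs features gated on newer clients, the floor is raised with the matching set command, release name chosen per the feature's requirement:

    ceph osd set-require-min-compat-client octopus   # example release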
Nov 24 19:49:27 compute-0 systemd[1]: libpod-ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d.scope: Deactivated successfully.
Nov 24 19:49:27 compute-0 podman[102858]: 2025-11-24 19:49:27.46637414 +0000 UTC m=+0.720670057 container died ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d (image=quay.io/ceph/ceph:v18, name=intelligent_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:49:27 compute-0 sudo[102677]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe2fce5c8b9147da3a809728a22a0faf858c1a1cda295a96b7c45dd83032549-merged.mount: Deactivated successfully.
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:49:27 compute-0 podman[102858]: 2025-11-24 19:49:27.516219012 +0000 UTC m=+0.770514889 container remove ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d (image=quay.io/ceph/ceph:v18, name=intelligent_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 41f23b7a-fe46-4767-828b-7ac542fcb6eb does not exist
Nov 24 19:49:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7200a1bf-ecbd-4d93-b10f-7e9c23606452 does not exist
Nov 24 19:49:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 36e95b7e-8818-4ceb-bd6b-4c0222e74306 does not exist
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:27 compute-0 systemd[1]: libpod-conmon-ead6e51700a623d88f4317e3e2d104627a770d5aa7e38049089ec30c9482101d.scope: Deactivated successfully.
Nov 24 19:49:27 compute-0 sudo[102827]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:27 compute-0 sudo[103023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:27 compute-0 sudo[103023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:27 compute-0 sudo[103023]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:27 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=15/16 n=0 ec=14/14 lis/c=15/15 les/c/f=16/16/0 sis=41 pruub=11.953764915s) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active pruub 70.776336670s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:27 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=15/16 n=0 ec=14/14 lis/c=15/15 les/c/f=16/16/0 sis=41 pruub=11.953764915s) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown pruub 70.776336670s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:27 compute-0 sudo[103048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:27 compute-0 sudo[103048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:27 compute-0 sudo[103048]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:27 compute-0 sudo[103073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:27 compute-0 sudo[103073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:27 compute-0 sudo[103073]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:27 compute-0 sudo[103098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
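This cephadm call is the OSD deployment step: it feeds three pre-created logical volumes to `ceph-volume lvm batch` with --no-systemd, since cephadm manages the units itself rather than letting ceph-volume enable them. ceph-volume supports a dry run, so the same batch can be previewed before anything is created; a sketch, assuming the cephadm binary is on PATH:

    cephadm shell -- ceph-volume lvm batch --report \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2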
Nov 24 19:49:27 compute-0 sudo[103098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:27 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=8.828464508s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active pruub 62.261283875s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:27 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=8.828464508s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown pruub 62.261283875s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 24 19:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 24 19:49:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 24 19:49:28 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev a8e40ed0-b161-4d0b-a2b7-3d8570956d66 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 19:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 24 19:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=15/16 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: mds.? [v2:192.168.122.100:6814/1955264424,v1:192.168.122.100:6815/1955264424] up:active
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: fsmap cephfs:1 {0=cephfs.compute-0.jkqrlp=up:active}
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2342736126' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: osdmap e42: 3 total, 3 up, 3 in
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=41/42 n=0 ec=14/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=15/15 les/c/f=16/16/0 sis=41) [1] r=0 lpr=41 pi=[15,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=41/42 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.0 KiB/s wr, 274 op/s
Nov 24 19:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.296780825 +0000 UTC m=+0.063591924 container create b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:28 compute-0 systemd[1]: Started libpod-conmon-b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f.scope.
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.27110472 +0000 UTC m=+0.037915859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:28 compute-0 sudo[103204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qguvzfkgqscgmsgneminmlsybxfjgcjg ; /usr/bin/python3'
Nov 24 19:49:28 compute-0 sudo[103204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.39368677 +0000 UTC m=+0.160497869 container init b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.401622869 +0000 UTC m=+0.168433958 container start b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.40518023 +0000 UTC m=+0.171991329 container attach b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:28 compute-0 inspiring_curran[103203]: 167 167
Nov 24 19:49:28 compute-0 systemd[1]: libpod-b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f.scope: Deactivated successfully.
Nov 24 19:49:28 compute-0 conmon[103203]: conmon b0912ffd85a8741a4703 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f.scope/container/memory.events
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.408774783 +0000 UTC m=+0.175585872 container died b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-53302697cd703c79dd97898f2abc3fe72280eaef2234591e16cba0625565de4b-merged.mount: Deactivated successfully.
Nov 24 19:49:28 compute-0 podman[103163]: 2025-11-24 19:49:28.453910267 +0000 UTC m=+0.220721356 container remove b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 19:49:28 compute-0 systemd[1]: libpod-conmon-b0912ffd85a8741a4703a924788b00fd2a5f1a8cf60d9627ea3e43fc62dd871f.scope: Deactivated successfully.
Nov 24 19:49:28 compute-0 python3[103208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:49:28 compute-0 podman[103224]: 2025-11-24 19:49:28.623413127 +0000 UTC m=+0.059107263 container create 31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad (image=quay.io/ceph/ceph:v18, name=sharp_feynman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 19:49:28 compute-0 systemd[1]: Started libpod-conmon-31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad.scope.
Nov 24 19:49:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 24 19:49:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 24 19:49:28 compute-0 podman[103242]: 2025-11-24 19:49:28.691737208 +0000 UTC m=+0.066198456 container create cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:28 compute-0 podman[103224]: 2025-11-24 19:49:28.602298996 +0000 UTC m=+0.037993112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d073e5dde4da98b442a29e9463ed2b0fb5035e8a7b320261c65c7de0573f2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d073e5dde4da98b442a29e9463ed2b0fb5035e8a7b320261c65c7de0573f2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 systemd[1]: Started libpod-conmon-cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b.scope.
Nov 24 19:49:28 compute-0 podman[103224]: 2025-11-24 19:49:28.733188905 +0000 UTC m=+0.168883051 container init 31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad (image=quay.io/ceph/ceph:v18, name=sharp_feynman, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:28 compute-0 podman[103224]: 2025-11-24 19:49:28.743733266 +0000 UTC m=+0.179427392 container start 31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad (image=quay.io/ceph/ceph:v18, name=sharp_feynman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:49:28 compute-0 podman[103224]: 2025-11-24 19:49:28.749036912 +0000 UTC m=+0.184731048 container attach 31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad (image=quay.io/ceph/ceph:v18, name=sharp_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 19:49:28 compute-0 podman[103242]: 2025-11-24 19:49:28.662361167 +0000 UTC m=+0.036822475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3021fc80e3325c9f83776828fc6ac797ca033ab2da9027f2bb083918945bfd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3021fc80e3325c9f83776828fc6ac797ca033ab2da9027f2bb083918945bfd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3021fc80e3325c9f83776828fc6ac797ca033ab2da9027f2bb083918945bfd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3021fc80e3325c9f83776828fc6ac797ca033ab2da9027f2bb083918945bfd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3021fc80e3325c9f83776828fc6ac797ca033ab2da9027f2bb083918945bfd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:28 compute-0 podman[103242]: 2025-11-24 19:49:28.808155624 +0000 UTC m=+0.182616912 container init cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lovelace, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 19:49:28 compute-0 podman[103242]: 2025-11-24 19:49:28.819030685 +0000 UTC m=+0.193491913 container start cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 19:49:28 compute-0 podman[103242]: 2025-11-24 19:49:28.822971928 +0000 UTC m=+0.197433176 container attach cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 24 19:49:29 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev e41f104b-d791-4c26-a03d-afebed3874a1 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 24 19:49:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:29 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=15.700211525s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 70.263092041s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:29 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=15.700211525s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 70.263092041s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:29 compute-0 ceph-mon[75677]: pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.0 KiB/s wr, 274 op/s
Nov 24 19:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:29 compute-0 ceph-mon[75677]: 3.1 deep-scrub starts
Nov 24 19:49:29 compute-0 ceph-mon[75677]: 3.1 deep-scrub ok
Nov 24 19:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 24 19:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:29 compute-0 ceph-mon[75677]: osdmap e43: 3 total, 3 up, 3 in
Nov 24 19:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610162540' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 19:49:29 compute-0 sharp_feynman[103256]: 
Nov 24 19:49:29 compute-0 sharp_feynman[103256]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 24 19:49:29 compute-0 systemd[1]: libpod-31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad.scope: Deactivated successfully.
Nov 24 19:49:29 compute-0 podman[103224]: 2025-11-24 19:49:29.35354627 +0000 UTC m=+0.789240406 container died 31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad (image=quay.io/ceph/ceph:v18, name=sharp_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:29 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 5 completed events
Nov 24 19:49:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-61d073e5dde4da98b442a29e9463ed2b0fb5035e8a7b320261c65c7de0573f2a-merged.mount: Deactivated successfully.
Nov 24 19:49:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:29 compute-0 podman[103224]: 2025-11-24 19:49:29.43464286 +0000 UTC m=+0.870336986 container remove 31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad (image=quay.io/ceph/ceph:v18, name=sharp_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:29 compute-0 systemd[1]: libpod-conmon-31989facc360ad39c21c117ebeccad256ec9f502655cb743dae4a61bdf7180ad.scope: Deactivated successfully.
Nov 24 19:49:29 compute-0 sudo[103204]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:29 compute-0 heuristic_lovelace[103262]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:49:29 compute-0 heuristic_lovelace[103262]: --> relative data size: 1.0
Nov 24 19:49:29 compute-0 heuristic_lovelace[103262]: --> All data devices are unavailable
Nov 24 19:49:30 compute-0 systemd[1]: libpod-cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b.scope: Deactivated successfully.
Nov 24 19:49:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 24 19:49:30 compute-0 systemd[1]: libpod-cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b.scope: Consumed 1.160s CPU time.
Nov 24 19:49:30 compute-0 podman[103242]: 2025-11-24 19:49:30.026824281 +0000 UTC m=+1.401285549 container died cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 24 19:49:30 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 24 19:49:30 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 175f5a92-099a-4139-95b2-c333745d4e96 (PG autoscaler increasing pool 7 PGs from 1 to 32)
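
These progress events track the pg_autoscaler raising pool pg_num from 1 toward the values seen in the surrounding `osd pool set` commands (16 and 32). The autoscaler snaps its computed target to a power of two; a toy sketch of that rounding step only, where the raw-target formula and its inputs are simplified assumptions rather than values read from this cluster:

```python
# Toy illustration of pg_autoscaler target rounding: snap to the nearest
# power of two. Inputs below are assumptions for illustration only.
def nearest_power_of_two(n: float) -> int:
    if n < 1:
        return 1
    lo = 1 << (int(n).bit_length() - 1)  # largest power of two <= n
    hi = lo * 2
    return lo if n - lo < hi - n else hi

# e.g. a PG budget of ~100 per OSD, 3 OSDs, split across ~9 pools of
# size 1 in this toy setup: raw target ~33.3, snapped to 32, matching
# the "from 1 to 32" progress events above.
raw_target = 3 * 100 / 9 / 1
print(nearest_power_of_two(raw_target))  # -> 32
```
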
Nov 24 19:49:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43 pruub=10.593638420s) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active pruub 77.312522888s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43 pruub=10.593638420s) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown pruub 77.312522888s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
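
The two bursts above are osd.2 walking every PG in pool 5 through Start -> Primary and then Started/Primary/Active, apparently after the pg_num split took effect at epochs 43/44 (note ec=43/18 on the new PGs). When a run like this needs summarising rather than reading line by line, a small filter helps; a minimal sketch, assuming lines in the format shown are piped in on stdin (e.g. from journalctl):

```python
import re
import sys
from collections import Counter

# Count peering events per pool from lines like:
#   ... pg[5.1e( empty ... )] ... state<Start>: transitioning to Primary
PG_RE = re.compile(r"pg\[(\d+)\.([0-9a-f]+)\(")

events = Counter()
for line in sys.stdin:
    m = PG_RE.search(line)
    if not m:
        continue
    pool = int(m.group(1))
    if "transitioning to Primary" in line:
        events[(pool, "primary")] += 1
    elif "Activating complete" in line:
        events[(pool, "activated")] += 1

for (pool, kind), n in sorted(events.items()):
    print(f"pool {pool}: {n} x {kind}")
```

Fed this section of the journal, it reports the pool 4 and pool 5 bursts as per-pool totals instead of sixty-odd near-identical lines.
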
Nov 24 19:49:30 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/610162540' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 19:49:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:30 compute-0 ceph-mon[75677]: osdmap e44: 3 total, 3 up, 3 in
Nov 24 19:49:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3021fc80e3325c9f83776828fc6ac797ca033ab2da9027f2bb083918945bfd8-merged.mount: Deactivated successfully.
Nov 24 19:49:30 compute-0 podman[103242]: 2025-11-24 19:49:30.198146948 +0000 UTC m=+1.572608206 container remove cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:30 compute-0 systemd[1]: libpod-conmon-cbe9b004a6416ab6b2c0afbe03eadf1e6abebffdd7e7a37c8387cd0f76315c4b.scope: Deactivated successfully.
Nov 24 19:49:30 compute-0 sudo[103098]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
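
pgmap v98 above shows 124 of 135 PGs still `unknown`, most likely because the freshly split PGs have not yet reported in; the peering bursts at epochs 44/45 flip them to active+clean. A quick way to pull the state counts out of such a summary line; a minimal sketch assuming the text is exactly as logged:

```python
import re

line = ("pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, "
        "80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s")

total = int(re.search(r"(\d+) pgs:", line).group(1))
# PG states are lowercase tokens (possibly joined by '+') followed by , or ;
states = {state: int(count)
          for count, state in re.findall(r"(\d+) ([a-z+]+)[,;]", line)}

assert sum(states.values()) == total  # 124 + 11 == 135
print(states)  # {'unknown': 124, 'active+clean': 11}
```
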
Nov 24 19:49:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 24 19:49:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 24 19:49:30 compute-0 sudo[103339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:30 compute-0 sudo[103339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:30 compute-0 sudo[103339]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:30 compute-0 sudo[103364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:30 compute-0 sudo[103364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:30 compute-0 sudo[103364]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:30 compute-0 sudo[103389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:30 compute-0 sudo[103389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:30 compute-0 sudo[103389]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:30 compute-0 sudo[103414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:49:30 compute-0 sudo[103414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
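
The sudo entry above is cephadm's usual inventory pattern: run the fsid-specific cephadm copy under /var/lib/ceph/05e060a3-.../ to execute `ceph-volume ... lvm list --format json` inside the ceph container. A sketch of consuming that JSON on the host, assuming a plain `cephadm` binary on PATH (the log uses the versioned copy instead) and the usual lvm-list output shape of {osd_id: [logical volumes]}:

```python
import json
import subprocess

# Mirrors the command in the sudo log above: ask ceph-volume for the LVM
# inventory as JSON. "cephadm" on PATH is an assumption here.
out = subprocess.run(
    ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

inventory = json.loads(out)
# Top-level keys are OSD ids; each maps to a list of LVs backing that OSD.
for osd_id, lvs in sorted(inventory.items()):
    devs = ", ".join(lv.get("lv_path", "?") for lv in lvs)
    print(f"osd.{osd_id}: {devs}")
```

That matches the earlier ceph-volume output in this section ("passed data devices: 0 physical, 3 LVM"): three LVM-backed OSDs and no free physical devices.
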
Nov 24 19:49:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 24 19:49:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 24 19:49:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 24 19:49:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 24 19:49:31 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 4f4f87f0-db72-42db-84e5-63ee23cda0d9 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 19:49:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:31 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=15.712329865s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 77.916107178s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[6.0( v 44'39 (0'0,44'39] local-lis/les=19/20 n=22 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=13.681868553s) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 40'38 mlcod 40'38 active pruub 81.408149719s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:31 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=15.712329865s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 77.916107178s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[6.0( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=13.681868553s) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 40'38 mlcod 0'0 unknown pruub 81.408149719s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.057629553 +0000 UTC m=+0.063559113 container create 1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=43/45 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:31 compute-0 systemd[77249]: Starting Mark boot as successful...
Nov 24 19:49:31 compute-0 systemd[77249]: Finished Mark boot as successful.
Nov 24 19:49:31 compute-0 systemd[1]: Started libpod-conmon-1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451.scope.
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.023419561 +0000 UTC m=+0.029349141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:31 compute-0 ceph-mon[75677]: pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 24 19:49:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 24 19:49:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 24 19:49:31 compute-0 ceph-mon[75677]: osdmap e45: 3 total, 3 up, 3 in
Nov 24 19:49:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.14501784 +0000 UTC m=+0.150947450 container init 1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.157349317 +0000 UTC m=+0.163278877 container start 1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.161229558 +0000 UTC m=+0.167159118 container attach 1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:31 compute-0 elated_banach[103498]: 167 167
Nov 24 19:49:31 compute-0 systemd[1]: libpod-1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451.scope: Deactivated successfully.
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.165386839 +0000 UTC m=+0.171316399 container died 1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b4d641869d1e8a9ef80e7df07a28e6741459934947a304dbef1508032254ae-merged.mount: Deactivated successfully.
Nov 24 19:49:31 compute-0 podman[103481]: 2025-11-24 19:49:31.211871785 +0000 UTC m=+0.217801345 container remove 1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_banach, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:31 compute-0 systemd[1]: libpod-conmon-1ab5fbf705a85e02241a61841e24bd854bc40193137a98647b7b4235cfd02451.scope: Deactivated successfully.
Nov 24 19:49:31 compute-0 podman[103523]: 2025-11-24 19:49:31.442253761 +0000 UTC m=+0.070497309 container create e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:31 compute-0 systemd[1]: Started libpod-conmon-e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd.scope.
Nov 24 19:49:31 compute-0 podman[103523]: 2025-11-24 19:49:31.41410976 +0000 UTC m=+0.042353348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a8c24e73cda7dd245951a4d13b69b88aec2d73de3b75582289c74457c6c4425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a8c24e73cda7dd245951a4d13b69b88aec2d73de3b75582289c74457c6c4425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a8c24e73cda7dd245951a4d13b69b88aec2d73de3b75582289c74457c6c4425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a8c24e73cda7dd245951a4d13b69b88aec2d73de3b75582289c74457c6c4425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
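
The four xfs notices above fire because these overlay-backed mounts carry 32-bit (non-bigtime) inode timestamps, which run out at epoch second 0x7fffffff. The cutoff the kernel quotes is easy to verify:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit epoch second, the limit the
# kernel prints for non-bigtime xfs timestamps.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```
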
Nov 24 19:49:31 compute-0 podman[103523]: 2025-11-24 19:49:31.568316011 +0000 UTC m=+0.196559599 container init e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:31 compute-0 podman[103523]: 2025-11-24 19:49:31.579879674 +0000 UTC m=+0.208123212 container start e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:31 compute-0 podman[103523]: 2025-11-24 19:49:31.583765555 +0000 UTC m=+0.212009103 container attach e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:49:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 24 19:49:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 24 19:49:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 24 19:49:32 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev e6d1fedb-57d6-44ac-a832-8fb558ee1e7f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 19:49:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
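
The handle_command/audit pairs above show the mgr's PG autoscaler dispatching "osd pool set" mon commands as JSON. A minimal sketch, assuming the python3-rados bindings plus a readable /etc/ceph/ceph.conf and admin keyring on the host, of issuing the same command by hand:

    import json

    import rados  # python3-rados, shipped alongside ceph-common

    # Issue the same mon command the mgr dispatches above. The conffile
    # path is an assumption; adjust for the local deployment.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({"prefix": "osd pool set",
                      "pool": "default.rgw.control",
                      "var": "pg_num",
                      "val": "32"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outs)
    cluster.shutdown()
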
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.9( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.4( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.a( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.8( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.5( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.7( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.b( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.6( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.1( v 44'39 (0'0,44'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.3( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.2( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.e( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.c( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.d( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.f( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.4( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.5( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.7( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.1( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.3( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.0( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 40'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.6( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.2( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.8( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.e( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.c( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:32 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 46 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
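
Each "transitioning to Primary" line above is a PG re-entering peering after the pg_num change, and each "AllReplicasActivated Activating complete" line is the same PG finishing activation on its single-OSD acting set. A sketch, assuming the ceph CLI and an admin keyring are available on the host, that summarizes the resulting PG states the way the mgr's pgmap line below does:

    import json
    import subprocess

    # Newer releases wrap the brief dump in {"pg_stats": [...]}; older
    # ones return the list directly, so handle both shapes.
    out = subprocess.check_output(
        ['ceph', 'pg', 'dump', 'pgs_brief', '--format', 'json'])
    pgs = json.loads(out)
    if isinstance(pgs, dict):
        pgs = pgs.get('pg_stats', [])
    counts = {}
    for pg in pgs:
        counts[pg['state']] = counts.get(pg['state'], 0) + 1
    for state, n in sorted(counts.items()):
        print(f'{n:4d} {state}')
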
Nov 24 19:49:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v101: 181 pgs: 77 unknown, 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 24 19:49:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:32 compute-0 fervent_wing[103539]: {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:     "0": [
Nov 24 19:49:32 compute-0 fervent_wing[103539]:         {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "devices": [
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "/dev/loop3"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             ],
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_name": "ceph_lv0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_size": "21470642176",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "name": "ceph_lv0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "tags": {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.crush_device_class": "",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.encrypted": "0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osd_id": "0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.type": "block",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.vdo": "0"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             },
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "type": "block",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "vg_name": "ceph_vg0"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:         }
Nov 24 19:49:32 compute-0 fervent_wing[103539]:     ],
Nov 24 19:49:32 compute-0 fervent_wing[103539]:     "1": [
Nov 24 19:49:32 compute-0 fervent_wing[103539]:         {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "devices": [
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "/dev/loop4"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             ],
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_name": "ceph_lv1",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_size": "21470642176",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "name": "ceph_lv1",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "tags": {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.crush_device_class": "",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.encrypted": "0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osd_id": "1",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.type": "block",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.vdo": "0"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             },
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "type": "block",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "vg_name": "ceph_vg1"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:         }
Nov 24 19:49:32 compute-0 fervent_wing[103539]:     ],
Nov 24 19:49:32 compute-0 fervent_wing[103539]:     "2": [
Nov 24 19:49:32 compute-0 fervent_wing[103539]:         {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "devices": [
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "/dev/loop5"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             ],
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_name": "ceph_lv2",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_size": "21470642176",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "name": "ceph_lv2",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "tags": {
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.crush_device_class": "",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.encrypted": "0",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osd_id": "2",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.type": "block",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:                 "ceph.vdo": "0"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             },
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "type": "block",
Nov 24 19:49:32 compute-0 fervent_wing[103539]:             "vg_name": "ceph_vg2"
Nov 24 19:49:32 compute-0 fervent_wing[103539]:         }
Nov 24 19:49:32 compute-0 fervent_wing[103539]:     ]
Nov 24 19:49:32 compute-0 fervent_wing[103539]: }
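
The JSON printed by the fervent_wing container above is keyed by OSD id, each value a list of logical-volume records whose lv_tags carry the cluster fsid, OSD fsid, and backing devices; it matches the shape of ceph-volume lvm list --format json. A minimal sketch for recovering the OSD-to-device mapping from a saved copy of that payload; the filename is hypothetical:

    import json

    # ceph-volume-lvm-list.json: the JSON block above, captured to a file.
    with open('ceph-volume-lvm-list.json') as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")
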
Nov 24 19:49:32 compute-0 systemd[1]: libpod-e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd.scope: Deactivated successfully.
Nov 24 19:49:32 compute-0 podman[103523]: 2025-11-24 19:49:32.344250438 +0000 UTC m=+0.972494006 container died e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 19:49:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a8c24e73cda7dd245951a4d13b69b88aec2d73de3b75582289c74457c6c4425-merged.mount: Deactivated successfully.
Nov 24 19:49:32 compute-0 podman[103523]: 2025-11-24 19:49:32.429459148 +0000 UTC m=+1.057702666 container remove e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:32 compute-0 systemd[1]: libpod-conmon-e5abf67f13283ec8736892c0a1e2742f9493618ece995f2a3b0cc589612c21bd.scope: Deactivated successfully.
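
The fervent_wing lines trace one short-lived cephadm helper container end to end: image pull, create, init, start, attach, the JSON payload on stdout, then died, overlay unmount, remove, and finally the conmon scope deactivating. A rough equivalent with a one-shot auto-removed container against the same image (a sketch only: the real cephadm run also bind-mounts /dev, LVM state, and the cluster config, omitted here):

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # --rm reproduces the died/remove pair above once the entrypoint exits.
    result = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'ceph-volume', image,
         'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=False)
    print(result.stdout or result.stderr)
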
Nov 24 19:49:32 compute-0 sudo[103414]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:32 compute-0 sudo[103562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:32 compute-0 sudo[103562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:32 compute-0 sudo[103562]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:32 compute-0 sudo[103587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:32 compute-0 sudo[103587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:32 compute-0 sudo[103587]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:32 compute-0 sudo[103612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:32 compute-0 sudo[103612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:32 compute-0 sudo[103612]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:32 compute-0 sudo[103637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:49:32 compute-0 sudo[103637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
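
The sudo COMMAND above is the orchestrator invoking the node's copied-in cephadm binary to probe raw devices. A sketch that replays the same invocation (binary path, image digest, and fsid copied verbatim from the log line) and parses the JSON it prints:

    import json
    import subprocess

    cephadm = ('/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/'
               'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7'
               '291e90720452ed8d')
    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.check_output(
        ['sudo', cephadm, '--image', image, '--timeout', '895',
         'ceph-volume', '--fsid', '05e060a3-406b-57f0-89d2-ec35f5b09305',
         '--', 'raw', 'list', '--format', 'json'])
    print(sorted(json.loads(out)))
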
Nov 24 19:49:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 24 19:49:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 24 19:49:33 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 24 19:49:33 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 47 pg[8.0( v 31'4 (0'0,31'4] local-lis/les=30/31 n=4 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=13.754780769s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 31'3 active pruub 77.971092224s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:33 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 47 pg[9.0( v 38'385 (0'0,38'385] local-lis/les=32/33 n=177 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=15.779646873s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 38'384 mlcod 38'384 active pruub 79.996047974s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:33 compute-0 ceph-mon[75677]: osdmap e46: 3 total, 3 up, 3 in
Nov 24 19:49:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:33 compute-0 ceph-mon[75677]: pgmap v101: 181 pgs: 77 unknown, 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 24 19:49:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:33 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev 84e5faa0-7828-498b-8a30-cf8634398a0a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 19:49:33 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 47 pg[8.0( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=13.754780769s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 0'0 unknown pruub 77.971092224s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 19:49:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:33 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 47 pg[9.0( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=15.779646873s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 38'384 mlcod 0'0 unknown pruub 79.996047974s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.318994444 +0000 UTC m=+0.064331857 container create 1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 19:49:33 compute-0 systemd[1]: Started libpod-conmon-1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df.scope.
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.292088121 +0000 UTC m=+0.037425564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.418689067 +0000 UTC m=+0.164026460 container init 1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.429167146 +0000 UTC m=+0.174504539 container start 1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.43283574 +0000 UTC m=+0.178173153 container attach 1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:33 compute-0 bold_merkle[103717]: 167 167
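
The lone "167 167" from bold_merkle is consistent with a uid/gid probe of the image's ceph account (uid and gid 167 on RHEL-family builds), which cephadm uses when chowning daemon data directories. A hedged guess at reproducing it; the stat target path is an assumption:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # Print the owner uid/gid of /var/lib/ceph inside the image.
    print(subprocess.check_output(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', image,
         '-c', '%u %g', '/var/lib/ceph']).decode().strip())
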
Nov 24 19:49:33 compute-0 systemd[1]: libpod-1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df.scope: Deactivated successfully.
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.436061651 +0000 UTC m=+0.181399094 container died 1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 19:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb5d4514ccc4592a06dbf616a0b4501fd5a948a8e27c08766711a019e736632e-merged.mount: Deactivated successfully.
Nov 24 19:49:33 compute-0 podman[103701]: 2025-11-24 19:49:33.480881195 +0000 UTC m=+0.226218578 container remove 1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 19:49:33 compute-0 systemd[1]: libpod-conmon-1aef3b0e78df8c6661a1072c686e64eb360c3c0afec7cffc9a83e20865f6f6df.scope: Deactivated successfully.
Nov 24 19:49:33 compute-0 podman[103741]: 2025-11-24 19:49:33.729747412 +0000 UTC m=+0.074716682 container create 32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_feynman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 24 19:49:33 compute-0 systemd[1]: Started libpod-conmon-32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab.scope.
Nov 24 19:49:33 compute-0 podman[103741]: 2025-11-24 19:49:33.700199916 +0000 UTC m=+0.045169236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1330492726aa8be696a90769d4dae70e7228ad26b3efca81fccf092e640e7d39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1330492726aa8be696a90769d4dae70e7228ad26b3efca81fccf092e640e7d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1330492726aa8be696a90769d4dae70e7228ad26b3efca81fccf092e640e7d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1330492726aa8be696a90769d4dae70e7228ad26b3efca81fccf092e640e7d39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:33 compute-0 podman[103741]: 2025-11-24 19:49:33.89439743 +0000 UTC m=+0.239366710 container init 32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:33 compute-0 podman[103741]: 2025-11-24 19:49:33.901011437 +0000 UTC m=+0.245980707 container start 32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_feynman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:33 compute-0 podman[103741]: 2025-11-24 19:49:33.912205317 +0000 UTC m=+0.257174597 container attach 32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_feynman, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:49:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 24 19:49:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 24 19:49:34 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] update: starting ev a6b3153a-ff81-4add-89f9-e98e507a82c8 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:34 compute-0 ceph-mon[75677]: osdmap e47: 3 total, 3 up, 3 in
Nov 24 19:49:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev a918086a-4880-44d0-a878-0bcf7dca8980 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event a918086a-4880-44d0-a878-0bcf7dca8980 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 01bc71d6-7957-4b3a-a1f3-2c343caafd8d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 01bc71d6-7957-4b3a-a1f3-2c343caafd8d (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 2aa42362-2f5c-4dd6-800a-6cbbf3683a33 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 2aa42362-2f5c-4dd6-800a-6cbbf3683a33 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev a8e40ed0-b161-4d0b-a2b7-3d8570956d66 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event a8e40ed0-b161-4d0b-a2b7-3d8570956d66 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev e41f104b-d791-4c26-a03d-afebed3874a1 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event e41f104b-d791-4c26-a03d-afebed3874a1 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 175f5a92-099a-4139-95b2-c333745d4e96 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 175f5a92-099a-4139-95b2-c333745d4e96 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 4f4f87f0-db72-42db-84e5-63ee23cda0d9 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 4f4f87f0-db72-42db-84e5-63ee23cda0d9 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev e6d1fedb-57d6-44ac-a832-8fb558ee1e7f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event e6d1fedb-57d6-44ac-a832-8fb558ee1e7f (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev 84e5faa0-7828-498b-8a30-cf8634398a0a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event 84e5faa0-7828-498b-8a30-cf8634398a0a (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] complete: finished ev a6b3153a-ff81-4add-89f9-e98e507a82c8 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event a6b3153a-ff81-4add-89f9-e98e507a82c8 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.15( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.14( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.15( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.14( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.17( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.17( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.16( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.16( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.11( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.11( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.10( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.10( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.12( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.13( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.13( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.12( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.c( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.d( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.c( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.d( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.f( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.8( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.9( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.a( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.b( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.2( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.3( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1( v 31'4 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.e( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.a( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.8( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.9( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.3( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.6( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.7( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.6( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.7( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.2( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.5( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.4( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.4( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.5( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1a( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1b( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1a( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.19( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.18( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.18( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.19( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1e( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1f( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1d( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1c( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1d( v 38'385 lc 0'0 (0'0,38'385] local-lis/les=32/33 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1c( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.17( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.16( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.10( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.13( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.12( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.14( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.8( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.2( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.3( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.0( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 38'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.0( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.a( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.7( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.5( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.4( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1a( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.19( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 48 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v104: 243 pgs: 93 unknown, 150 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 19:49:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:34 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 15 completed events
Nov 24 19:49:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:49:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:34 compute-0 sad_feynman[103758]: {
Nov 24 19:49:34 compute-0 sad_feynman[103758]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "osd_id": 2,
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "type": "bluestore"
Nov 24 19:49:34 compute-0 sad_feynman[103758]:     },
Nov 24 19:49:34 compute-0 sad_feynman[103758]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "osd_id": 1,
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "type": "bluestore"
Nov 24 19:49:34 compute-0 sad_feynman[103758]:     },
Nov 24 19:49:34 compute-0 sad_feynman[103758]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "osd_id": 0,
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:34 compute-0 sad_feynman[103758]:         "type": "bluestore"
Nov 24 19:49:34 compute-0 sad_feynman[103758]:     }
Nov 24 19:49:34 compute-0 sad_feynman[103758]: }
Nov 24 19:49:34 compute-0 systemd[1]: libpod-32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab.scope: Deactivated successfully.
Nov 24 19:49:34 compute-0 systemd[1]: libpod-32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab.scope: Consumed 1.095s CPU time.
Nov 24 19:49:35 compute-0 podman[103791]: 2025-11-24 19:49:35.045267592 +0000 UTC m=+0.032769528 container died 32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 19:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1330492726aa8be696a90769d4dae70e7228ad26b3efca81fccf092e640e7d39-merged.mount: Deactivated successfully.
Nov 24 19:49:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 24 19:49:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 19:49:35 compute-0 ceph-mon[75677]: osdmap e48: 3 total, 3 up, 3 in
Nov 24 19:49:35 compute-0 ceph-mon[75677]: pgmap v104: 243 pgs: 93 unknown, 150 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 19:49:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 24 19:49:35 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 24 19:49:35 compute-0 podman[103791]: 2025-11-24 19:49:35.148364251 +0000 UTC m=+0.135866197 container remove 32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_feynman, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:35 compute-0 systemd[1]: libpod-conmon-32ea1bafee7336e6687750b3352206d7277e905866680146c0300b1c5a92a9ab.scope: Deactivated successfully.
Nov 24 19:49:35 compute-0 sudo[103637]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:35 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0497b5a0-7ce7-40df-9894-a40a9dd2ebfc does not exist
Nov 24 19:49:35 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 90b9fc5e-a023-453a-af6e-d58eec80d576 does not exist
Nov 24 19:49:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 24 19:49:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 24 19:49:35 compute-0 sudo[103806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:35 compute-0 sudo[103806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:35 compute-0 sudo[103806]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:35 compute-0 sudo[103831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:49:35 compute-0 sudo[103831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:35 compute-0 sudo[103831]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:35 compute-0 sudo[103856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:35 compute-0 sudo[103856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:35 compute-0 sudo[103856]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 24 19:49:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 24 19:49:35 compute-0 sudo[103881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:35 compute-0 sudo[103881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:35 compute-0 sudo[103881]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 24 19:49:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 24 19:49:35 compute-0 sudo[103906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:35 compute-0 sudo[103906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:35 compute-0 sudo[103906]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:35 compute-0 sudo[103931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:49:35 compute-0 sudo[103931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 19:49:36 compute-0 ceph-mon[75677]: osdmap e49: 3 total, 3 up, 3 in
Nov 24 19:49:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:36 compute-0 ceph-mon[75677]: 4.1 scrub starts
Nov 24 19:49:36 compute-0 ceph-mon[75677]: 4.1 scrub ok
Nov 24 19:49:36 compute-0 ceph-mon[75677]: 3.2 scrub starts
Nov 24 19:49:36 compute-0 ceph-mon[75677]: 3.2 scrub ok
Nov 24 19:49:36 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 49 pg[10.0( v 35'16 (0'0,35'16] local-lis/les=34/35 n=8 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.672594070s) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 35'15 active pruub 76.456237793s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:36 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=8.695132256s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active pruub 76.110328674s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:36 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 49 pg[10.0( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.672594070s) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 0'0 unknown pruub 76.456237793s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:36 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=8.695132256s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown pruub 76.110328674s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v106: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 19:49:36 compute-0 podman[104026]: 2025-11-24 19:49:36.522418266 +0000 UTC m=+0.082701732 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 19:49:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 24 19:49:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 24 19:49:36 compute-0 podman[104026]: 2025-11-24 19:49:36.640980969 +0000 UTC m=+0.201264345 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 19:49:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 24 19:49:37 compute-0 ceph-mon[75677]: 2.1 scrub starts
Nov 24 19:49:37 compute-0 ceph-mon[75677]: 2.1 scrub ok
Nov 24 19:49:37 compute-0 ceph-mon[75677]: pgmap v106: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 19:49:37 compute-0 ceph-mon[75677]: 3.3 deep-scrub starts
Nov 24 19:49:37 compute-0 ceph-mon[75677]: 3.3 deep-scrub ok
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.12( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.11( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1f( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.10( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1e( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1d( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1c( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1b( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1a( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.19( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.18( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.7( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.6( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.5( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.4( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.f( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.8( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.9( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.3( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.a( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.b( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.c( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.d( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.e( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.2( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.13( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.14( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.15( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.16( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.17( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1f( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1c( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1d( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1b( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.18( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.0( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.9( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.5( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.a( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.c( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.d( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.e( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.14( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.15( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.3( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 50 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:37 compute-0 sudo[103931]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 196638be-4f0d-41d6-86ac-ad36f021a005 does not exist
Nov 24 19:49:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f4ac549d-08fe-4919-bb7c-58e2b2dd4977 does not exist
Nov 24 19:49:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 72ddb3f6-6ad3-46a4-a57b-4725c5e2b511 does not exist
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:49:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:37 compute-0 sudo[104184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:37 compute-0 sudo[104184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:37 compute-0 sudo[104184]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:37 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 24 19:49:37 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 24 19:49:37 compute-0 sudo[104209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:37 compute-0 sudo[104209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:37 compute-0 sudo[104209]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:37 compute-0 sudo[104234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:37 compute-0 sudo[104234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:37 compute-0 sudo[104234]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:37 compute-0 sudo[104259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:49:37 compute-0 sudo[104259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:38 compute-0 ceph-mon[75677]: osdmap e50: 3 total, 3 up, 3 in
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:49:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:49:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.42447122 +0000 UTC m=+0.071399378 container create dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:49:38 compute-0 systemd[1]: Started libpod-conmon-dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491.scope.
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.395621166 +0000 UTC m=+0.042549404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.524291626 +0000 UTC m=+0.171219864 container init dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.536053695 +0000 UTC m=+0.182981863 container start dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.540530015 +0000 UTC m=+0.187458173 container attach dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 19:49:38 compute-0 suspicious_mcnulty[104342]: 167 167
Nov 24 19:49:38 compute-0 systemd[1]: libpod-dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491.scope: Deactivated successfully.
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.54417742 +0000 UTC m=+0.191105598 container died dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aadd74ee43781b62e401ad3ef22864b3dec9a7667cacb70e6e2dc97e17ec7f3-merged.mount: Deactivated successfully.
Nov 24 19:49:38 compute-0 podman[104325]: 2025-11-24 19:49:38.594649691 +0000 UTC m=+0.241577839 container remove dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:38 compute-0 systemd[1]: libpod-conmon-dcd3ddd26138bdfc89bcf8d66269036b6df5e54a522d19523025b1b792ffb491.scope: Deactivated successfully.
Nov 24 19:49:38 compute-0 podman[104367]: 2025-11-24 19:49:38.843254128 +0000 UTC m=+0.071264663 container create 7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:49:38 compute-0 systemd[1]: Started libpod-conmon-7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa.scope.
Nov 24 19:49:38 compute-0 podman[104367]: 2025-11-24 19:49:38.815453237 +0000 UTC m=+0.043463822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ce4c2d9454772c80aedd0071761cc1a3ca3957cb12db49ec6874b39b9aac97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ce4c2d9454772c80aedd0071761cc1a3ca3957cb12db49ec6874b39b9aac97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ce4c2d9454772c80aedd0071761cc1a3ca3957cb12db49ec6874b39b9aac97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ce4c2d9454772c80aedd0071761cc1a3ca3957cb12db49ec6874b39b9aac97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ce4c2d9454772c80aedd0071761cc1a3ca3957cb12db49ec6874b39b9aac97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:38 compute-0 podman[104367]: 2025-11-24 19:49:38.97482496 +0000 UTC m=+0.202835525 container init 7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:38 compute-0 podman[104367]: 2025-11-24 19:49:38.993701582 +0000 UTC m=+0.221712107 container start 7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 19:49:38 compute-0 podman[104367]: 2025-11-24 19:49:38.998985957 +0000 UTC m=+0.226996522 container attach 7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:49:39 compute-0 ceph-mon[75677]: 2.2 scrub starts
Nov 24 19:49:39 compute-0 ceph-mon[75677]: 2.2 scrub ok
Nov 24 19:49:39 compute-0 ceph-mon[75677]: pgmap v108: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 19:49:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 24 19:49:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 24 19:49:40 compute-0 wonderful_merkle[104383]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:49:40 compute-0 wonderful_merkle[104383]: --> relative data size: 1.0
Nov 24 19:49:40 compute-0 wonderful_merkle[104383]: --> All data devices are unavailable
Nov 24 19:49:40 compute-0 systemd[1]: libpod-7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa.scope: Deactivated successfully.
Nov 24 19:49:40 compute-0 podman[104367]: 2025-11-24 19:49:40.173089038 +0000 UTC m=+1.401099543 container died 7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 19:49:40 compute-0 systemd[1]: libpod-7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa.scope: Consumed 1.140s CPU time.
Nov 24 19:49:40 compute-0 ceph-mon[75677]: 3.4 scrub starts
Nov 24 19:49:40 compute-0 ceph-mon[75677]: 3.4 scrub ok
Nov 24 19:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-13ce4c2d9454772c80aedd0071761cc1a3ca3957cb12db49ec6874b39b9aac97-merged.mount: Deactivated successfully.
Nov 24 19:49:40 compute-0 podman[104367]: 2025-11-24 19:49:40.24274238 +0000 UTC m=+1.470752915 container remove 7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_merkle, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 19:49:40 compute-0 systemd[1]: libpod-conmon-7ee580bc6a887a6f8c2ce8da1a3478cd3ff6894e691ff26dab6c1c5b1212c9fa.scope: Deactivated successfully.
Nov 24 19:49:40 compute-0 sudo[104259]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 24 19:49:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 24 19:49:40 compute-0 sudo[104427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:40 compute-0 sudo[104427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:40 compute-0 sudo[104427]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:40 compute-0 sudo[104452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:40 compute-0 sudo[104452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:40 compute-0 sudo[104452]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:40 compute-0 sudo[104477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:40 compute-0 sudo[104477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:40 compute-0 sudo[104477]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:40 compute-0 sudo[104502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:49:40 compute-0 sudo[104502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 24 19:49:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.086719049 +0000 UTC m=+0.064875753 container create de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:49:41 compute-0 systemd[1]: Started libpod-conmon-de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c.scope.
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.060840259 +0000 UTC m=+0.038997003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.177546155 +0000 UTC m=+0.155702859 container init de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.189830619 +0000 UTC m=+0.167987323 container start de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.193924847 +0000 UTC m=+0.172081591 container attach de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:49:41 compute-0 vigilant_wing[104583]: 167 167
Nov 24 19:49:41 compute-0 systemd[1]: libpod-de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c.scope: Deactivated successfully.
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.197440708 +0000 UTC m=+0.175597412 container died de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 19:49:41 compute-0 ceph-mon[75677]: pgmap v109: 305 pgs: 62 unknown, 243 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:41 compute-0 ceph-mon[75677]: 4.2 scrub starts
Nov 24 19:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f97d89f4c1a89a2c2d2955c025604d0278c9da57cff833de30abacc9e9593e5e-merged.mount: Deactivated successfully.
Nov 24 19:49:41 compute-0 podman[104567]: 2025-11-24 19:49:41.249579891 +0000 UTC m=+0.227736585 container remove de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 19:49:41 compute-0 systemd[1]: libpod-conmon-de545fdf4d9e35ff58d0d66dc56e62e69d7beee21c319807626f9aff6992f01c.scope: Deactivated successfully.
Nov 24 19:49:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 24 19:49:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 24 19:49:41 compute-0 podman[104607]: 2025-11-24 19:49:41.471147492 +0000 UTC m=+0.051360330 container create 86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:49:41 compute-0 systemd[1]: Started libpod-conmon-86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4.scope.
Nov 24 19:49:41 compute-0 podman[104607]: 2025-11-24 19:49:41.446020235 +0000 UTC m=+0.026233123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f5c4efa6c88e7163e53c0439258cec534d68e0044ef222b3a79efea44aa9aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f5c4efa6c88e7163e53c0439258cec534d68e0044ef222b3a79efea44aa9aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f5c4efa6c88e7163e53c0439258cec534d68e0044ef222b3a79efea44aa9aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f5c4efa6c88e7163e53c0439258cec534d68e0044ef222b3a79efea44aa9aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:41 compute-0 podman[104607]: 2025-11-24 19:49:41.590859062 +0000 UTC m=+0.171071980 container init 86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:41 compute-0 podman[104607]: 2025-11-24 19:49:41.605659626 +0000 UTC m=+0.185872494 container start 86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:49:41 compute-0 podman[104607]: 2025-11-24 19:49:41.610200558 +0000 UTC m=+0.190413426 container attach 86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 19:49:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:42 compute-0 ceph-mon[75677]: 4.2 scrub ok
Nov 24 19:49:42 compute-0 ceph-mon[75677]: 2.3 deep-scrub starts
Nov 24 19:49:42 compute-0 ceph-mon[75677]: 2.3 deep-scrub ok
Nov 24 19:49:42 compute-0 ceph-mon[75677]: 4.3 scrub starts
Nov 24 19:49:42 compute-0 ceph-mon[75677]: 4.3 scrub ok
Nov 24 19:49:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:49:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:42 compute-0 reverent_cannon[104623]: {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:     "0": [
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:         {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "devices": [
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "/dev/loop3"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             ],
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_name": "ceph_lv0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_size": "21470642176",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "name": "ceph_lv0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "tags": {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.crush_device_class": "",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.encrypted": "0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osd_id": "0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.type": "block",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.vdo": "0"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             },
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "type": "block",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "vg_name": "ceph_vg0"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:         }
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:     ],
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:     "1": [
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:         {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "devices": [
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "/dev/loop4"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             ],
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_name": "ceph_lv1",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_size": "21470642176",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "name": "ceph_lv1",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "tags": {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.crush_device_class": "",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.encrypted": "0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osd_id": "1",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.type": "block",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.vdo": "0"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             },
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "type": "block",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "vg_name": "ceph_vg1"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:         }
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:     ],
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:     "2": [
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:         {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "devices": [
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "/dev/loop5"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             ],
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_name": "ceph_lv2",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_size": "21470642176",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "name": "ceph_lv2",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "tags": {
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.cluster_name": "ceph",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.crush_device_class": "",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.encrypted": "0",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osd_id": "2",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.type": "block",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:                 "ceph.vdo": "0"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             },
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "type": "block",
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:             "vg_name": "ceph_vg2"
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:         }
Nov 24 19:49:42 compute-0 reverent_cannon[104623]:     ]
Nov 24 19:49:42 compute-0 reverent_cannon[104623]: }
Nov 24 19:49:42 compute-0 systemd[1]: libpod-86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4.scope: Deactivated successfully.
Nov 24 19:49:42 compute-0 podman[104607]: 2025-11-24 19:49:42.383381079 +0000 UTC m=+0.963593937 container died 86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f5c4efa6c88e7163e53c0439258cec534d68e0044ef222b3a79efea44aa9aa-merged.mount: Deactivated successfully.
Nov 24 19:49:42 compute-0 podman[104607]: 2025-11-24 19:49:42.455123527 +0000 UTC m=+1.035336395 container remove 86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 19:49:42 compute-0 systemd[1]: libpod-conmon-86a9486b615568b5a881691ed9ac9b8b96631a276e8988cbb110a605a9baaca4.scope: Deactivated successfully.
Nov 24 19:49:42 compute-0 sudo[104502]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:42 compute-0 sudo[104644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:42 compute-0 sudo[104644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:42 compute-0 sudo[104644]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:42 compute-0 sudo[104669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:49:42 compute-0 sudo[104669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:42 compute-0 sudo[104669]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:42 compute-0 sudo[104694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:42 compute-0 sudo[104694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:42 compute-0 sudo[104694]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:42 compute-0 sudo[104719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:49:42 compute-0 sudo[104719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 24 19:49:43 compute-0 ceph-mon[75677]: pgmap v110: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 24 19:49:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.815967560s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.731140137s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1c( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.815856934s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.731140137s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.8( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819694519s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735099792s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.8( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819613457s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735099792s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.7( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819445610s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735107422s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.7( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819381714s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735107422s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.823978424s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.739753723s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.823943138s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.739753723s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1b( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819255829s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735130310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.a( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819365501s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735374451s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1b( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819211006s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735130310s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.7( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.830069542s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746093750s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.7( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.830035210s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746093750s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.a( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819334030s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735374451s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1a( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819127083s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735404968s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829812050s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746093750s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.5( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819020271s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735282898s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1a( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.819093704s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735404968s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.5( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829746246s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746078491s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829774857s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746093750s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.5( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818925858s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735282898s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.5( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829695702s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746078491s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.9( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818799019s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735282898s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.9( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818772316s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735282898s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.1( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829528809s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746124268s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.4( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818700790s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735313416s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.1( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829495430s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746124268s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.3( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829515457s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746170044s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818760872s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735458374s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.3( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829466820s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746170044s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.1( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818730354s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735458374s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818621635s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735404968s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.2( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818591118s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735404968s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829600334s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746566772s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.d( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818604469s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735603333s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.4( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818305969s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735313416s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.d( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818576813s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735603333s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829536438s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746566772s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.e( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818646431s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735748291s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.e( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818602562s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735748291s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829294205s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 92.746536255s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.f( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818544388s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735801697s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.829259872s) [1] r=-1 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.746536255s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.f( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818514824s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735801697s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.10( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818395615s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735763550s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.10( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818358421s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735763550s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.11( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818310738s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735778809s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.11( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818283081s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735778809s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.12( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818526268s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.736053467s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.12( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818500519s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.736053467s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.13( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818211555s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735816956s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818202972s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735893250s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.18( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818202972s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 91.735984802s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.13( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818170547s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735816956s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.14( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818095207s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735893250s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[4.18( empty local-lis/les=43/45 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51 pruub=11.818176270s) [2] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.735984802s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918779373s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702537537s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.812026024s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224784851s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.811988831s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224784851s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.852334976s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265251160s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.852287292s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265251160s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851244926s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.264244080s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.926129341s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339286804s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797050476s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.210266113s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797015190s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.210266113s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851216316s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.264244080s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850789070s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.264160156s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850769997s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.264160156s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918808937s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.332283020s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918777466s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.332283020s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.799371719s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212936401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.799353600s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212936401s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.926080704s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339286804s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851464272s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.265190125s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.926082611s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339836121s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851434708s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265190125s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.926058769s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339836121s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.810745239s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224685669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.810716629s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224685669s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.810733795s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224716187s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.810701370s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224716187s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.810300827s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224411011s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.787587166s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.201622009s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.810256004s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224411011s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.787301064s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.201522827s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.787273407s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.201522827s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.787462234s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.201622009s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850894928s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265228271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850872993s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265228271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850771904s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.265182495s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850746155s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265182495s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.925132751s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339660645s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.925107002s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339660645s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.925060272s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339752197s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850541115s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265251160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850507736s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265251160s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.794948578s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.578750610s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918738365s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702537537s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.924708366s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339645386s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850197792s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265190125s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.924614906s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850141525s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265190125s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850464821s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.265533447s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.924577713s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339752197s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.809057236s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224395752s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.809025764s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224395752s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.924214363s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339691162s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.808754921s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224311829s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850416183s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265533447s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.808726311s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224311829s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.795226097s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.210876465s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.795201302s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.210876465s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849595070s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265396118s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849578857s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265396118s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849446297s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.265449524s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.794858932s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.210868835s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.923700333s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339721680s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.794818878s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.210868835s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.923665047s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339721680s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849399567s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265449524s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.794775963s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.210952759s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.924184799s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339691162s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.794758797s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.210952759s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.923391342s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339759827s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849169731s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265548706s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.923372269s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339759827s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849135399s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265548706s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.807762146s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224243164s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.807715416s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224243164s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848857880s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.265602112s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848865509s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.265640259s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848827362s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265602112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848841667s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265640259s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.922834396s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339775085s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.922815323s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339775085s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.853226662s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270301819s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.853212357s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270301819s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.794085503s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211219788s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.923214912s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340377808s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.923202515s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340377808s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.794911385s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.578750610s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.806915283s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224159241s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.806900978s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224159241s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793994904s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211219788s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793907166s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211235046s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793891907s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211235046s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848283768s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.265724182s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848266602s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.265724182s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793657303s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211311340s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793642044s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211311340s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.806630135s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224388123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.921981812s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339805603s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.921959877s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339805603s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.806594849s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224388123s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.806203842s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224136353s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.806182861s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224136353s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793304443s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211349487s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.793276787s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211349487s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.805805206s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224052429s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.805780411s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224052429s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.852001190s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270294189s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851967812s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270294189s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.921557426s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.339859009s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.921422005s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.339859009s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.792620659s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211219788s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.1d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851414680s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270324707s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851392746s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270324707s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.921401024s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340492249s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.804970741s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224082947s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.804947853s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224082947s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.921358109s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340492249s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851080894s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270347595s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.851060867s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270347595s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850961685s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270370483s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.792578697s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211219788s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850936890s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270370483s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.804464340s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.223915100s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.920513153s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340110779s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.804430008s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.223915100s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.920489311s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340110779s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791759491s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211410522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791724205s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211410522s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850804329s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270576477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850779533s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270576477s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850513458s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270370483s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.850484848s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270370483s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.790848732s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.211540222s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.790811539s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.211540222s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.802824020s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.223754883s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849735260s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270729065s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.920267105s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340164185s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849703789s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270729065s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.802788734s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.223754883s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.919102669s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340164185s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849383354s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270584106s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.802350998s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.223731995s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.802325249s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.223731995s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.802116394s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.223724365s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.802088737s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.223724365s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.14( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.849356651s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270584106s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.790218353s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212150574s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.790180206s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212142944s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.790173531s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212150574s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.790148735s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212142944s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848499298s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270645142s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848474503s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270645142s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848255157s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270591736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.848229408s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270591736s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917854309s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340408325s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917820930s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340408325s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.847851753s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270614624s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.847836494s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270614624s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.800798416s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.223686218s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.800780296s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.223686218s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917527199s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340446472s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917486191s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340446472s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.789131165s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212158203s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.789113998s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212158203s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.847554207s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270721436s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.847531319s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270721436s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917113304s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340454102s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917090416s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340454102s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.788856506s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212318420s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.788840294s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212318420s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.851393700s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.635620117s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916781425s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340484619s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846972466s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270706177s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916740417s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340484619s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846946716s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270706177s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846941948s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270812988s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846925735s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270812988s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.920838356s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.344749451s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846763611s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270782471s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846744537s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270782471s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846661568s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270820618s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.846636772s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270820618s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.793688774s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.218009949s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917811394s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340415955s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.793663025s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.218009949s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916004181s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340415955s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.788253784s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212707520s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.788168907s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212707520s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.788099289s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212867737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.915789604s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.340560913s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.788074493s) [2] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212867737s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.915745735s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.340560913s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.845954895s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270866394s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.845933914s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270866394s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.919626236s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 84.344734192s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.919578552s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.344734192s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.851363182s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.635620117s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.845416069s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270675659s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.802531242s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586921692s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.845350266s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270675659s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.919414520s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.344749451s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.845388412s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 89.270919800s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.845349312s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270919800s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.798585892s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.224250793s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.786934853s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 83.212692261s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.798517227s) [2] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.224250793s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.786893845s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.212692261s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.844941139s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 89.270927429s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=14.844912529s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.270927429s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.791925430s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active pruub 87.217979431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51 pruub=12.791893959s) [0] r=-1 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.217979431s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.802508354s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586921692s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917613983s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702316284s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917431831s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702316284s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.801764488s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586914062s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.801733971s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586914062s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.801589012s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586906433s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.801564217s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586906433s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.801241875s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586929321s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.801214218s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586929321s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916574478s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702346802s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916342735s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702239990s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916464806s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702346802s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.916313171s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702239990s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849617958s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.635665894s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849588394s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.635665894s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.800704956s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586891174s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.800684929s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586891174s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.7( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849559784s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.635948181s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.800408363s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586853027s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849530220s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.635948181s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.800389290s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586853027s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849829674s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636398315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849801064s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636398315s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849400520s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636108398s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.849371910s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636108398s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.799839020s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586715698s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.799814224s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586715698s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.850504875s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.637542725s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.850480080s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.637542725s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848875046s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.635971069s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848834991s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.635971069s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.799305916s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586715698s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.1f( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.799275398s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586715698s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.915000916s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702537537s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914973259s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702537537s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914896011s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702568054s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914867401s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702568054s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848620415s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636466980s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.798804283s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586685181s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848592758s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636466980s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.798780441s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586685181s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914535522s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702613831s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.798380852s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586524963s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914505959s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702613831s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.919808388s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.707969666s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.798357964s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586524963s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.919782639s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.707969666s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914382935s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702682495s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848227501s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636528015s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914365768s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702682495s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848196030s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636528015s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914337158s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702735901s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.914322853s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702735901s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.798173904s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586631775s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.798151016s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586631775s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848216057s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636741638s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848179817s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636741638s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797729492s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586402893s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848192215s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636878967s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797695160s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586402893s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.c( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.848171234s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636878967s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.9( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.913969994s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 78.702781677s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.9( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.913944244s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 78.702781677s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.847963333s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636947632s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797447205s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586441040s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797419548s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586441040s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.847937584s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636947632s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918805122s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.707878113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797253609s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586402893s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.797234535s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586402893s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918720245s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.707878113s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.847760201s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.637313843s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.913110733s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.702667236s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.847706795s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.637313843s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.913040161s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.702667236s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.847106934s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636947632s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791787148s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.581657410s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.d( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.918012619s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 78.707901001s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.847072601s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636947632s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.796432495s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586364746s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.d( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917956352s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 78.707901001s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791708946s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.581657410s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.796405792s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586364746s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.846859932s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636909485s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.846832275s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636909485s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791498184s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.581626892s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791476250s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.581626892s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.846726418s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.636924744s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917755127s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.708000183s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791241646s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.581489563s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.846693039s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.636924744s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.e( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917645454s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 78.707901001s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791222572s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.581489563s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917723656s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.708000183s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.e( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917603493s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 78.707901001s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917615891s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.708137512s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917453766s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.708015442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/50 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917583466s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.708137512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.791027069s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.581657410s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.917395592s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.708015442s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.784612656s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.581657410s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.784681320s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.581474304s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.783829689s) [1] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.581474304s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.782905579s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.580978394s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.6( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.14( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909914017s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 78.708000183s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.782868385s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.580978394s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.14( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909845352s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 78.708000183s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.15( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909763336s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 78.708030701s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.783127785s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.581466675s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.783094406s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.581466675s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909724236s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.708122253s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909695625s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.708122253s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.838618279s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.637214661s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.838588715s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.637214661s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909448624s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 78.708160400s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.15( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909695625s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 78.708030701s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/50 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=9.909420013s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 78.708160400s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.838708878s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.637519836s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.787611008s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active pruub 77.586433411s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.3( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.1( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 51 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.838684082s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.637519836s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=8.787587166s) [0] r=-1 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.586433411s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.a( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.1d( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.18( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.7( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.830084801s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active pruub 79.637191772s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.830050468s) [1] r=-1 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.637191772s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.8( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.e( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.11( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.c( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.f( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.12( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.15( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[3.17( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 24 19:49:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.351489698 +0000 UTC m=+0.056497612 container create 3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 19:49:43 compute-0 systemd[1]: Started libpod-conmon-3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81.scope.
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.324215893 +0000 UTC m=+0.029223837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.450891731 +0000 UTC m=+0.155899635 container init 3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.457186248 +0000 UTC m=+0.162194192 container start 3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.460925066 +0000 UTC m=+0.165932970 container attach 3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 19:49:43 compute-0 priceless_benz[104799]: 167 167
Nov 24 19:49:43 compute-0 systemd[1]: libpod-3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81.scope: Deactivated successfully.
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.464984292 +0000 UTC m=+0.169992236 container died 3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 19:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a286b271dee8b56e64b9e8e0304f0439ce2cfac5eef313b5b7f26fded1907fb2-merged.mount: Deactivated successfully.
Nov 24 19:49:43 compute-0 podman[104783]: 2025-11-24 19:49:43.506498223 +0000 UTC m=+0.211506167 container remove 3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:49:43 compute-0 systemd[1]: libpod-conmon-3ffda21c9ad7329bca1a17e0e2822bb6ae6ff921ad81de11dff98f939edd0c81.scope: Deactivated successfully.
Nov 24 19:49:43 compute-0 podman[104822]: 2025-11-24 19:49:43.746946396 +0000 UTC m=+0.068434776 container create d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 19:49:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 24 19:49:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 24 19:49:43 compute-0 systemd[1]: Started libpod-conmon-d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5.scope.
Nov 24 19:49:43 compute-0 podman[104822]: 2025-11-24 19:49:43.718272078 +0000 UTC m=+0.039760498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:49:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca7d86a397f26e350ae3ccfb945dc42abb859596230601f8dfb619d04aed8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca7d86a397f26e350ae3ccfb945dc42abb859596230601f8dfb619d04aed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca7d86a397f26e350ae3ccfb945dc42abb859596230601f8dfb619d04aed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeca7d86a397f26e350ae3ccfb945dc42abb859596230601f8dfb619d04aed8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:49:43 compute-0 podman[104822]: 2025-11-24 19:49:43.858869092 +0000 UTC m=+0.180357522 container init d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:43 compute-0 podman[104822]: 2025-11-24 19:49:43.872723166 +0000 UTC m=+0.194211536 container start d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pasteur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:49:43 compute-0 podman[104822]: 2025-11-24 19:49:43.87701327 +0000 UTC m=+0.198501700 container attach d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 24 19:49:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 24 19:49:44 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:49:44 compute-0 ceph-mon[75677]: osdmap e51: 3 total, 3 up, 3 in
Nov 24 19:49:44 compute-0 ceph-mon[75677]: 4.6 scrub starts
Nov 24 19:49:44 compute-0 ceph-mon[75677]: 4.6 scrub ok
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=51/52 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.e( v 50'17 lc 35'7 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.d( v 50'17 lc 35'9 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=51/52 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=51/52 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.15( v 50'17 lc 35'5 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=51/52 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[10.9( v 50'17 lc 35'15 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=51/52 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [0] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=51/52 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [0] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=51/52 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=51/52 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=51/52 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=51) [2] r=0 lpr=51 pi=[45,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [2] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=51/52 n=0 ec=41/14 lis/c=41/41 les/c/f=42/42/0 sis=51) [2] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=51/52 n=1 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=51/52 n=0 ec=41/12 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=51/52 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.3( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.d( v 44'39 lc 40'13 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.f( v 44'39 lc 40'1 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.1( v 44'39 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.5( v 44'39 lc 40'11 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.7( v 44'39 lc 40'21 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[10.14( v 50'17 lc 35'13 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[6.b( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=51) [1] r=0 lpr=51 pi=[45,51)/1 crt=44'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=51/52 n=0 ec=43/16 lis/c=43/43 les/c/f=45/45/0 sis=51) [1] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 16 unknown, 73 peering, 216 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:44 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 24 19:49:44 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 24 19:49:44 compute-0 happy_pasteur[104838]: {
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "osd_id": 2,
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "type": "bluestore"
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:     },
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "osd_id": 1,
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "type": "bluestore"
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:     },
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "osd_id": 0,
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:         "type": "bluestore"
Nov 24 19:49:44 compute-0 happy_pasteur[104838]:     }
Nov 24 19:49:44 compute-0 happy_pasteur[104838]: }
Nov 24 19:49:44 compute-0 systemd[1]: libpod-d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5.scope: Deactivated successfully.
Nov 24 19:49:44 compute-0 systemd[1]: libpod-d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5.scope: Consumed 1.064s CPU time.
Nov 24 19:49:44 compute-0 podman[104822]: 2025-11-24 19:49:44.945298456 +0000 UTC m=+1.266786826 container died d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebeca7d86a397f26e350ae3ccfb945dc42abb859596230601f8dfb619d04aed8-merged.mount: Deactivated successfully.
Nov 24 19:49:45 compute-0 podman[104822]: 2025-11-24 19:49:45.022859796 +0000 UTC m=+1.344348176 container remove d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:49:45 compute-0 systemd[1]: libpod-conmon-d3393ac1949c3aecb25f4efba261d40954a574a1cdb6ae50836eeea57ff5cfe5.scope: Deactivated successfully.
Nov 24 19:49:45 compute-0 sudo[104719]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:49:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:49:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2f2e1d97-789c-4ef6-b103-5acf03227871 does not exist
Nov 24 19:49:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6d9c8c0a-e1e4-4d9e-9e05-42de51b6e079 does not exist
Nov 24 19:49:45 compute-0 sudo[104887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:49:45 compute-0 sudo[104887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:45 compute-0 sudo[104887]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 24 19:49:45 compute-0 ceph-mon[75677]: 2.c scrub starts
Nov 24 19:49:45 compute-0 ceph-mon[75677]: 2.c scrub ok
Nov 24 19:49:45 compute-0 ceph-mon[75677]: osdmap e52: 3 total, 3 up, 3 in
Nov 24 19:49:45 compute-0 ceph-mon[75677]: pgmap v113: 305 pgs: 16 unknown, 73 peering, 216 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:49:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:49:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 24 19:49:45 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 24 19:49:45 compute-0 sudo[104912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:49:45 compute-0 sudo[104912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:49:45 compute-0 sudo[104912]: pam_unix(sudo:session): session closed for user root
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 53 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 24 19:49:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 24 19:49:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 24 19:49:46 compute-0 ceph-mon[75677]: 2.e scrub starts
Nov 24 19:49:46 compute-0 ceph-mon[75677]: 2.e scrub ok
Nov 24 19:49:46 compute-0 ceph-mon[75677]: osdmap e53: 3 total, 3 up, 3 in
Nov 24 19:49:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 24 19:49:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.014223099s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.460021973s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.014138222s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.460021973s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.015092850s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.461242676s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.014950752s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.461242676s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.018884659s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.465538025s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.018626213s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.465576172s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.013029099s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.460174561s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.018278122s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.465538025s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.018394470s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.465576172s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.012895584s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.460174561s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.013537407s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.461105347s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.013394356s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.461128235s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.017910957s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.465843201s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.013295174s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.461128235s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.013462067s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.461105347s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.017848015s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.465843201s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.012569427s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.461059570s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.012129784s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.461059570s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.011890411s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.461219788s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.010704994s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.460105896s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.011788368s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.461219788s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.010430336s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.460105896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.015575409s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.465469360s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.016187668s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.465782166s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.015525818s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.465469360s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=52/53 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.015671730s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.465782166s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.009107590s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.459892273s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.014789581s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.465614319s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.008995056s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.459892273s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.014359474s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.465614319s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.008738518s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 92.459968567s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 54 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=52/53 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.008541107s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.459968567s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 1 active+recovering+remapped, 13 active+recovery_wait+remapped, 2 active+remapped, 73 peering, 216 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75/213 objects misplaced (35.211%); 281 B/s, 2 keys/s, 5 objects/s recovering
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 54 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 24 19:49:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 24 19:49:47 compute-0 ceph-mon[75677]: 2.10 scrub starts
Nov 24 19:49:47 compute-0 ceph-mon[75677]: 2.10 scrub ok
Nov 24 19:49:47 compute-0 ceph-mon[75677]: osdmap e54: 3 total, 3 up, 3 in
Nov 24 19:49:47 compute-0 ceph-mon[75677]: pgmap v116: 305 pgs: 1 active+recovering+remapped, 13 active+recovery_wait+remapped, 2 active+remapped, 73 peering, 216 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75/213 objects misplaced (35.211%); 281 B/s, 2 keys/s, 5 objects/s recovering
Nov 24 19:49:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.5( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.11( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.9( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.b( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.3( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.1( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.1d( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.d( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.1b( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 55 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:47 compute-0 sshd-session[104843]: Invalid user system from 27.79.44.141 port 35498
Nov 24 19:49:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 24 19:49:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 24 19:49:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 24 19:49:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 24 19:49:47 compute-0 sshd-session[104843]: Connection closed by invalid user system 27.79.44.141 port 35498 [preauth]
Nov 24 19:49:48 compute-0 ceph-mon[75677]: osdmap e55: 3 total, 3 up, 3 in
Nov 24 19:49:48 compute-0 ceph-mon[75677]: 3.b scrub starts
Nov 24 19:49:48 compute-0 ceph-mon[75677]: 3.b scrub ok
Nov 24 19:49:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 1 active+recovering+remapped, 13 active+recovery_wait+remapped, 2 active+remapped, 73 peering, 216 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.2 KiB/s wr, 136 op/s; 75/213 objects misplaced (35.211%); 277 B/s, 2 keys/s, 5 objects/s recovering
Nov 24 19:49:48 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 24 19:49:48 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 24 19:49:49 compute-0 ceph-mon[75677]: 4.b scrub starts
Nov 24 19:49:49 compute-0 ceph-mon[75677]: 4.b scrub ok
Nov 24 19:49:49 compute-0 ceph-mon[75677]: pgmap v118: 305 pgs: 1 active+recovering+remapped, 13 active+recovery_wait+remapped, 2 active+remapped, 73 peering, 216 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.2 KiB/s wr, 136 op/s; 75/213 objects misplaced (35.211%); 277 B/s, 2 keys/s, 5 objects/s recovering
Nov 24 19:49:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 24 19:49:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 24 19:49:49 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 24 19:49:49 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 24 19:49:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 522 B/s, 1 keys/s, 17 objects/s recovering
Nov 24 19:49:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 24 19:49:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 19:49:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 24 19:49:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 19:49:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 24 19:49:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 19:49:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 19:49:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 24 19:49:50 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 24 19:49:50 compute-0 ceph-mon[75677]: 2.12 scrub starts
Nov 24 19:49:50 compute-0 ceph-mon[75677]: 2.12 scrub ok
Nov 24 19:49:50 compute-0 ceph-mon[75677]: 3.d scrub starts
Nov 24 19:49:50 compute-0 ceph-mon[75677]: 3.d scrub ok
Nov 24 19:49:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 19:49:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 19:49:50 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 24 19:49:50 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 24 19:49:51 compute-0 ceph-mon[75677]: 2.14 scrub starts
Nov 24 19:49:51 compute-0 ceph-mon[75677]: 2.14 scrub ok
Nov 24 19:49:51 compute-0 ceph-mon[75677]: pgmap v119: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 92 op/s; 522 B/s, 1 keys/s, 17 objects/s recovering
Nov 24 19:49:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 19:49:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 19:49:51 compute-0 ceph-mon[75677]: osdmap e56: 3 total, 3 up, 3 in
Nov 24 19:49:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:51 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 24 19:49:51 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217325211s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 100.746093750s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.6( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217363358s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 100.746490479s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.6( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217317581s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.746490479s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.2( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217208862s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 100.746482849s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.e( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217215538s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 100.746604919s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.e( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217170715s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.746604919s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217233658s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.746093750s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:51 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 56 pg[6.2( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.217161179s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.746482849s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:51 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:51 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 56 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:51 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 56 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:51 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 56 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 13 objects/s recovering
Nov 24 19:49:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 24 19:49:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 19:49:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 24 19:49:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 19:49:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 24 19:49:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 19:49:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 19:49:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 24 19:49:52 compute-0 ceph-mon[75677]: 2.1a scrub starts
Nov 24 19:49:52 compute-0 ceph-mon[75677]: 2.1a scrub ok
Nov 24 19:49:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 19:49:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 19:49:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.3( v 44'39 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=15.912396431s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 44'39 active pruub 99.433944702s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.3( v 44'39 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=15.912135124s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 99.433944702s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=15.912320137s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 44'39 active pruub 99.434455872s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=15.912230492s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 99.434455872s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.7( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=15.912169456s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 44'39 active pruub 99.434700012s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.7( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=15.912118912s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 99.434700012s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/53/0 sis=57 pruub=15.911815643s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 44'39 active pruub 99.434860229s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/53/0 sis=57 pruub=15.911777496s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 99.434860229s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.6( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=56/57 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=44'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:52 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:52 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:52 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/53/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:52 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.2( v 44'39 (0'0,44'39] local-lis/les=56/57 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=56/57 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:52 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 57 pg[6.e( v 44'39 lc 40'19 (0'0,44'39] local-lis/les=56/57 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 24 19:49:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 24 19:49:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 24 19:49:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 24 19:49:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 24 19:49:53 compute-0 ceph-mon[75677]: 2.1e scrub starts
Nov 24 19:49:53 compute-0 ceph-mon[75677]: 2.1e scrub ok
Nov 24 19:49:53 compute-0 ceph-mon[75677]: pgmap v121: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 13 objects/s recovering
Nov 24 19:49:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 19:49:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 19:49:53 compute-0 ceph-mon[75677]: osdmap e57: 3 total, 3 up, 3 in
Nov 24 19:49:53 compute-0 ceph-mon[75677]: 3.10 scrub starts
Nov 24 19:49:53 compute-0 ceph-mon[75677]: 3.10 scrub ok
Nov 24 19:49:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 24 19:49:53 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 24 19:49:53 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 58 pg[6.b( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=51/51 les/c/f=52/53/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:53 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 58 pg[6.7( v 44'39 lc 40'21 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:53 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 58 pg[6.3( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=57/58 n=2 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=44'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:53 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 58 pg[6.f( v 44'39 lc 40'1 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 440 B/s, 1 keys/s, 14 objects/s recovering
Nov 24 19:49:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 24 19:49:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 19:49:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 24 19:49:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 19:49:54 compute-0 ceph-mon[75677]: 4.c scrub starts
Nov 24 19:49:54 compute-0 ceph-mon[75677]: 4.c scrub ok
Nov 24 19:49:54 compute-0 ceph-mon[75677]: osdmap e58: 3 total, 3 up, 3 in
Nov 24 19:49:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 19:49:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 19:49:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:49:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 19:49:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 19:49:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 24 19:49:54 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:49:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 24 19:49:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 24 19:49:54 compute-0 ceph-mgr[75975]: [progress INFO root] Completed event f599a5f6-f456-48e8-9a4f-efbf0fd99188 (Global Recovery Event) in 30 seconds
Nov 24 19:49:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 24 19:49:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 24 19:49:54 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 59 pg[6.4( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=9.315988541s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 100.746070862s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:54 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 59 pg[6.4( v 44'39 (0'0,44'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=9.315928459s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.746070862s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:54 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 59 pg[6.c( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=9.315639496s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 100.746711731s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:49:54 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 59 pg[6.c( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59 pruub=9.315563202s) [1] r=-1 lpr=59 pi=[45,59)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.746711731s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:49:54 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 59 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:54 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 59 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:49:54 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 24 19:49:54 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 24 19:49:55 compute-0 ceph-mon[75677]: pgmap v124: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 440 B/s, 1 keys/s, 14 objects/s recovering
Nov 24 19:49:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 19:49:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 19:49:55 compute-0 ceph-mon[75677]: osdmap e59: 3 total, 3 up, 3 in
Nov 24 19:49:55 compute-0 ceph-mon[75677]: 4.15 scrub starts
Nov 24 19:49:55 compute-0 ceph-mon[75677]: 4.15 scrub ok
Nov 24 19:49:55 compute-0 ceph-mon[75677]: 3.13 scrub starts
Nov 24 19:49:55 compute-0 ceph-mon[75677]: 3.13 scrub ok
Nov 24 19:49:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 24 19:49:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 24 19:49:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 24 19:49:55 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 60 pg[6.4( v 44'39 lc 40'15 (0'0,44'39] local-lis/les=59/60 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:55 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 60 pg[6.c( v 44'39 lc 40'17 (0'0,44'39] local-lis/les=59/60 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=59) [1] r=0 lpr=59 pi=[45,59)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:49:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 24 19:49:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 24 19:49:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/213 objects degraded (0.469%); 2/213 objects misplaced (0.939%); 181 B/s, 2 keys/s, 2 objects/s recovering
Nov 24 19:49:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/213 objects degraded (0.469%), 1 pg degraded (PG_DEGRADED)
Nov 24 19:49:56 compute-0 ceph-mon[75677]: 5.6 deep-scrub starts
Nov 24 19:49:56 compute-0 ceph-mon[75677]: 5.6 deep-scrub ok
Nov 24 19:49:56 compute-0 ceph-mon[75677]: osdmap e60: 3 total, 3 up, 3 in
Nov 24 19:49:56 compute-0 ceph-mon[75677]: 4.16 scrub starts
Nov 24 19:49:56 compute-0 ceph-mon[75677]: 4.16 scrub ok
Nov 24 19:49:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:49:57 compute-0 ceph-mon[75677]: pgmap v127: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/213 objects degraded (0.469%); 2/213 objects misplaced (0.939%); 181 B/s, 2 keys/s, 2 objects/s recovering
Nov 24 19:49:57 compute-0 ceph-mon[75677]: Health check failed: Degraded data redundancy: 1/213 objects degraded (0.469%), 1 pg degraded (PG_DEGRADED)
Nov 24 19:49:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/213 objects degraded (0.469%); 2/213 objects misplaced (0.939%); 122 B/s, 1 keys/s, 1 objects/s recovering
Nov 24 19:49:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 24 19:49:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 24 19:49:58 compute-0 ceph-mon[75677]: 4.17 scrub starts
Nov 24 19:49:58 compute-0 ceph-mon[75677]: 4.17 scrub ok
Nov 24 19:49:58 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 24 19:49:58 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 24 19:49:59 compute-0 ceph-mon[75677]: pgmap v128: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/213 objects degraded (0.469%); 2/213 objects misplaced (0.939%); 122 B/s, 1 keys/s, 1 objects/s recovering
Nov 24 19:49:59 compute-0 ceph-mgr[75975]: [progress INFO root] Writing back 16 completed events
Nov 24 19:49:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 19:49:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/213 objects degraded (0.469%); 2/213 objects misplaced (0.939%); 104 B/s, 1 keys/s, 1 objects/s recovering
Nov 24 19:50:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Nov 24 19:50:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Nov 24 19:50:00 compute-0 ceph-mon[75677]: 5.8 scrub starts
Nov 24 19:50:00 compute-0 ceph-mon[75677]: 5.8 scrub ok
Nov 24 19:50:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 24 19:50:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 24 19:50:01 compute-0 ceph-mon[75677]: pgmap v129: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/213 objects degraded (0.469%); 2/213 objects misplaced (0.939%); 104 B/s, 1 keys/s, 1 objects/s recovering
Nov 24 19:50:01 compute-0 ceph-mon[75677]: 3.14 deep-scrub starts
Nov 24 19:50:01 compute-0 ceph-mon[75677]: 3.14 deep-scrub ok
Nov 24 19:50:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 195 B/s, 0 objects/s recovering
Nov 24 19:50:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 19:50:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 19:50:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 24 19:50:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 24 19:50:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 24 19:50:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 24 19:50:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/213 objects degraded (0.469%), 1 pg degraded)
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 19:50:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 24 19:50:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 24 19:50:02 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 61 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=13.751882553s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=44'39 mlcod 44'39 active pruub 107.434288025s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:02 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 61 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=13.751791000s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 107.434288025s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:02 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 61 pg[6.5( v 44'39 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=13.751971245s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=44'39 mlcod 44'39 active pruub 107.434814453s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:02 compute-0 ceph-mon[75677]: 3.19 scrub starts
Nov 24 19:50:02 compute-0 ceph-mon[75677]: 3.19 scrub ok
Nov 24 19:50:02 compute-0 ceph-mon[75677]: pgmap v130: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 195 B/s, 0 objects/s recovering
Nov 24 19:50:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 19:50:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 19:50:02 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 61 pg[6.5( v 44'39 (0'0,44'39] local-lis/les=51/52 n=2 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=13.750669479s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 107.434814453s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:02 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 61 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:02 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 61 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:02 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Nov 24 19:50:02 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Nov 24 19:50:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 24 19:50:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 24 19:50:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 24 19:50:03 compute-0 ceph-mon[75677]: 3.1a scrub starts
Nov 24 19:50:03 compute-0 ceph-mon[75677]: 3.1a scrub ok
Nov 24 19:50:03 compute-0 ceph-mon[75677]: 4.19 deep-scrub starts
Nov 24 19:50:03 compute-0 ceph-mon[75677]: 4.19 deep-scrub ok
Nov 24 19:50:03 compute-0 ceph-mon[75677]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/213 objects degraded (0.469%), 1 pg degraded)
Nov 24 19:50:03 compute-0 ceph-mon[75677]: Cluster is now healthy
Nov 24 19:50:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 19:50:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 19:50:03 compute-0 ceph-mon[75677]: osdmap e61: 3 total, 3 up, 3 in
Nov 24 19:50:03 compute-0 ceph-mon[75677]: 5.a deep-scrub starts
Nov 24 19:50:03 compute-0 ceph-mon[75677]: 5.a deep-scrub ok
Nov 24 19:50:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 24 19:50:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 24 19:50:03 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 62 pg[6.5( v 44'39 lc 40'11 (0'0,44'39] local-lis/les=61/62 n=2 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:03 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 62 pg[6.d( v 44'39 lc 40'13 (0'0,44'39] local-lis/les=61/62 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 204 B/s, 0 objects/s recovering
Nov 24 19:50:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 24 19:50:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 19:50:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 24 19:50:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 19:50:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 24 19:50:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 19:50:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 19:50:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 24 19:50:04 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 24 19:50:04 compute-0 ceph-mon[75677]: 3.1c scrub starts
Nov 24 19:50:04 compute-0 ceph-mon[75677]: 3.1c scrub ok
Nov 24 19:50:04 compute-0 ceph-mon[75677]: osdmap e62: 3 total, 3 up, 3 in
Nov 24 19:50:04 compute-0 ceph-mon[75677]: pgmap v133: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 204 B/s, 0 objects/s recovering
Nov 24 19:50:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 19:50:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 19:50:04 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 24 19:50:04 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 24 19:50:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 24 19:50:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 24 19:50:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 19:50:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 19:50:05 compute-0 ceph-mon[75677]: osdmap e63: 3 total, 3 up, 3 in
Nov 24 19:50:05 compute-0 ceph-mon[75677]: 5.b scrub starts
Nov 24 19:50:05 compute-0 ceph-mon[75677]: 5.b scrub ok
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.438899040s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 105.265617371s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.438828468s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.265617371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.443140984s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 105.270767212s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.443097115s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.270767212s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.443050385s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 105.270805359s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.443006516s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.270805359s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:05 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.442876816s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 105.271446228s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:05 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:05 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 63 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.442824364s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.271446228s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:05 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:05 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 272 B/s, 1 objects/s recovering
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 24 19:50:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 24 19:50:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 24 19:50:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 19:50:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 24 19:50:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734837532s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 active pruub 115.993209839s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734763145s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 115.993209839s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734805107s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 active pruub 115.993316650s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734751701s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 115.993316650s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734351158s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 active pruub 115.993446350s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734328270s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 115.993446350s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734182358s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 active pruub 115.993782043s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 64 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=12.734150887s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 115.993782043s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-mon[75677]: 7.7 scrub starts
Nov 24 19:50:06 compute-0 ceph-mon[75677]: 7.7 scrub ok
Nov 24 19:50:06 compute-0 ceph-mon[75677]: pgmap v135: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 272 B/s, 1 objects/s recovering
Nov 24 19:50:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 19:50:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=-1 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 64 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 24 19:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 24 19:50:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=54/55 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:06 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 65 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 65 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=64/65 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 65 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=64/65 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 65 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=64/65 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:06 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 65 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=64/65 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2]/[1] async=[2] r=0 lpr=64 pi=[47,64)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 19:50:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 19:50:07 compute-0 ceph-mon[75677]: osdmap e64: 3 total, 3 up, 3 in
Nov 24 19:50:07 compute-0 ceph-mon[75677]: osdmap e65: 3 total, 3 up, 3 in
Nov 24 19:50:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 24 19:50:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 24 19:50:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 66 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=64/65 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.942723274s) [2] async=[2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 113.928703308s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=64/65 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.942870140s) [2] async=[2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 113.928726196s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=64/65 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.942595482s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.928703308s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=64/65 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.942447662s) [2] async=[2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 113.928749084s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=64/65 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.942368507s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.928749084s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=64/65 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.942546844s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.928726196s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=64/65 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.941392899s) [2] async=[2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 113.927421570s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:07 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 66 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=64/65 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66 pruub=14.940088272s) [2] r=-1 lpr=66 pi=[47,66)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.927421570s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:07 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 66 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=65/66 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:07 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 66 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=65/66 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:07 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 66 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=65/66 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:07 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 66 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=65/66 n=6 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 24 19:50:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 19:50:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 24 19:50:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 19:50:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 24 19:50:08 compute-0 ceph-mon[75677]: osdmap e66: 3 total, 3 up, 3 in
Nov 24 19:50:08 compute-0 ceph-mon[75677]: pgmap v139: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 19:50:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 19:50:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 19:50:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 19:50:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 24 19:50:08 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 24 19:50:08 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 67 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=13.299493790s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 113.270988464s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 67 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=13.299333572s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.270988464s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 67 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=13.299491882s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 113.271354675s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 67 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=67 pruub=13.299410820s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.271354675s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[6.8( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=11.250866890s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 116.746871948s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.032426834s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 38'385 active pruub 120.528549194s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[6.8( v 44'39 (0'0,44'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=67 pruub=11.250790596s) [2] r=-1 lpr=67 pi=[45,67)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.746871948s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.032354355s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 120.528549194s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.038229942s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 38'385 active pruub 120.535202026s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.038109779s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 120.535202026s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=65/66 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.031095505s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 38'385 active pruub 120.528480530s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=65/66 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.031128883s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 38'385 active pruub 120.528541565s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=65/66 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.030985832s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 120.528480530s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 67 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=65/66 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.031000137s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 120.528541565s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.6( v 38'385 (0'0,38'385] local-lis/les=66/67 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:08 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 67 pg[9.e( v 38'385 (0'0,38'385] local-lis/les=66/67 n=6 ec=47/32 lis/c=64/47 les/c/f=65/48/0 sis=66) [2] r=0 lpr=66 pi=[47,66)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 24 19:50:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 19:50:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 19:50:09 compute-0 ceph-mon[75677]: osdmap e67: 3 total, 3 up, 3 in
Nov 24 19:50:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 24 19:50:09 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 24 19:50:09 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 68 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:09 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 68 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:09 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 68 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:09 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 68 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=0 lpr=68 pi=[47,68)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
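Note: the osd.1/osd.2 entries above walk Ceph's PG peering state machine. Each new osdmap epoch that changes a PG's acting set fires start_peering_interval, after which the OSD holding role r=0 transitions to Primary while an OSD with r=-1 transitions to Stray. A minimal sketch of that role-to-state mapping, assuming only what these lines show (the helper name is illustrative, not Ceph source):

    # Illustrative only: map the r= field in the log lines above to the
    # state each OSD enters after start_peering_interval.
    def state_after_peering(role: int) -> str:
        # r=0  -> this OSD is primary for the PG -> state<Start> -> Primary
        # r=-1 -> this OSD left the acting set   -> state<Start> -> Stray
        return "Primary" if role == 0 else "Stray"

    assert state_after_peering(0) == "Primary"   # e.g. pg[9.8] on osd.1 at e68
    assert state_after_peering(-1) == "Stray"    # e.g. pg[9.8] on osd.2 at e68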
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.7( v 38'385 (0'0,38'385] local-lis/les=67/68 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.17( v 38'385 (0'0,38'385] local-lis/les=67/68 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[6.8( v 44'39 (0'0,44'39] local-lis/les=67/68 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=67) [2] r=0 lpr=67 pi=[45,67)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=67/68 n=5 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:09 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 68 pg[9.f( v 38'385 (0'0,38'385] local-lis/les=67/68 n=6 ec=47/32 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 24 19:50:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 19:50:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 24 19:50:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 19:50:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 24 19:50:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 24 19:50:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 24 19:50:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 19:50:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
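Note: the mgr is raising pgp_num_actual for cephfs.cephfs.meta and default.rgw.log one step per cycle (9 through 13 across epochs e67-e75 in this section); each "osd pool set" lands a new osdmap epoch and the brief repeering bursts seen on the OSDs. A hedged sketch of that stepwise ramp, based only on the audit entries here (the function is illustrative, not ceph-mgr code):

    # Illustrative only: the stepwise pgp_num_actual ramp the audit log shows,
    # one "osd pool set ... pgp_num_actual" command per step and per pool.
    def ramp_steps(current: int, target: int) -> list[int]:
        return list(range(current + 1, target + 1))

    assert ramp_steps(9, 13) == [10, 11, 12, 13]  # the values dispatched above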
Nov 24 19:50:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 24 19:50:10 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 24 19:50:10 compute-0 ceph-mon[75677]: osdmap e68: 3 total, 3 up, 3 in
Nov 24 19:50:10 compute-0 ceph-mon[75677]: pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 19:50:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 19:50:10 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 69 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=13.432654381s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 115.435127258s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:10 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 69 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=51/52 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=13.432593346s) [0] r=-1 lpr=69 pi=[51,69)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 115.435127258s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:10 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:11 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 69 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=68/69 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[47,68)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:11 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 69 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=68/69 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[47,68)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:11 compute-0 sudo[104960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gemfhbbwyljwisqluyoaocznoierxfbq ; /usr/bin/python3'
Nov 24 19:50:11 compute-0 sudo[104960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:50:11 compute-0 python3[104962]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.576717006 +0000 UTC m=+0.046354461 container create 5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301 (image=quay.io/ceph/ceph:v18, name=silly_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:50:11 compute-0 systemd[1]: Started libpod-conmon-5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301.scope.
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.556224915 +0000 UTC m=+0.025862450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:50:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832940ba2f5c795785e942e4ae0b758ae51967095c7593b47add4d1cbf26432f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832940ba2f5c795785e942e4ae0b758ae51967095c7593b47add4d1cbf26432f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.67939539 +0000 UTC m=+0.149032905 container init 5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301 (image=quay.io/ceph/ceph:v18, name=silly_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.692387301 +0000 UTC m=+0.162024786 container start 5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301 (image=quay.io/ceph/ceph:v18, name=silly_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.696803445 +0000 UTC m=+0.166440990 container attach 5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301 (image=quay.io/ceph/ceph:v18, name=silly_perlman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:50:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 24 19:50:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 24 19:50:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 24 19:50:11 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 70 pg[6.9( v 44'39 (0'0,44'39] local-lis/les=69/70 n=1 ec=45/19 lis/c=51/51 les/c/f=52/52/0 sis=69) [0] r=0 lpr=69 pi=[51,69)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:11 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 70 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.274371147s) [2] async=[2] r=-1 lpr=70 pi=[47,70)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 118.201286316s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:11 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 70 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.274265289s) [2] r=-1 lpr=70 pi=[47,70)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.201286316s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:11 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 70 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=68/69 n=5 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.269223213s) [2] async=[2] r=-1 lpr=70 pi=[47,70)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 118.197853088s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:11 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 70 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=68/69 n=5 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70 pruub=15.269134521s) [2] r=-1 lpr=70 pi=[47,70)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.197853088s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:11 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 70 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:11 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 70 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:11 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 70 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:11 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 70 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:11 compute-0 ceph-mon[75677]: 7.b scrub starts
Nov 24 19:50:11 compute-0 ceph-mon[75677]: 7.b scrub ok
Nov 24 19:50:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 19:50:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 19:50:11 compute-0 ceph-mon[75677]: osdmap e69: 3 total, 3 up, 3 in
Nov 24 19:50:11 compute-0 ceph-mon[75677]: osdmap e70: 3 total, 3 up, 3 in
Nov 24 19:50:11 compute-0 silly_perlman[104979]: could not fetch user info: no user info saved
Nov 24 19:50:11 compute-0 systemd[1]: libpod-5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301.scope: Deactivated successfully.
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.936783455 +0000 UTC m=+0.406421000 container died 5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301 (image=quay.io/ceph/ceph:v18, name=silly_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-832940ba2f5c795785e942e4ae0b758ae51967095c7593b47add4d1cbf26432f-merged.mount: Deactivated successfully.
Nov 24 19:50:11 compute-0 podman[104963]: 2025-11-24 19:50:11.984011174 +0000 UTC m=+0.453648659 container remove 5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301 (image=quay.io/ceph/ceph:v18, name=silly_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:50:12 compute-0 systemd[1]: libpod-conmon-5ac37c9ad59038b614768c9c278cb83c34171e0eae8a2b017f9b30f5c2b40301.scope: Deactivated successfully.
Nov 24 19:50:12 compute-0 sudo[104960]: pam_unix(sudo:session): session closed for user root
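Note: the podman events above (image pull, container create/init/start/attach, died, remove, plus the libpod scope teardown) are the complete lifecycle of a single `podman run --rm --entrypoint radosgw-admin ... user info --uid openstack` issued by the Ansible task at 19:50:11. Its failure ("could not fetch user info: no user info saved") is what triggers the `user create` run that follows at 19:50:12. A sketch of that check-then-create step, using only flags recorded in the log (abridged: the --fsid/-c/-k arguments shown above are omitted here):

    # Illustrative only: the check-then-create pattern these Ansible tasks follow.
    import subprocess

    check = subprocess.run(
        ["podman", "run", "--rm", "--net=host", "--ipc=host",
         "--volume", "/etc/ceph:/etc/ceph:z",
         "--entrypoint", "radosgw-admin", "quay.io/ceph/ceph:v18",
         "user", "info", "--uid", "openstack"],
        capture_output=True, text=True,
    )
    if check.returncode != 0:
        # "could not fetch user info: no user info saved" -> run "user create"
        pass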
Nov 24 19:50:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 24 19:50:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 24 19:50:12 compute-0 sudo[105100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylvpotldufoatfjxmkfvqwpegvyuegv ; /usr/bin/python3'
Nov 24 19:50:12 compute-0 sudo[105100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:50:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Nov 24 19:50:12 compute-0 python3[105102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:50:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 24 19:50:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.457351175 +0000 UTC m=+0.067619888 container create bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019 (image=quay.io/ceph/ceph:v18, name=youthful_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 19:50:12 compute-0 systemd[1]: Started libpod-conmon-bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019.scope.
Nov 24 19:50:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.428769573 +0000 UTC m=+0.039038326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 19:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676100752d6ed0ab09fea6d064f175c1b9ad83b3a041f21fdcb185a87d3d9ccf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676100752d6ed0ab09fea6d064f175c1b9ad83b3a041f21fdcb185a87d3d9ccf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.536815544 +0000 UTC m=+0.147084297 container init bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019 (image=quay.io/ceph/ceph:v18, name=youthful_jemison, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.546262312 +0000 UTC m=+0.156531015 container start bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019 (image=quay.io/ceph/ceph:v18, name=youthful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.550565012 +0000 UTC m=+0.160833755 container attach bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019 (image=quay.io/ceph/ceph:v18, name=youthful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 19:50:12 compute-0 youthful_jemison[105118]: {
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "user_id": "openstack",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "display_name": "openstack",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "email": "",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "suspended": 0,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "max_buckets": 1000,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "subusers": [],
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "keys": [
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         {
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:             "user": "openstack",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:             "access_key": "VJHKMB0IN3VE2O8KWY7X",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:             "secret_key": "syHpApkHOEVgwV3Egq7uudulgV05TdQdC8QtTHON"
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         }
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     ],
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "swift_keys": [],
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "caps": [],
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "op_mask": "read, write, delete",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "default_placement": "",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "default_storage_class": "",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "placement_tags": [],
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "bucket_quota": {
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "enabled": false,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "check_on_raw": false,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "max_size": -1,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "max_size_kb": 0,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "max_objects": -1
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     },
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "user_quota": {
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "enabled": false,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "check_on_raw": false,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "max_size": -1,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "max_size_kb": 0,
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:         "max_objects": -1
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     },
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "temp_url_keys": [],
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "type": "rgw",
Nov 24 19:50:12 compute-0 youthful_jemison[105118]:     "mfa_ids": []
Nov 24 19:50:12 compute-0 youthful_jemison[105118]: }
Nov 24 19:50:12 compute-0 youthful_jemison[105118]: 
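Note: the JSON block above is radosgw-admin's record of the newly created "openstack" RGW user; the keys[0] access/secret pair is the credential the deployment hands to OpenStack. A minimal sketch of extracting it from the captured stdout, assuming only the JSON shape printed above (abridged to the fields used):

    # Illustrative only: pull the S3 credentials out of the "user create" output.
    import json

    container_stdout = (
        '{"user_id": "openstack", "keys": [{"user": "openstack",'
        ' "access_key": "VJHKMB0IN3VE2O8KWY7X",'
        ' "secret_key": "syHpApkHOEVgwV3Egq7uudulgV05TdQdC8QtTHON"}]}'
    )
    key = json.loads(container_stdout)["keys"][0]
    access_key, secret_key = key["access_key"], key["secret_key"]
    assert access_key == "VJHKMB0IN3VE2O8KWY7X"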
Nov 24 19:50:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 24 19:50:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 24 19:50:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 24 19:50:12 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 71 pg[9.8( v 38'385 (0'0,38'385] local-lis/les=70/71 n=6 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:12 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 71 pg[9.18( v 38'385 (0'0,38'385] local-lis/les=70/71 n=5 ec=47/32 lis/c=68/47 les/c/f=69/48/0 sis=70) [2] r=0 lpr=70 pi=[47,70)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:12 compute-0 systemd[1]: libpod-bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019.scope: Deactivated successfully.
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.807051545 +0000 UTC m=+0.417320228 container died bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019 (image=quay.io/ceph/ceph:v18, name=youthful_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-676100752d6ed0ab09fea6d064f175c1b9ad83b3a041f21fdcb185a87d3d9ccf-merged.mount: Deactivated successfully.
Nov 24 19:50:12 compute-0 podman[105103]: 2025-11-24 19:50:12.852527453 +0000 UTC m=+0.462796136 container remove bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019 (image=quay.io/ceph/ceph:v18, name=youthful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:50:12 compute-0 ceph-mon[75677]: 5.d scrub starts
Nov 24 19:50:12 compute-0 ceph-mon[75677]: 5.d scrub ok
Nov 24 19:50:12 compute-0 ceph-mon[75677]: pgmap v145: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Nov 24 19:50:12 compute-0 ceph-mon[75677]: osdmap e71: 3 total, 3 up, 3 in
Nov 24 19:50:12 compute-0 systemd[1]: libpod-conmon-bc353b9134a9822e00906f2ad7e13af43345d128da6a6b83fe4c53e020a36019.scope: Deactivated successfully.
Nov 24 19:50:12 compute-0 sudo[105100]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 24 19:50:13 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 24 19:50:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 24 19:50:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 24 19:50:13 compute-0 ceph-mon[75677]: 7.d scrub starts
Nov 24 19:50:13 compute-0 ceph-mon[75677]: 7.d scrub ok
Nov 24 19:50:13 compute-0 ceph-mon[75677]: 5.e scrub starts
Nov 24 19:50:13 compute-0 ceph-mon[75677]: 5.e scrub ok
Nov 24 19:50:13 compute-0 ceph-mon[75677]: 4.1d scrub starts
Nov 24 19:50:13 compute-0 ceph-mon[75677]: 4.1d scrub ok
Nov 24 19:50:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 913 B/s rd, 0 op/s; 171 B/s, 8 objects/s recovering
Nov 24 19:50:14 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 24 19:50:14 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 24 19:50:15 compute-0 ceph-mon[75677]: pgmap v147: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 913 B/s rd, 0 op/s; 171 B/s, 8 objects/s recovering
Nov 24 19:50:16 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 24 19:50:16 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 24 19:50:16 compute-0 ceph-mon[75677]: 5.10 scrub starts
Nov 24 19:50:16 compute-0 ceph-mon[75677]: 5.10 scrub ok
Nov 24 19:50:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s; 164 B/s, 8 objects/s recovering
Nov 24 19:50:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 24 19:50:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 19:50:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 24 19:50:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 19:50:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 24 19:50:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 19:50:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 19:50:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 24 19:50:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 24 19:50:17 compute-0 ceph-mon[75677]: 5.17 scrub starts
Nov 24 19:50:17 compute-0 ceph-mon[75677]: 5.17 scrub ok
Nov 24 19:50:17 compute-0 ceph-mon[75677]: pgmap v148: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s; 164 B/s, 8 objects/s recovering
Nov 24 19:50:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 19:50:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 19:50:17 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 72 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=56/57 n=1 ec=45/19 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=15.003329277s) [0] r=-1 lpr=72 pi=[56,72)/1 crt=44'39 lcod 0'0 mlcod 0'0 active pruub 123.532997131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:17 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=56/56 les/c/f=57/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:17 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 72 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=56/57 n=1 ec=45/19 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=15.003288269s) [0] r=-1 lpr=72 pi=[56,72)/1 crt=44'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.532997131s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 24 19:50:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 24 19:50:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 24 19:50:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 24 19:50:18 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 24 19:50:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 19:50:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 19:50:18 compute-0 ceph-mon[75677]: osdmap e72: 3 total, 3 up, 3 in
Nov 24 19:50:18 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 73 pg[6.a( v 44'39 (0'0,44'39] local-lis/les=72/73 n=1 ec=45/19 lis/c=56/56 les/c/f=57/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=44'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 24 19:50:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 19:50:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 24 19:50:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 19:50:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 24 19:50:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 24 19:50:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 24 19:50:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 19:50:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 19:50:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 24 19:50:19 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 24 19:50:19 compute-0 ceph-mon[75677]: 4.1e scrub starts
Nov 24 19:50:19 compute-0 ceph-mon[75677]: 4.1e scrub ok
Nov 24 19:50:19 compute-0 ceph-mon[75677]: osdmap e73: 3 total, 3 up, 3 in
Nov 24 19:50:19 compute-0 ceph-mon[75677]: pgmap v151: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 19:50:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 19:50:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 24 19:50:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 19:50:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 24 19:50:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 19:50:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 24 19:50:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 24 19:50:20 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 74 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=12.921747208s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=44'39 mlcod 44'39 active pruub 130.055541992s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:20 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 74 pg[6.b( v 44'39 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=12.921650887s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 130.055541992s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 24 19:50:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 19:50:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 19:50:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 24 19:50:20 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 24 19:50:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 75 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=9.568390846s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 121.266632080s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 75 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=9.568326950s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 121.266632080s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 75 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=9.572036743s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 121.271781921s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 75 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=9.571995735s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 121.271781921s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:20 compute-0 ceph-mon[75677]: 7.10 scrub starts
Nov 24 19:50:20 compute-0 ceph-mon[75677]: 7.10 scrub ok
Nov 24 19:50:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 19:50:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 19:50:20 compute-0 ceph-mon[75677]: osdmap e74: 3 total, 3 up, 3 in
Nov 24 19:50:20 compute-0 ceph-mon[75677]: pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 19:50:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 19:50:20 compute-0 ceph-mon[75677]: 7.12 scrub starts
Nov 24 19:50:20 compute-0 ceph-mon[75677]: 7.12 scrub ok
Nov 24 19:50:20 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:20 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 75 pg[6.b( v 44'39 lc 0'0 (0'0,44'39] local-lis/les=74/75 n=1 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:21 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 24 19:50:21 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 24 19:50:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 24 19:50:21 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 19:50:21 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 19:50:21 compute-0 ceph-mon[75677]: osdmap e75: 3 total, 3 up, 3 in
Nov 24 19:50:21 compute-0 ceph-mon[75677]: 5.1b scrub starts
Nov 24 19:50:21 compute-0 ceph-mon[75677]: 5.1b scrub ok
Nov 24 19:50:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 24 19:50:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 24 19:50:21 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:21 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:21 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:21 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[47,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:21 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 76 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:21 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 76 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:21 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 76 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:21 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 76 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=47/48 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] r=0 lpr=76 pi=[47,76)/1 crt=38'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:22 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 24 19:50:22 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 24 19:50:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 24 19:50:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 24 19:50:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 24 19:50:22 compute-0 ceph-mon[75677]: osdmap e76: 3 total, 3 up, 3 in
Nov 24 19:50:22 compute-0 ceph-mon[75677]: 5.1c deep-scrub starts
Nov 24 19:50:22 compute-0 ceph-mon[75677]: 5.1c deep-scrub ok
Nov 24 19:50:22 compute-0 ceph-mon[75677]: pgmap v156: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:22 compute-0 ceph-mon[75677]: 7.14 scrub starts
Nov 24 19:50:22 compute-0 ceph-mon[75677]: 7.14 scrub ok
Nov 24 19:50:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 24 19:50:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 24 19:50:22 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 77 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=76/77 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[47,76)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:22 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 77 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=76/77 n=5 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[47,76)/1 crt=38'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 24 19:50:23 compute-0 ceph-mon[75677]: osdmap e77: 3 total, 3 up, 3 in
Nov 24 19:50:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 24 19:50:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 24 19:50:23 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 78 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:23 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 78 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=0/0 n=6 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:23 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 78 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:23 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 78 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:23 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 78 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=76/77 n=5 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.312209129s) [2] async=[2] r=-1 lpr=78 pi=[47,78)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 130.074111938s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:23 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 78 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=76/77 n=6 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.308996201s) [2] async=[2] r=-1 lpr=78 pi=[47,78)/1 crt=38'385 lcod 0'0 mlcod 0'0 active pruub 130.070907593s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:23 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 78 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=76/77 n=6 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.308845520s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.070907593s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:23 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 78 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=76/77 n=5 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78 pruub=15.312064171s) [2] r=-1 lpr=78 pi=[47,78)/1 crt=38'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.074111938s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:50:24
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:50:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:50:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 24 19:50:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 24 19:50:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 24 19:50:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 24 19:50:24 compute-0 ceph-mon[75677]: osdmap e78: 3 total, 3 up, 3 in
Nov 24 19:50:24 compute-0 ceph-mon[75677]: pgmap v159: 305 pgs: 2 unknown, 1 peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:24 compute-0 ceph-mon[75677]: 4.1f scrub starts
Nov 24 19:50:24 compute-0 ceph-mon[75677]: 4.1f scrub ok
Nov 24 19:50:24 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 24 19:50:24 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 79 pg[9.c( v 38'385 (0'0,38'385] local-lis/les=78/79 n=6 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:24 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 79 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=78/79 n=5 ec=47/32 lis/c=76/47 les/c/f=77/48/0 sis=78) [2] r=0 lpr=78 pi=[47,78)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:25 compute-0 ceph-mon[75677]: osdmap e79: 3 total, 3 up, 3 in
Nov 24 19:50:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 431 B/s wr, 14 op/s; 23 B/s, 2 objects/s recovering
Nov 24 19:50:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 24 19:50:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 19:50:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 24 19:50:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 19:50:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 24 19:50:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 24 19:50:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 24 19:50:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 24 19:50:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 24 19:50:26 compute-0 ceph-mon[75677]: pgmap v161: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 431 B/s wr, 14 op/s; 23 B/s, 2 objects/s recovering
Nov 24 19:50:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 19:50:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 19:50:26 compute-0 ceph-mon[75677]: 7.16 scrub starts
Nov 24 19:50:26 compute-0 ceph-mon[75677]: 7.16 scrub ok
Nov 24 19:50:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 19:50:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 19:50:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 24 19:50:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 24 19:50:26 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 80 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=61/62 n=1 ec=45/19 lis/c=61/61 les/c/f=62/62/0 sis=80 pruub=8.928446770s) [1] r=-1 lpr=80 pi=[61,80)/1 crt=44'39 mlcod 44'39 active pruub 132.240493774s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:26 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 80 pg[6.d( v 44'39 (0'0,44'39] local-lis/les=61/62 n=1 ec=45/19 lis/c=61/61 les/c/f=62/62/0 sis=80 pruub=8.928350449s) [1] r=-1 lpr=80 pi=[61,80)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 132.240493774s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:26 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 80 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=61/61 les/c/f=62/62/0 sis=80) [1] r=0 lpr=80 pi=[61,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:27 compute-0 sshd-session[105215]: Accepted publickey for zuul from 192.168.122.30 port 42818 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:50:27 compute-0 systemd-logind[795]: New session 34 of user zuul.
Nov 24 19:50:27 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 24 19:50:27 compute-0 sshd-session[105215]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:50:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 24 19:50:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 24 19:50:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 24 19:50:27 compute-0 ceph-mon[75677]: 5.1e scrub starts
Nov 24 19:50:27 compute-0 ceph-mon[75677]: 5.1e scrub ok
Nov 24 19:50:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 19:50:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 19:50:27 compute-0 ceph-mon[75677]: osdmap e80: 3 total, 3 up, 3 in
Nov 24 19:50:27 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 81 pg[6.d( v 44'39 lc 40'13 (0'0,44'39] local-lis/les=80/81 n=1 ec=45/19 lis/c=61/61 les/c/f=62/62/0 sis=80) [1] r=0 lpr=80 pi=[61,80)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 434 B/s wr, 14 op/s; 23 B/s, 2 objects/s recovering
Nov 24 19:50:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 24 19:50:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 19:50:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 24 19:50:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 19:50:28 compute-0 python3.9[105368]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:50:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 24 19:50:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 19:50:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 19:50:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 24 19:50:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 24 19:50:28 compute-0 ceph-mon[75677]: osdmap e81: 3 total, 3 up, 3 in
Nov 24 19:50:28 compute-0 ceph-mon[75677]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 434 B/s wr, 14 op/s; 23 B/s, 2 objects/s recovering
Nov 24 19:50:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 19:50:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 19:50:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 19:50:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 19:50:29 compute-0 ceph-mon[75677]: osdmap e82: 3 total, 3 up, 3 in
Nov 24 19:50:30 compute-0 sudo[105584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxhhdezdiaayxfippeqdwciqbsgvbkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013829.5564532-32-241040862242182/AnsiballZ_command.py'
Nov 24 19:50:30 compute-0 sudo[105584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:50:30 compute-0 python3.9[105586]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:50:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 359 B/s wr, 12 op/s; 19 B/s, 2 objects/s recovering
Nov 24 19:50:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 24 19:50:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 19:50:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 24 19:50:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 19:50:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 24 19:50:30 compute-0 ceph-mon[75677]: pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 359 B/s wr, 12 op/s; 19 B/s, 2 objects/s recovering
Nov 24 19:50:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 19:50:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 19:50:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 19:50:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 19:50:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 24 19:50:30 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 24 19:50:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 83 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=83 pruub=10.525682449s) [2] r=-1 lpr=83 pi=[57,83)/1 crt=44'39 mlcod 44'39 active pruub 138.062820435s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:30 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 83 pg[6.f( v 44'39 (0'0,44'39] local-lis/les=57/58 n=1 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=83 pruub=10.525505066s) [2] r=-1 lpr=83 pi=[57,83)/1 crt=44'39 mlcod 0'0 unknown NOTIFY pruub 138.062820435s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:30 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 83 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=83) [2] r=0 lpr=83 pi=[57,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 24 19:50:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 19:50:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 19:50:31 compute-0 ceph-mon[75677]: osdmap e83: 3 total, 3 up, 3 in
Nov 24 19:50:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 24 19:50:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 24 19:50:31 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 84 pg[6.f( v 44'39 lc 40'1 (0'0,44'39] local-lis/les=83/84 n=1 ec=45/19 lis/c=57/57 les/c/f=58/58/0 sis=83) [2] r=0 lpr=83 pi=[57,83)/1 crt=44'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 24 19:50:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 24 19:50:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 19:50:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 24 19:50:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 19:50:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 24 19:50:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 24 19:50:32 compute-0 ceph-mon[75677]: osdmap e84: 3 total, 3 up, 3 in
Nov 24 19:50:32 compute-0 ceph-mon[75677]: pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 24 19:50:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 19:50:33 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 24 19:50:33 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 24 19:50:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 19:50:33 compute-0 ceph-mon[75677]: osdmap e85: 3 total, 3 up, 3 in
Nov 24 19:50:33 compute-0 ceph-mon[75677]: 5.1f scrub starts
Nov 24 19:50:33 compute-0 ceph-mon[75677]: 5.1f scrub ok
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:50:34 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:50:34 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 24 19:50:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 24 19:50:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 24 19:50:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 19:50:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 24 19:50:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 24 19:50:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 24 19:50:34 compute-0 ceph-mon[75677]: 10.3 scrub starts
Nov 24 19:50:34 compute-0 ceph-mon[75677]: 10.3 scrub ok
Nov 24 19:50:34 compute-0 ceph-mon[75677]: pgmap v171: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 24 19:50:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 19:50:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 19:50:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 24 19:50:34 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 24 19:50:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 24 19:50:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 24 19:50:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 0 objects/s recovering
Nov 24 19:50:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 24 19:50:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 0 objects/s recovering
Nov 24 19:50:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 24 19:50:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 24 19:50:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 0 objects/s recovering
Nov 24 19:50:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 24 19:50:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 24 19:50:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 24 19:50:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 24 19:50:42 compute-0 ceph-mon[75677]: 2.18 scrub starts
Nov 24 19:50:42 compute-0 ceph-mon[75677]: 2.18 scrub ok
Nov 24 19:50:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 19:50:42 compute-0 ceph-mon[75677]: osdmap e86: 3 total, 3 up, 3 in
Nov 24 19:50:42 compute-0 ceph-mon[75677]: 10.5 scrub starts
Nov 24 19:50:42 compute-0 ceph-mon[75677]: 10.5 scrub ok
Nov 24 19:50:42 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Nov 24 19:50:42 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 6.279372215s
Nov 24 19:50:42 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 6.279372692s
Nov 24 19:50:42 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.279663086s, txc = 0x55ba3cf1c900
Nov 24 19:50:42 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Nov 24 19:50:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 24 19:50:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 0 objects/s recovering
Nov 24 19:50:42 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.212952137s, txc = 0x55ba3d1b8000
Nov 24 19:50:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 24 19:50:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 24 19:50:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 24 19:50:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 24 19:50:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 24 19:50:43 compute-0 ceph-mon[75677]: pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 0 objects/s recovering
Nov 24 19:50:43 compute-0 ceph-mon[75677]: 7.17 deep-scrub starts
Nov 24 19:50:43 compute-0 ceph-mon[75677]: pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 0 objects/s recovering
Nov 24 19:50:43 compute-0 ceph-mon[75677]: 10.a scrub starts
Nov 24 19:50:43 compute-0 ceph-mon[75677]: 10.a scrub ok
Nov 24 19:50:43 compute-0 ceph-mon[75677]: pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 0 objects/s recovering
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:43 compute-0 ceph-mon[75677]: 10.c deep-scrub starts
Nov 24 19:50:43 compute-0 ceph-mon[75677]: 10.c deep-scrub ok
Nov 24 19:50:43 compute-0 ceph-mon[75677]: 7.17 deep-scrub ok
Nov 24 19:50:43 compute-0 ceph-mon[75677]: pgmap v176: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 0 objects/s recovering
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:43 compute-0 ceph-mon[75677]: osdmap e87: 3 total, 3 up, 3 in
Nov 24 19:50:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 24 19:50:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 24 19:50:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 24 19:50:44 compute-0 ceph-mon[75677]: 10.18 scrub starts
Nov 24 19:50:44 compute-0 ceph-mon[75677]: 10.18 scrub ok
Nov 24 19:50:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 19:50:44 compute-0 ceph-mon[75677]: osdmap e88: 3 total, 3 up, 3 in
Nov 24 19:50:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 24 19:50:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 24 19:50:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 24 19:50:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 19:50:44 compute-0 sudo[105584]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 24 19:50:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 24 19:50:44 compute-0 sshd-session[105218]: Connection closed by 192.168.122.30 port 42818
Nov 24 19:50:44 compute-0 sshd-session[105215]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:50:44 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 24 19:50:44 compute-0 systemd[1]: session-34.scope: Consumed 8.729s CPU time.
Nov 24 19:50:44 compute-0 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Nov 24 19:50:44 compute-0 systemd-logind[795]: Removed session 34.
Nov 24 19:50:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 24 19:50:45 compute-0 ceph-mon[75677]: 7.19 scrub starts
Nov 24 19:50:45 compute-0 ceph-mon[75677]: 7.19 scrub ok
Nov 24 19:50:45 compute-0 ceph-mon[75677]: pgmap v179: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 19:50:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 19:50:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 24 19:50:45 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 24 19:50:45 compute-0 sudo[105643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:45 compute-0 sudo[105643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:45 compute-0 sudo[105643]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:45 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 89 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=89 pruub=13.852714539s) [2] r=-1 lpr=89 pi=[54,89)/1 crt=38'385 mlcod 0'0 active pruub 155.991180420s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:45 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 89 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=89 pruub=13.852646828s) [2] r=-1 lpr=89 pi=[54,89)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 155.991180420s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:45 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=89) [2] r=0 lpr=89 pi=[54,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:45 compute-0 sudo[105668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:50:45 compute-0 sudo[105668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:45 compute-0 sudo[105668]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:45 compute-0 sudo[105693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:45 compute-0 sudo[105693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:45 compute-0 sudo[105693]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:45 compute-0 sudo[105718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 19:50:45 compute-0 sudo[105718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 24 19:50:46 compute-0 ceph-mon[75677]: 2.16 scrub starts
Nov 24 19:50:46 compute-0 ceph-mon[75677]: 2.16 scrub ok
Nov 24 19:50:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 19:50:46 compute-0 ceph-mon[75677]: osdmap e89: 3 total, 3 up, 3 in
Nov 24 19:50:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 24 19:50:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 24 19:50:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 90 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=90) [2]/[0] r=0 lpr=90 pi=[54,90)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:46 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 90 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=90) [2]/[0] r=0 lpr=90 pi=[54,90)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:46 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[54,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:46 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[54,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:46 compute-0 podman[105812]: 2025-11-24 19:50:46.21705338 +0000 UTC m=+0.084018678 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:50:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 unknown, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:46 compute-0 podman[105812]: 2025-11-24 19:50:46.364072514 +0000 UTC m=+0.231037852 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:50:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Nov 24 19:50:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Nov 24 19:50:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 24 19:50:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 24 19:50:47 compute-0 ceph-mon[75677]: osdmap e90: 3 total, 3 up, 3 in
Nov 24 19:50:47 compute-0 ceph-mon[75677]: pgmap v182: 305 pgs: 1 unknown, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 24 19:50:47 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 91 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=90/91 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[54,90)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:47 compute-0 sudo[105718]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:50:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:50:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:47 compute-0 sudo[105976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:47 compute-0 sudo[105976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:47 compute-0 sudo[105976]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:47 compute-0 sudo[106001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:50:47 compute-0 sudo[106001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:47 compute-0 sudo[106001]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:47 compute-0 sudo[106026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:47 compute-0 sudo[106026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:47 compute-0 sudo[106026]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:47 compute-0 sudo[106051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:50:47 compute-0 sudo[106051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 24 19:50:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 24 19:50:48 compute-0 sudo[106051]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:50:48 compute-0 ceph-mon[75677]: 7.1d deep-scrub starts
Nov 24 19:50:48 compute-0 ceph-mon[75677]: 7.1d deep-scrub ok
Nov 24 19:50:48 compute-0 ceph-mon[75677]: osdmap e91: 3 total, 3 up, 3 in
Nov 24 19:50:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 24 19:50:48 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 92 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=90/54 les/c/f=91/55/0 sis=92) [2] r=0 lpr=92 pi=[54,92)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:48 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 92 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=90/91 n=5 ec=47/32 lis/c=90/54 les/c/f=91/55/0 sis=92 pruub=15.031126976s) [2] async=[2] r=-1 lpr=92 pi=[54,92)/1 crt=38'385 mlcod 38'385 active pruub 159.843185425s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:48 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 92 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=90/91 n=5 ec=47/32 lis/c=90/54 les/c/f=91/55/0 sis=92 pruub=15.031017303s) [2] r=-1 lpr=92 pi=[54,92)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 159.843185425s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:48 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 92 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=90/54 les/c/f=91/55/0 sis=92) [2] r=0 lpr=92 pi=[54,92)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c60b1d17-46e7-43fd-956e-ed4148381179 does not exist
Nov 24 19:50:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6836b8d3-e4e5-411d-b9a0-e060a24f4bfe does not exist
Nov 24 19:50:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6c1e7537-4094-4bf8-8100-41a464d61272 does not exist
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:50:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:50:48 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:50:48 compute-0 sudo[106108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:48 compute-0 sudo[106108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:48 compute-0 sudo[106108]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:48 compute-0 sudo[106133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:50:48 compute-0 sudo[106133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:48 compute-0 sudo[106133]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:48 compute-0 sudo[106158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:48 compute-0 sudo[106158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:48 compute-0 sudo[106158]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:48 compute-0 sudo[106183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:50:48 compute-0 sudo[106183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
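
  [note] This sudo entry is cephadm serializing an OSD deployment: it stages its own binary under /var/lib/ceph/<fsid>/cephadm.<digest>, then runs ceph-volume inside the ceph container against three pre-created LVs, with CEPH_VOLUME_OSDSPEC_AFFINITY tying the result to the "default_drive_group" service spec. Stripped of the wrapper, the invocation amounts to the sketch below (image digest and fsid copied from this log; the "--config-json -" argument is omitted here because it feeds secrets over stdin):

    cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
        lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd

  The declarative equivalent is the OSD service spec named default_drive_group that the orchestrator is reconciling here; operators normally edit that spec via "ceph orch apply" rather than calling ceph-volume directly.
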
Nov 24 19:50:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 24 19:50:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.820325178 +0000 UTC m=+0.048058700 container create f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swartz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:50:48 compute-0 systemd[1]: Started libpod-conmon-f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b.scope.
Nov 24 19:50:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.896904186 +0000 UTC m=+0.124637738 container init f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.805870806 +0000 UTC m=+0.033604348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.902634495 +0000 UTC m=+0.130368017 container start f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.905563667 +0000 UTC m=+0.133297229 container attach f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 19:50:48 compute-0 crazy_swartz[106266]: 167 167
Nov 24 19:50:48 compute-0 systemd[1]: libpod-f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b.scope: Deactivated successfully.
Nov 24 19:50:48 compute-0 conmon[106266]: conmon f132c1258fcfb3da9337 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b.scope/container/memory.events
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.908932814 +0000 UTC m=+0.136666346 container died f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b57a4e902603de6b5e2526f71dfceab9bc0cdf0a1a0941ad4588e595c07b37-merged.mount: Deactivated successfully.
Nov 24 19:50:48 compute-0 podman[106250]: 2025-11-24 19:50:48.952518837 +0000 UTC m=+0.180252389 container remove f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:50:48 compute-0 systemd[1]: libpod-conmon-f132c1258fcfb3da9337395ea0dabd945c44d128d17d7001907650f93ed6dd4b.scope: Deactivated successfully.
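
  [note] The create/init/start/attach/died/remove sequence above, all inside roughly 150 ms, is the normal shape of a cephadm helper container: podman runs one short-lived command in the ceph image and tears everything down. The conmon complaint about memory.events appears to be a benign race, with the cgroup already gone by the time conmon tries to read it. The same churn can be observed live with, for example:

    # Stream container lifecycle events while cephadm works
    podman events --since 5m --filter event=died
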
Nov 24 19:50:49 compute-0 ceph-mon[75677]: 7.1e scrub starts
Nov 24 19:50:49 compute-0 ceph-mon[75677]: 7.1e scrub ok
Nov 24 19:50:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:50:49 compute-0 ceph-mon[75677]: osdmap e92: 3 total, 3 up, 3 in
Nov 24 19:50:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:50:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:50:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:50:49 compute-0 ceph-mon[75677]: pgmap v185: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 24 19:50:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 24 19:50:49 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 24 19:50:49 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 93 pg[9.13( v 38'385 (0'0,38'385] local-lis/les=92/93 n=5 ec=47/32 lis/c=90/54 les/c/f=91/55/0 sis=92) [2] r=0 lpr=92 pi=[54,92)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:49 compute-0 podman[106290]: 2025-11-24 19:50:49.176692249 +0000 UTC m=+0.056899406 container create 06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 19:50:49 compute-0 systemd[1]: Started libpod-conmon-06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1.scope.
Nov 24 19:50:49 compute-0 podman[106290]: 2025-11-24 19:50:49.145852928 +0000 UTC m=+0.026060125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:50:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08e42b3cb80cff15225399d22d11188dbd81abace96732c3d2acb3bfd822139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08e42b3cb80cff15225399d22d11188dbd81abace96732c3d2acb3bfd822139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08e42b3cb80cff15225399d22d11188dbd81abace96732c3d2acb3bfd822139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08e42b3cb80cff15225399d22d11188dbd81abace96732c3d2acb3bfd822139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08e42b3cb80cff15225399d22d11188dbd81abace96732c3d2acb3bfd822139/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:49 compute-0 podman[106290]: 2025-11-24 19:50:49.291449792 +0000 UTC m=+0.171656989 container init 06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:50:49 compute-0 podman[106290]: 2025-11-24 19:50:49.306832926 +0000 UTC m=+0.187040073 container start 06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:50:49 compute-0 podman[106290]: 2025-11-24 19:50:49.310864506 +0000 UTC m=+0.191071623 container attach 06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elgamal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:50:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 24 19:50:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 24 19:50:50 compute-0 ceph-mon[75677]: 10.1e scrub starts
Nov 24 19:50:50 compute-0 ceph-mon[75677]: 10.1e scrub ok
Nov 24 19:50:50 compute-0 ceph-mon[75677]: osdmap e93: 3 total, 3 up, 3 in
Nov 24 19:50:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:50 compute-0 hungry_elgamal[106307]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:50:50 compute-0 hungry_elgamal[106307]: --> relative data size: 1.0
Nov 24 19:50:50 compute-0 hungry_elgamal[106307]: --> All data devices are unavailable
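
  [note] "All data devices are unavailable" from lvm batch looks alarming, but in this sequence it most likely means the three LVs already carry BlueStore OSDs, making the batch run an idempotent no-op; the lvm list output that follows (osd ids 0-2 on these exact LVs) supports that reading. To check directly which devices ceph-volume considers usable, a sketch using the fsid from this log:

    # cephadm wraps the same call in a throwaway container
    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- inventory
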
Nov 24 19:50:50 compute-0 systemd[1]: libpod-06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1.scope: Deactivated successfully.
Nov 24 19:50:50 compute-0 systemd[1]: libpod-06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1.scope: Consumed 1.069s CPU time.
Nov 24 19:50:50 compute-0 podman[106290]: 2025-11-24 19:50:50.423425497 +0000 UTC m=+1.303632614 container died 06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08e42b3cb80cff15225399d22d11188dbd81abace96732c3d2acb3bfd822139-merged.mount: Deactivated successfully.
Nov 24 19:50:50 compute-0 podman[106290]: 2025-11-24 19:50:50.469799446 +0000 UTC m=+1.350006563 container remove 06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:50:50 compute-0 systemd[1]: libpod-conmon-06c79f4155caba15e0ce64d74ce946489adcff8493665e3a87fb8a305e83bee1.scope: Deactivated successfully.
Nov 24 19:50:50 compute-0 sudo[106183]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:50 compute-0 sudo[106350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:50 compute-0 sudo[106350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:50 compute-0 sudo[106350]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:50 compute-0 sudo[106375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:50:50 compute-0 sudo[106375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:50 compute-0 sudo[106375]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:50 compute-0 sudo[106400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:50 compute-0 sudo[106400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:50 compute-0 sudo[106400]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:50 compute-0 sudo[106425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:50:50 compute-0 sudo[106425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:51 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 24 19:50:51 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.071147461 +0000 UTC m=+0.040917302 container create aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:50:51 compute-0 systemd[1]: Started libpod-conmon-aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60.scope.
Nov 24 19:50:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.144845569 +0000 UTC m=+0.114615430 container init aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.052580096 +0000 UTC m=+0.022350027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.152396151 +0000 UTC m=+0.122166002 container start aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williams, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:50:51 compute-0 ceph-mon[75677]: 8.1 scrub starts
Nov 24 19:50:51 compute-0 ceph-mon[75677]: 8.1 scrub ok
Nov 24 19:50:51 compute-0 ceph-mon[75677]: pgmap v187: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.157471128 +0000 UTC m=+0.127240969 container attach aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 19:50:51 compute-0 pensive_williams[106507]: 167 167
Nov 24 19:50:51 compute-0 systemd[1]: libpod-aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60.scope: Deactivated successfully.
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.160113709 +0000 UTC m=+0.129883620 container died aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fca4a56c46bb4efe33ae82c010d503715acbf4a0af0ac10d754ebf7941723de5-merged.mount: Deactivated successfully.
Nov 24 19:50:51 compute-0 podman[106491]: 2025-11-24 19:50:51.196273955 +0000 UTC m=+0.166043796 container remove aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williams, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 19:50:51 compute-0 systemd[1]: libpod-conmon-aba3719a6da4561bd05635319d2fc32cf88fae7a93384f60a22ad3a1d7abdf60.scope: Deactivated successfully.
Nov 24 19:50:51 compute-0 podman[106530]: 2025-11-24 19:50:51.390335321 +0000 UTC m=+0.037901066 container create 7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 19:50:51 compute-0 systemd[1]: Started libpod-conmon-7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251.scope.
Nov 24 19:50:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30787568402e27e12ab1fbbf7b78eac1f5e5c41a155ad2abc7529e230c139b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30787568402e27e12ab1fbbf7b78eac1f5e5c41a155ad2abc7529e230c139b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30787568402e27e12ab1fbbf7b78eac1f5e5c41a155ad2abc7529e230c139b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30787568402e27e12ab1fbbf7b78eac1f5e5c41a155ad2abc7529e230c139b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:51 compute-0 podman[106530]: 2025-11-24 19:50:51.453994081 +0000 UTC m=+0.101559856 container init 7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:50:51 compute-0 podman[106530]: 2025-11-24 19:50:51.460239798 +0000 UTC m=+0.107805533 container start 7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:50:51 compute-0 podman[106530]: 2025-11-24 19:50:51.462743695 +0000 UTC m=+0.110309480 container attach 7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:50:51 compute-0 podman[106530]: 2025-11-24 19:50:51.374234412 +0000 UTC m=+0.021800187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:50:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
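
  [note] _set_new_cache_sizes is the monitor's periodic memory autotuning: it splits its memory target between the incremental-osdmap cache (inc_alloc), the full-osdmap cache (full_alloc), and the RocksDB cache (kv_alloc). The target it works from is readable, and adjustable, through the config system; a sketch (the 4 GiB value is an example only):

    ceph config get mon mon_memory_target
    ceph config set mon mon_memory_target 4294967296
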
Nov 24 19:50:52 compute-0 strange_ganguly[106547]: {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:     "0": [
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:         {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "devices": [
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "/dev/loop3"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             ],
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_name": "ceph_lv0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_size": "21470642176",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "name": "ceph_lv0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "tags": {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cluster_name": "ceph",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.crush_device_class": "",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.encrypted": "0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osd_id": "0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.type": "block",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.vdo": "0"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             },
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "type": "block",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "vg_name": "ceph_vg0"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:         }
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:     ],
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:     "1": [
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:         {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "devices": [
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "/dev/loop4"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             ],
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_name": "ceph_lv1",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_size": "21470642176",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "name": "ceph_lv1",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "tags": {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cluster_name": "ceph",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.crush_device_class": "",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.encrypted": "0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osd_id": "1",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.type": "block",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.vdo": "0"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             },
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "type": "block",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "vg_name": "ceph_vg1"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:         }
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:     ],
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:     "2": [
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:         {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "devices": [
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "/dev/loop5"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             ],
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_name": "ceph_lv2",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_size": "21470642176",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "name": "ceph_lv2",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "tags": {
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.cluster_name": "ceph",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.crush_device_class": "",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.encrypted": "0",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osd_id": "2",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.type": "block",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:                 "ceph.vdo": "0"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             },
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "type": "block",
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:             "vg_name": "ceph_vg2"
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:         }
Nov 24 19:50:52 compute-0 strange_ganguly[106547]:     ]
Nov 24 19:50:52 compute-0 strange_ganguly[106547]: }
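
  [note] The JSON block above is "ceph-volume lvm list --format json" relayed through the container's stdout: a map of osd id to the LV backing it, with the authoritative metadata (cluster fsid, osd fsid, encryption flag, spec affinity) stored as LVM tags. Once captured to a file, it is easy to summarize; a sketch assuming the output was saved as lvm.json:

    jq -r 'to_entries[] | "osd.\(.key)  fsid=\(.value[0].tags["ceph.osd_fsid"])  \(.value[0].lv_path)"' lvm.json

  For the data above this would print one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 with its osd fsid.
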
Nov 24 19:50:52 compute-0 ceph-mon[75677]: 10.1b scrub starts
Nov 24 19:50:52 compute-0 ceph-mon[75677]: 10.1b scrub ok
Nov 24 19:50:52 compute-0 systemd[1]: libpod-7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251.scope: Deactivated successfully.
Nov 24 19:50:52 compute-0 podman[106530]: 2025-11-24 19:50:52.220658424 +0000 UTC m=+0.868224199 container died 7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30787568402e27e12ab1fbbf7b78eac1f5e5c41a155ad2abc7529e230c139b2-merged.mount: Deactivated successfully.
Nov 24 19:50:52 compute-0 podman[106530]: 2025-11-24 19:50:52.295065417 +0000 UTC m=+0.942631162 container remove 7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:50:52 compute-0 systemd[1]: libpod-conmon-7e5810718b636fd625d713dd2969c93ffbd6158e86b09657a6cce40490a52251.scope: Deactivated successfully.
Nov 24 19:50:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 24 19:50:52 compute-0 sudo[106425]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
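
  [note] pgp_num_actual is not an operator-facing setting: when a pool's pg_num changes, the mgr walks pgp_num toward it in small steps (here to 21 for default.rgw.log) so that placement shifts gradually, and these audit entries are those steps being applied. The operator-visible values can be checked with:

    ceph osd pool get default.rgw.log pg_num
    ceph osd pool get default.rgw.log pgp_num
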
Nov 24 19:50:52 compute-0 sudo[106569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:52 compute-0 sudo[106569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:52 compute-0 sudo[106569]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:52 compute-0 sudo[106594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:50:52 compute-0 sudo[106594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:52 compute-0 sudo[106594]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:52 compute-0 sudo[106619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:52 compute-0 sudo[106619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:52 compute-0 sudo[106619]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:52 compute-0 sudo[106644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:50:52 compute-0 sudo[106644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
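
  [note] Right after the LVM inventory, cephadm also takes a "raw list", which reports BlueStore OSDs sitting on bare devices or partitions that LVM-based listing would miss; together the two give cephadm a complete picture of what is already deployed on the host. The standalone equivalent of the command just launched:

    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
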
Nov 24 19:50:52 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 24 19:50:52 compute-0 podman[106709]: 2025-11-24 19:50:52.996008989 +0000 UTC m=+0.056875345 container create 5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_villani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:50:53 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 24 19:50:53 compute-0 systemd[1]: Started libpod-conmon-5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba.scope.
Nov 24 19:50:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:53 compute-0 podman[106709]: 2025-11-24 19:50:52.976100208 +0000 UTC m=+0.036966594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:50:53 compute-0 podman[106709]: 2025-11-24 19:50:53.086080275 +0000 UTC m=+0.146946681 container init 5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:50:53 compute-0 podman[106709]: 2025-11-24 19:50:53.095437441 +0000 UTC m=+0.156303817 container start 5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 19:50:53 compute-0 podman[106709]: 2025-11-24 19:50:53.098915421 +0000 UTC m=+0.159781817 container attach 5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:50:53 compute-0 vigilant_villani[106725]: 167 167
Nov 24 19:50:53 compute-0 systemd[1]: libpod-5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba.scope: Deactivated successfully.
Nov 24 19:50:53 compute-0 podman[106709]: 2025-11-24 19:50:53.101484391 +0000 UTC m=+0.162350767 container died 5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_villani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 19:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aec722e931a6e950969dbd41383ed95e2c93aaab2c8c0835bbcfbda231d704b-merged.mount: Deactivated successfully.
Nov 24 19:50:53 compute-0 podman[106709]: 2025-11-24 19:50:53.152045086 +0000 UTC m=+0.212911452 container remove 5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:50:53 compute-0 systemd[1]: libpod-conmon-5e67c5b6f57b27605455808b37a6c7e67d116725ae8a7880d7fef73d3034e0ba.scope: Deactivated successfully.
Nov 24 19:50:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 24 19:50:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 19:50:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 24 19:50:53 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 24 19:50:53 compute-0 ceph-mon[75677]: pgmap v188: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 19:50:53 compute-0 podman[106749]: 2025-11-24 19:50:53.37312239 +0000 UTC m=+0.057734076 container create 0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:50:53 compute-0 systemd[1]: Started libpod-conmon-0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908.scope.
Nov 24 19:50:53 compute-0 podman[106749]: 2025-11-24 19:50:53.346697592 +0000 UTC m=+0.031309328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:50:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801341a0bc61c2f16bb88b5f1becaae7ba4948d9dd466d686c0a64e814edeb01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801341a0bc61c2f16bb88b5f1becaae7ba4948d9dd466d686c0a64e814edeb01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801341a0bc61c2f16bb88b5f1becaae7ba4948d9dd466d686c0a64e814edeb01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801341a0bc61c2f16bb88b5f1becaae7ba4948d9dd466d686c0a64e814edeb01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:50:53 compute-0 podman[106749]: 2025-11-24 19:50:53.473453143 +0000 UTC m=+0.158064799 container init 0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:50:53 compute-0 podman[106749]: 2025-11-24 19:50:53.483663887 +0000 UTC m=+0.168275533 container start 0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:50:53 compute-0 podman[106749]: 2025-11-24 19:50:53.488153853 +0000 UTC m=+0.172765499 container attach 0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:50:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 24 19:50:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 24 19:50:53 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Nov 24 19:50:54 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Nov 24 19:50:54 compute-0 ceph-mon[75677]: 10.1c scrub starts
Nov 24 19:50:54 compute-0 ceph-mon[75677]: 10.1c scrub ok
Nov 24 19:50:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 19:50:54 compute-0 ceph-mon[75677]: osdmap e94: 3 total, 3 up, 3 in
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 5 op/s; 35 B/s, 1 objects/s recovering
Nov 24 19:50:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 24 19:50:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:50:54 compute-0 pedantic_williams[106766]: {
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "osd_id": 2,
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "type": "bluestore"
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:     },
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "osd_id": 1,
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "type": "bluestore"
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:     },
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "osd_id": 0,
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:         "type": "bluestore"
Nov 24 19:50:54 compute-0 pedantic_williams[106766]:     }
Nov 24 19:50:54 compute-0 pedantic_williams[106766]: }
Nov 24 19:50:54 compute-0 systemd[1]: libpod-0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908.scope: Deactivated successfully.
Nov 24 19:50:54 compute-0 podman[106749]: 2025-11-24 19:50:54.44679275 +0000 UTC m=+1.131404416 container died 0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 19:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-801341a0bc61c2f16bb88b5f1becaae7ba4948d9dd466d686c0a64e814edeb01-merged.mount: Deactivated successfully.
Nov 24 19:50:54 compute-0 podman[106749]: 2025-11-24 19:50:54.509439114 +0000 UTC m=+1.194050770 container remove 0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williams, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:50:54 compute-0 systemd[1]: libpod-conmon-0ef9f74eee552200a06bba043969bc16cd82100006b2877601c3d771e885d908.scope: Deactivated successfully.
Nov 24 19:50:54 compute-0 sudo[106644]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:50:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:50:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a436cf68-ad06-434e-9ddc-47ab61d1e01a does not exist
Nov 24 19:50:54 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3c5a37b0-3b1b-4f81-98c9-feaffe72bcc4 does not exist
Nov 24 19:50:54 compute-0 sudo[106814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:50:54 compute-0 sudo[106814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:54 compute-0 sudo[106814]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:54 compute-0 sudo[106839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:50:54 compute-0 sudo[106839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:50:54 compute-0 sudo[106839]: pam_unix(sudo:session): session closed for user root
Nov 24 19:50:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 24 19:50:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 19:50:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 24 19:50:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 24 19:50:55 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 95 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=95 pruub=12.098227501s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=38'385 mlcod 0'0 active pruub 163.994995117s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:55 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 95 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=95 pruub=12.097904205s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 163.994995117s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:55 compute-0 ceph-mon[75677]: 5.14 scrub starts
Nov 24 19:50:55 compute-0 ceph-mon[75677]: 5.14 scrub ok
Nov 24 19:50:55 compute-0 ceph-mon[75677]: 10.1d deep-scrub starts
Nov 24 19:50:55 compute-0 ceph-mon[75677]: 10.1d deep-scrub ok
Nov 24 19:50:55 compute-0 ceph-mon[75677]: pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 5 op/s; 35 B/s, 1 objects/s recovering
Nov 24 19:50:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 19:50:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:50:55 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=95) [1] r=0 lpr=95 pi=[54,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Nov 24 19:50:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Nov 24 19:50:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 24 19:50:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 19:50:56 compute-0 ceph-mon[75677]: osdmap e95: 3 total, 3 up, 3 in
Nov 24 19:50:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 24 19:50:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 24 19:50:56 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[54,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:56 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[54,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:56 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 96 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=0 lpr=96 pi=[54,96)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:56 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 96 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=0 lpr=96 pi=[54,96)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 24 19:50:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 19:50:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:50:57 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 24 19:50:57 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 24 19:50:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 24 19:50:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 19:50:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 24 19:50:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 24 19:50:57 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 97 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=15.537429810s) [0] r=-1 lpr=97 pi=[66,97)/1 crt=38'385 mlcod 0'0 active pruub 158.361175537s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:57 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 97 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=15.537227631s) [0] r=-1 lpr=97 pi=[66,97)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 158.361175537s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:57 compute-0 ceph-mon[75677]: 2.19 deep-scrub starts
Nov 24 19:50:57 compute-0 ceph-mon[75677]: 2.19 deep-scrub ok
Nov 24 19:50:57 compute-0 ceph-mon[75677]: osdmap e96: 3 total, 3 up, 3 in
Nov 24 19:50:57 compute-0 ceph-mon[75677]: pgmap v193: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 19:50:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 19:50:57 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=97) [0] r=0 lpr=97 pi=[66,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:58 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 97 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=96/97 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[54,96)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 24 19:50:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 24 19:50:58 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 24 19:50:58 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 98 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:58 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 98 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:58 compute-0 ceph-mon[75677]: 10.1f scrub starts
Nov 24 19:50:58 compute-0 ceph-mon[75677]: 10.1f scrub ok
Nov 24 19:50:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 19:50:58 compute-0 ceph-mon[75677]: osdmap e97: 3 total, 3 up, 3 in
Nov 24 19:50:58 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 98 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=96/97 n=5 ec=47/32 lis/c=96/54 les/c/f=97/55/0 sis=98 pruub=15.701755524s) [1] async=[1] r=-1 lpr=98 pi=[54,98)/1 crt=38'385 mlcod 38'385 active pruub 170.696411133s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:58 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 98 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=96/97 n=5 ec=47/32 lis/c=96/54 les/c/f=97/55/0 sis=98 pruub=15.701659203s) [1] r=-1 lpr=98 pi=[54,98)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 170.696411133s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:58 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:58 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:50:58 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 98 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=0 lpr=98 pi=[66,98)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:50:58 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 98 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] r=0 lpr=98 pi=[66,98)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:50:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 24 19:50:58 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 19:50:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 24 19:50:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 24 19:50:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.3 deep-scrub starts
Nov 24 19:50:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.3 deep-scrub ok
Nov 24 19:50:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 24 19:50:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 19:50:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 24 19:50:59 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 24 19:50:59 compute-0 ceph-mon[75677]: osdmap e98: 3 total, 3 up, 3 in
Nov 24 19:50:59 compute-0 ceph-mon[75677]: pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:50:59 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 19:50:59 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 99 pg[9.15( v 38'385 (0'0,38'385] local-lis/les=98/99 n=5 ec=47/32 lis/c=96/54 les/c/f=97/55/0 sis=98) [1] r=0 lpr=98 pi=[54,98)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:59 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 99 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=98/99 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=98) [0]/[2] async=[0] r=0 lpr=98 pi=[66,98)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:50:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 24 19:50:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 24 19:51:00 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 24 19:51:00 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 24 19:51:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 24 19:51:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 19:51:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 24 19:51:00 compute-0 ceph-mon[75677]: 5.15 scrub starts
Nov 24 19:51:00 compute-0 ceph-mon[75677]: 5.15 scrub ok
Nov 24 19:51:00 compute-0 ceph-mon[75677]: 8.3 deep-scrub starts
Nov 24 19:51:00 compute-0 ceph-mon[75677]: 8.3 deep-scrub ok
Nov 24 19:51:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 19:51:00 compute-0 ceph-mon[75677]: osdmap e99: 3 total, 3 up, 3 in
Nov 24 19:51:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 19:51:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 19:51:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 24 19:51:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 24 19:51:00 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 100 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=98/66 les/c/f=99/67/0 sis=100) [0] r=0 lpr=100 pi=[66,100)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:00 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 100 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=98/66 les/c/f=99/67/0 sis=100) [0] r=0 lpr=100 pi=[66,100)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:51:00 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 100 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=98/99 n=5 ec=47/32 lis/c=98/66 les/c/f=99/67/0 sis=100 pruub=14.984184265s) [0] async=[0] r=-1 lpr=100 pi=[66,100)/1 crt=38'385 mlcod 38'385 active pruub 160.874328613s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:00 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 100 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=98/99 n=5 ec=47/32 lis/c=98/66 les/c/f=99/67/0 sis=100 pruub=14.983349800s) [0] r=-1 lpr=100 pi=[66,100)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 160.874328613s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:51:00 compute-0 sshd-session[106864]: Accepted publickey for zuul from 192.168.122.30 port 37770 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:51:00 compute-0 systemd-logind[795]: New session 35 of user zuul.
Nov 24 19:51:00 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 24 19:51:00 compute-0 sshd-session[106864]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:51:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 24 19:51:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 24 19:51:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 24 19:51:01 compute-0 ceph-mon[75677]: 2.11 scrub starts
Nov 24 19:51:01 compute-0 ceph-mon[75677]: 2.11 scrub ok
Nov 24 19:51:01 compute-0 ceph-mon[75677]: 4.18 scrub starts
Nov 24 19:51:01 compute-0 ceph-mon[75677]: 4.18 scrub ok
Nov 24 19:51:01 compute-0 ceph-mon[75677]: pgmap v198: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 19:51:01 compute-0 ceph-mon[75677]: osdmap e100: 3 total, 3 up, 3 in
Nov 24 19:51:01 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 101 pg[9.16( v 38'385 (0'0,38'385] local-lis/les=100/101 n=5 ec=47/32 lis/c=98/66 les/c/f=99/67/0 sis=100) [0] r=0 lpr=100 pi=[66,100)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:51:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:51:01 compute-0 python3.9[107017]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 19:51:02 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 24 19:51:02 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 24 19:51:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 24 19:51:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 24 19:51:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 24 19:51:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 24 19:51:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 19:51:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 24 19:51:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 19:51:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 24 19:51:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 24 19:51:02 compute-0 ceph-mon[75677]: osdmap e101: 3 total, 3 up, 3 in
Nov 24 19:51:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 19:51:02 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 102 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=102 pruub=12.930553436s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=38'385 mlcod 0'0 active pruub 171.994873047s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:02 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 102 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=102 pruub=12.929779053s) [2] r=-1 lpr=102 pi=[54,102)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 171.994873047s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:51:02 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=102) [2] r=0 lpr=102 pi=[54,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:51:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Nov 24 19:51:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Nov 24 19:51:02 compute-0 python3.9[107191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:51:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 24 19:51:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 24 19:51:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 24 19:51:03 compute-0 ceph-mon[75677]: 4.1b deep-scrub starts
Nov 24 19:51:03 compute-0 ceph-mon[75677]: 4.1b deep-scrub ok
Nov 24 19:51:03 compute-0 ceph-mon[75677]: 8.5 scrub starts
Nov 24 19:51:03 compute-0 ceph-mon[75677]: 8.5 scrub ok
Nov 24 19:51:03 compute-0 ceph-mon[75677]: pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 24 19:51:03 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 19:51:03 compute-0 ceph-mon[75677]: osdmap e102: 3 total, 3 up, 3 in
Nov 24 19:51:03 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[54,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:03 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[54,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:51:03 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 103 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=103) [2]/[0] r=0 lpr=103 pi=[54,103)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:03 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 103 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=54/55 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=103) [2]/[0] r=0 lpr=103 pi=[54,103)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:51:04 compute-0 sudo[107345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxvohsehnplaagxzmrxdulbmoegdkezo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013863.4996867-45-202707834587292/AnsiballZ_command.py'
Nov 24 19:51:04 compute-0 sudo[107345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:51:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 24 19:51:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 24 19:51:04 compute-0 python3.9[107347]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:51:04 compute-0 sudo[107345]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 24 19:51:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 24 19:51:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 19:51:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 24 19:51:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 19:51:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 24 19:51:04 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 24 19:51:04 compute-0 ceph-mon[75677]: 2.13 deep-scrub starts
Nov 24 19:51:04 compute-0 ceph-mon[75677]: 2.13 deep-scrub ok
Nov 24 19:51:04 compute-0 ceph-mon[75677]: osdmap e103: 3 total, 3 up, 3 in
Nov 24 19:51:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 19:51:05 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 104 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=103/104 n=5 ec=47/32 lis/c=54/54 les/c/f=55/55/0 sis=103) [2]/[0] async=[2] r=0 lpr=103 pi=[54,103)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:51:05 compute-0 sudo[107498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgcxiujooishmcvqagwfgwvwvfytnucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013864.7050862-57-147031288035859/AnsiballZ_stat.py'
Nov 24 19:51:05 compute-0 sudo[107498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:51:05 compute-0 python3.9[107500]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:51:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 24 19:51:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 24 19:51:05 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 24 19:51:05 compute-0 sudo[107498]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:05 compute-0 ceph-mon[75677]: 8.7 scrub starts
Nov 24 19:51:05 compute-0 ceph-mon[75677]: 8.7 scrub ok
Nov 24 19:51:05 compute-0 ceph-mon[75677]: pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 24 19:51:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 19:51:05 compute-0 ceph-mon[75677]: osdmap e104: 3 total, 3 up, 3 in
Nov 24 19:51:05 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 105 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=103/104 n=5 ec=47/32 lis/c=103/54 les/c/f=104/55/0 sis=105 pruub=15.594258308s) [2] async=[2] r=-1 lpr=105 pi=[54,105)/1 crt=38'385 mlcod 38'385 active pruub 177.764862061s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:05 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 105 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=103/104 n=5 ec=47/32 lis/c=103/54 les/c/f=104/55/0 sis=105 pruub=15.594059944s) [2] r=-1 lpr=105 pi=[54,105)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 177.764862061s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:51:05 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 105 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=103/54 les/c/f=104/55/0 sis=105) [2] r=0 lpr=105 pi=[54,105)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:51:05 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 105 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=103/54 les/c/f=104/55/0 sis=105) [2] r=0 lpr=105 pi=[54,105)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:51:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 24 19:51:06 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 19:51:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 24 19:51:06 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 19:51:06 compute-0 sudo[107652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhtwiwtbtempwkvynqopwzydwsfcijlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013865.7991083-68-52471109260564/AnsiballZ_file.py'
Nov 24 19:51:06 compute-0 sudo[107652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:51:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 24 19:51:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 24 19:51:06 compute-0 python3.9[107654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:51:06 compute-0 ceph-mon[75677]: osdmap e105: 3 total, 3 up, 3 in
Nov 24 19:51:06 compute-0 ceph-mon[75677]: 8.8 deep-scrub starts
Nov 24 19:51:06 compute-0 ceph-mon[75677]: 8.8 deep-scrub ok
Nov 24 19:51:06 compute-0 sudo[107652]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 24 19:51:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 24 19:51:06 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 106 pg[9.19( v 38'385 (0'0,38'385] local-lis/les=105/106 n=5 ec=47/32 lis/c=103/54 les/c/f=104/55/0 sis=105) [2] r=0 lpr=105 pi=[54,105)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:51:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:51:07 compute-0 sudo[107804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoaxizstcxcvummxsqcwaqlyaaqdocex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013866.7974436-77-181820882052283/AnsiballZ_file.py'
Nov 24 19:51:07 compute-0 sudo[107804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:51:07 compute-0 python3.9[107806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:51:07 compute-0 sudo[107804]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:07 compute-0 ceph-mon[75677]: 4.1a scrub starts
Nov 24 19:51:07 compute-0 ceph-mon[75677]: 4.1a scrub ok
Nov 24 19:51:07 compute-0 ceph-mon[75677]: pgmap v207: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 24 19:51:07 compute-0 ceph-mon[75677]: osdmap e106: 3 total, 3 up, 3 in
Nov 24 19:51:08 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 24 19:51:08 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 24 19:51:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Nov 24 19:51:08 compute-0 python3.9[107956]: ansible-ansible.builtin.service_facts Invoked
Nov 24 19:51:08 compute-0 network[107973]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 19:51:08 compute-0 network[107974]: 'network-scripts' will be removed from distribution in near future.
Nov 24 19:51:08 compute-0 network[107975]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 19:51:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 19:51:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 24 19:51:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 24 19:51:10 compute-0 sshd-session[108009]: Invalid user admin from 27.79.44.141 port 37734
Nov 24 19:51:11 compute-0 sshd-session[108009]: Connection closed by invalid user admin 27.79.44.141 port 37734 [preauth]
Nov 24 19:51:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Nov 24 19:51:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Nov 24 19:51:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:51:11 compute-0 ceph-mon[75677]: 4.e scrub starts
Nov 24 19:51:11 compute-0 ceph-mon[75677]: 4.e scrub ok
Nov 24 19:51:11 compute-0 ceph-mon[75677]: pgmap v209: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 2 objects/s recovering
Nov 24 19:51:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 24 19:51:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 24 19:51:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 24 19:51:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 19:51:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 24 19:51:13 compute-0 python3.9[108237]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:51:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 24 19:51:14 compute-0 python3.9[108387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:51:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 24 19:51:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 24 19:51:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 24 19:51:15 compute-0 sshd-session[108392]: Invalid user test from 27.79.44.141 port 59484
Nov 24 19:51:15 compute-0 python3.9[108543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:51:15 compute-0 sshd-session[108392]: Connection closed by invalid user test 27.79.44.141 port 59484 [preauth]
Nov 24 19:51:15 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 24 19:51:15 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 24 19:51:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:16 compute-0 sudo[108699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyelhnzitrjszaruasuqmzfflvhbibqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013876.0754974-125-92351729770416/AnsiballZ_setup.py'
Nov 24 19:51:16 compute-0 sudo[108699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:51:16 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 24 19:51:16 compute-0 python3.9[108701]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:51:17 compute-0 sudo[108699]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:17 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:17 compute-0 sudo[108784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krobhqfcgjzhwzcibezewnkjaomgiddm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764013876.0754974-125-92351729770416/AnsiballZ_dnf.py'
Nov 24 19:51:17 compute-0 sudo[108784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:51:17 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 24 19:51:18 compute-0 python3.9[108786]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:51:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:21 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:51:24
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data']
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:51:24 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:51:24 compute-0 sshd-session[108791]: Invalid user guest from 27.79.44.141 port 37312
Nov 24 19:51:25 compute-0 sshd-session[108791]: Connection closed by invalid user guest 27.79.44.141 port 37312 [preauth]
Nov 24 19:51:25 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp MDS connection to Monitors appears to be laggy; 15.6455s since last acked beacon
Nov 24 19:51:25 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:25 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:29 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:30 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:33 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:33 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:51:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:35 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:37 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:40 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:41 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:41 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:43 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:45 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:45 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:49 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:50 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:51 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:52 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:53 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:54 compute-0 sudo[108794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:51:54 compute-0 sudo[108794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:51:54 compute-0 sudo[108794]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:54 compute-0 sudo[108819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:51:54 compute-0 sudo[108819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:51:54 compute-0 sudo[108819]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:55 compute-0 sudo[108844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:51:55 compute-0 sudo[108844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:51:55 compute-0 sudo[108844]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:55 compute-0 sudo[108869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:51:55 compute-0 sudo[108869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:51:55 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:51:55 compute-0 sudo[108869]: pam_unix(sudo:session): session closed for user root
Nov 24 19:51:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:51:56 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:51:57 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:51:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:00 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:52:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:00 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:01 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:02 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:03 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:04 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:05 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:06 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:09 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:10 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:52:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:11 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:13 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:14 compute-0 sshd-session[108925]: Connection closed by authenticating user root 80.94.95.116 port 60240 [preauth]
Nov 24 19:52:15 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:15 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:17 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:19 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-mds[102499]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 24 19:52:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:20 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 69.799 seconds
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 get_health_metrics reporting 4 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:21 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp  MDS is no longer laggy
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75673]: 2025-11-24T19:52:21.487+0000 7fba9712d640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 4 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 24 19:52:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:23 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:52:24
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:24 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:26 compute-0 sshd-session[108927]: Invalid user admin from 27.79.44.141 port 40150
Nov 24 19:52:26 compute-0 sshd-session[108927]: Connection closed by invalid user admin 27.79.44.141 port 40150 [preauth]
Nov 24 19:52:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 get_health_metrics reporting 6 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75673]: 2025-11-24T19:52:26.488+0000 7fba9712d640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 6 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:27 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:28 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:29 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:30 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 get_health_metrics reporting 6 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75673]: 2025-11-24T19:52:31.488+0000 7fba9712d640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 6 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:34 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:35 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 get_health_metrics reporting 6 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0[75673]: 2025-11-24T19:52:36.490+0000 7fba9712d640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 6 slow ops, oldest is monmgrreport(gid 14130, 0 checks, 0 progress events)
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:36 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:37 compute-0 ceph-osd[89640]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 19:52:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[90884]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:39 compute-0 systemd[77249]: Created slice User Background Tasks Slice.
Nov 24 19:52:39 compute-0 systemd[77249]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 19:52:39 compute-0 systemd[77249]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _do_read, latency = 75.772224426s, num_ios = 1
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for read, latency = 75.772300720s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f2c90ea3640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf[99823]: 2025-11-24T19:52:40.157+0000 7f73c8f8d640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 93906793316352 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 radosgw[99827]: rgw watcher librados: RGWWatcher::handle_error cookie 93906793316352 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf[99823]: 2025-11-24T19:52:40.157+0000 7f73c8f8d640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 93906793319808 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 radosgw[99827]: rgw watcher librados: RGWWatcher::handle_error cookie 93906793319808 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf[99823]: 2025-11-24T19:52:40.157+0000 7f73c8f8d640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 93906793320960 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 radosgw[99827]: rgw watcher librados: RGWWatcher::handle_error cookie 93906793320960 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf[99823]: 2025-11-24T19:52:40.157+0000 7f73c8f8d640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 93906793324416 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 radosgw[99827]: rgw watcher librados: RGWWatcher::handle_error cookie 93906793324416 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 88.336517334s, txc = 0x55ba3cf1c000
Nov 24 19:52:40 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 88.336456299s
Nov 24 19:52:40 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 88.336456299s
Nov 24 19:52:40 compute-0 ceph-osd[89640]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1a53924640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[89640]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1a53123640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[89640]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1a52922640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[89640]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[89640]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1a54125640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-rgw-rgw-compute-0-dgkdrf[99823]: 2025-11-24T19:52:40.261+0000 7f73c978e640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 93906793323264 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 radosgw[99827]: rgw watcher librados: RGWWatcher::handle_error cookie 93906793323264 err (107) Transport endpoint is not connected
Nov 24 19:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 87.950485229s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 87.950485229s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 87.950714111s, txc = 0x560fd344af00
Nov 24 19:52:40 compute-0 ceph-osd[88624]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f2c8fea1640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f2c906a2640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f2c8f6a0640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 24 19:52:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 24 19:52:40 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 84.377876282s
Nov 24 19:52:40 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 84.377876282s
Nov 24 19:52:40 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 84.378097534s, txc = 0x557d34e97200
Nov 24 19:52:40 compute-0 ceph-osd[90884]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7fcc2637f640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[90884]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7fcc2437b640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[90884]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7fcc24b7c640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[90884]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7fcc2537d640' had timed out after 15.000000954s
Nov 24 19:52:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 24 19:52:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 24 19:52:40 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 88.431373596s, txc = 0x55ba3c75ac00
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 85.405204773s, txc = 0x55ba3d0c1500
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 24 19:52:40 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 84.448860168s, txc = 0x557d353b8300
Nov 24 19:52:40 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 83.429649353s, txc = 0x557d353fa300
Nov 24 19:52:40 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 82.396560669s, txc = 0x557d35416900
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 88.036712646s, txc = 0x560fd344ac00
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 87.035087585s, txc = 0x560fd35b6f00
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 86.066055298s, txc = 0x560fd3761b00
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 47.587009430s, txc = 0x560fd0e9c600
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 47.586860657s, txc = 0x560fd2ea6900
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 47.586788177s, txc = 0x560fd0e9cc00
Nov 24 19:52:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 24 19:52:40 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 47.586734772s, txc = 0x560fd3748300
Nov 24 19:52:40 compute-0 ceph-mon[75677]: pgmap v210: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 19:52:40 compute-0 ceph-mon[75677]: 9.2 scrub starts
Nov 24 19:52:40 compute-0 ceph-mon[75677]: 9.2 scrub ok
Nov 24 19:52:40 compute-0 ceph-mon[75677]: 2.f deep-scrub starts
Nov 24 19:52:40 compute-0 ceph-mon[75677]: 2.f deep-scrub ok
Nov 24 19:52:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:52:40 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f256d5f2-cff0-4a9b-9b93-8fba8cbcd89e does not exist
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 24 19:52:40 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 08dcf8db-32ef-4d8d-88a7-a54a95d603bc does not exist
Nov 24 19:52:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 19:52:40 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 645f1fad-cec7-48b4-984c-f3d39b3ce002 does not exist
Nov 24 19:52:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:52:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:52:40 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:52:40 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:52:40 compute-0 sudo[108934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:52:40 compute-0 sudo[108934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:52:40 compute-0 sudo[108934]: pam_unix(sudo:session): session closed for user root
Nov 24 19:52:40 compute-0 sudo[108960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:52:40 compute-0 sudo[108960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:52:40 compute-0 sudo[108960]: pam_unix(sudo:session): session closed for user root
Nov 24 19:52:40 compute-0 sudo[108987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:52:40 compute-0 sudo[108987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:52:40 compute-0 sudo[108987]: pam_unix(sudo:session): session closed for user root
Nov 24 19:52:40 compute-0 sudo[109012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:52:40 compute-0 sudo[109012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.143074833 +0000 UTC m=+0.050625648 container create bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 19:52:41 compute-0 systemd[1]: Started libpod-conmon-bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36.scope.
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.119043857 +0000 UTC m=+0.026594682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:52:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.234134722 +0000 UTC m=+0.141685557 container init bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 19:52:41 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:41 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e107 prepare_failure osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232] from osd.1 is reporting failure:1
Nov 24 19:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:41.234+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:41.234+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.0 reported failed by osd.1
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e107 prepare_failure osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471] from osd.1 is reporting failure:1
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.2 reported failed by osd.1
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.239884162 +0000 UTC m=+0.147434967 container start bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: 6 slow ops, oldest one blocked for 86 sec, mon.compute-0 has slow ops (SLOW_OPS)
Nov 24 19:52:41 compute-0 systemd[1]: libpod-bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36.scope: Deactivated successfully.
Nov 24 19:52:41 compute-0 affectionate_bohr[109103]: 167 167
Nov 24 19:52:41 compute-0 conmon[109103]: conmon bf4d5f37a14dc7c9fe5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36.scope/container/memory.events
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.276097693 +0000 UTC m=+0.183648528 container attach bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.276947616 +0000 UTC m=+0.184498421 container died bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:52:41 compute-0 ceph-osd[90884]: osd.2 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:28.014652+0000 front 2025-11-24T19:51:28.014729+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:41 compute-0 ceph-osd[90884]: osd.2 107 heartbeat_check: no reply from 192.168.122.100:6808 osd.1 since back 2025-11-24T19:51:28.014529+0000 front 2025-11-24T19:51:28.014677+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:41 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e107 prepare_failure osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232] from osd.2 is reporting failure:1
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.0 reported failed by osd.2
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e107 prepare_failure osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308] from osd.2 is reporting failure:1
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.1 reported failed by osd.2
Nov 24 19:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:52:41.299+0000 7fcc393c3640 -1 osd.2 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:28.014652+0000 front 2025-11-24T19:51:28.014729+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:52:41.299+0000 7fcc393c3640 -1 osd.2 107 heartbeat_check: no reply from 192.168.122.100:6808 osd.1 since back 2025-11-24T19:51:28.014529+0000 front 2025-11-24T19:51:28.014677+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:41 compute-0 ceph-osd[88624]: osd.0 107 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:52:41.310+0000 7f2ca3ee7640 -1 osd.0 107 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:41 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 24 19:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ccda2f25ff6eb27740701d79d48228b4a07e730433323dd6fcd955b79d20de5-merged.mount: Deactivated successfully.
Nov 24 19:52:41 compute-0 podman[109087]: 2025-11-24 19:52:41.333792795 +0000 UTC m=+0.241343600 container remove bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 19:52:41 compute-0 systemd[1]: libpod-conmon-bf4d5f37a14dc7c9fe5aebeec6d9c70ea1bbb35c9366e36d5b9751f2fda02e36.scope: Deactivated successfully.
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 24 19:52:41 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 108 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=78/79 n=5 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=108 pruub=15.260021210s) [0] r=-1 lpr=108 pi=[78,108)/1 crt=38'385 mlcod 0'0 active pruub 262.165344238s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:52:41 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 108 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=78/79 n=5 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=108 pruub=15.259968758s) [0] r=-1 lpr=108 pi=[78,108)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 262.165344238s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:52:41 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=108) [0] r=0 lpr=108 pi=[78,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 8.a scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 8.a scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 10.7 scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 10.7 scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 2.b scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 2.8 scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 9.4 scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 4.13 scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 4.13 scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 4.11 scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 8.15 scrub starts
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v214: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v216: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v218: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v225: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v227: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 2.8 scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 2.b scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 4.11 scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 8.15 scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:41 compute-0 ceph-mon[75677]: 9.4 scrub ok
Nov 24 19:52:41 compute-0 ceph-mon[75677]: pgmap v256: 305 pgs: 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:52:41 compute-0 ceph-mon[75677]: osdmap e107: 3 total, 3 up, 3 in
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:52:41 compute-0 ceph-mon[75677]: osd.0 reported failed by osd.1
Nov 24 19:52:41 compute-0 ceph-mon[75677]: osd.2 reported failed by osd.1
Nov 24 19:52:41 compute-0 ceph-mon[75677]: Health check failed: 6 slow ops, oldest one blocked for 86 sec, mon.compute-0 has slow ops (SLOW_OPS)
Nov 24 19:52:41 compute-0 ceph-mon[75677]: osd.0 reported failed by osd.2
Nov 24 19:52:41 compute-0 ceph-mon[75677]: osd.1 reported failed by osd.2
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 24 19:52:41 compute-0 podman[109136]: 2025-11-24 19:52:41.470137182 +0000 UTC m=+0.021654024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:52:41 compute-0 podman[109136]: 2025-11-24 19:52:41.580668177 +0000 UTC m=+0.132185049 container create 5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 19:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 24 19:52:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 24 19:52:41 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[78,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:52:41 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[78,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:52:41 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 109 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=78/79 n=5 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=109) [0]/[2] r=0 lpr=109 pi=[78,109)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:52:41 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 109 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=78/79 n=5 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=109) [0]/[2] r=0 lpr=109 pi=[78,109)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:52:41 compute-0 systemd[1]: Started libpod-conmon-5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff.scope.
Nov 24 19:52:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f82ab4aa20cb7ae791bdb201606102e0665ccd825d871715c36ea2ff2068c3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f82ab4aa20cb7ae791bdb201606102e0665ccd825d871715c36ea2ff2068c3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f82ab4aa20cb7ae791bdb201606102e0665ccd825d871715c36ea2ff2068c3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f82ab4aa20cb7ae791bdb201606102e0665ccd825d871715c36ea2ff2068c3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f82ab4aa20cb7ae791bdb201606102e0665ccd825d871715c36ea2ff2068c3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:52:41 compute-0 podman[109136]: 2025-11-24 19:52:41.700852664 +0000 UTC m=+0.252369576 container init 5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:52:41 compute-0 podman[109136]: 2025-11-24 19:52:41.716484001 +0000 UTC m=+0.268000863 container start 5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:52:41 compute-0 podman[109136]: 2025-11-24 19:52:41.876265058 +0000 UTC m=+0.427781930 container attach 5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 19:52:42 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:42.237+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:42 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:42.238+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 24 19:52:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 24 19:52:42 compute-0 ceph-osd[90884]: osd.2 109 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:28.014652+0000 front 2025-11-24T19:51:28.014729+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:42 compute-0 ceph-osd[90884]: osd.2 109 heartbeat_check: no reply from 192.168.122.100:6808 osd.1 since back 2025-11-24T19:51:28.014529+0000 front 2025-11-24T19:51:28.014677+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:52:42.331+0000 7fcc393c3640 -1 osd.2 109 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:28.014652+0000 front 2025-11-24T19:51:28.014729+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-2[90880]: 2025-11-24T19:52:42.331+0000 7fcc393c3640 -1 osd.2 109 heartbeat_check: no reply from 192.168.122.100:6808 osd.1 since back 2025-11-24T19:51:28.014529+0000 front 2025-11-24T19:51:28.014677+0000 (oldest deadline 2025-11-24T19:51:53.314671+0000)
Nov 24 19:52:42 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 24 19:52:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 24 19:52:42 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:52:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:52:42.339+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:42 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 24 19:52:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 24 19:52:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:43 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:43 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:43.288+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:43.288+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:43 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:52:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 24 19:52:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:52:43.301+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:44.253+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:44 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:44 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:44.253+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 24 19:52:44 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:52:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:52:44.263+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:45 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:45 compute-0 ceph-osd[89640]: osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:45.221+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6804 osd.0 since back 2025-11-24T19:51:24.474012+0000 front 2025-11-24T19:51:24.473943+0000 (oldest deadline 2025-11-24T19:51:48.573787+0000)
Nov 24 19:52:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:52:45.221+0000 7f1a67169640 -1 osd.1 107 heartbeat_check: no reply from 192.168.122.100:6812 osd.2 since back 2025-11-24T19:51:28.574221+0000 front 2025-11-24T19:51:28.574319+0000 (oldest deadline 2025-11-24T19:51:53.874222+0000)
Nov 24 19:52:45 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:52:45.309+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:52:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:52:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 24 19:52:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 24 19:52:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:47 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 24 19:52:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:49 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Nov 24 19:52:49 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:53 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:52:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:57 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 19:52:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:52:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 24 19:52:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 17.5408 seconds
Nov 24 19:52:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 24 19:52:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 19:52:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 24 19:52:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 19:52:59 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 6 slow ops, oldest one blocked for 86 sec, mon.compute-0 has slow ops)
Nov 24 19:52:59 compute-0 ceph-mon[75677]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 17.023162842s
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 17.023164749s
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.023366928s, txc = 0x55ba3cfa0f00
Nov 24 19:52:59 compute-0 ceph-osd[89640]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1a52121640' had timed out after 15.000000954s
Nov 24 19:52:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 24 19:52:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 24 19:52:59 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:59 compute-0 ceph-osd[88624]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:59 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 17.062349319s
Nov 24 19:52:59 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 17.055347443s
Nov 24 19:52:59 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 17.055347443s
Nov 24 19:52:59 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.055603027s, txc = 0x560fd3760300
Nov 24 19:52:59 compute-0 ceph-osd[88624]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f2c8ee9f640' had timed out after 15.000000954s
Nov 24 19:52:59 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 17.062349319s
Nov 24 19:52:59 compute-0 ceph-mon[75677]: 4.a scrub starts
Nov 24 19:52:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:52:59 compute-0 ceph-mon[75677]: 4.a scrub ok
Nov 24 19:52:59 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 19:52:59 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 19:52:59 compute-0 ceph-mon[75677]: osdmap e108: 3 total, 3 up, 3 in
Nov 24 19:52:59 compute-0 ceph-mon[75677]: osdmap e109: 3 total, 3 up, 3 in
Nov 24 19:52:59 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.062553406s, txc = 0x557d353b9800
Nov 24 19:52:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 24 19:52:59 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Nov 24 19:52:59 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 24 19:52:59 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.463224411s, txc = 0x55ba3cfa6c00
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.292156219s, txc = 0x55ba3bb36000
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 15.461666107s, txc = 0x55ba3d7b3500
Nov 24 19:52:59 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.526354790s, txc = 0x55ba3b651b00
Nov 24 19:52:59 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.565292358s, txc = 0x560fd3746c00
Nov 24 19:52:59 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 16.614114761s, txc = 0x560fd3708f00
Nov 24 19:53:00 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.717786789s, txc = 0x557d353b8000
Nov 24 19:53:00 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.687219620s, txc = 0x557d353bac00
Nov 24 19:53:00 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.649962425s, txc = 0x557d3536d500
Nov 24 19:53:00 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.648659706s, txc = 0x557d35417800
Nov 24 19:53:00 compute-0 ceph-osd[89640]: osd.1 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:53:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:00.236+0000 7f1a67169640 -1 osd.1 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 5 active+clean+scrubbing, 2 active+clean+laggy, 1 remapped+peering, 1 active+clean+scrubbing+deep, 296 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:00 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:00.422+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:00 compute-0 competent_hugle[109155]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:53:00 compute-0 competent_hugle[109155]: --> relative data size: 1.0
Nov 24 19:53:00 compute-0 competent_hugle[109155]: --> All data devices are unavailable
Nov 24 19:53:00 compute-0 systemd[1]: libpod-5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff.scope: Deactivated successfully.
Nov 24 19:53:00 compute-0 systemd[1]: libpod-5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff.scope: Consumed 1.082s CPU time.
Nov 24 19:53:00 compute-0 podman[109206]: 2025-11-24 19:53:00.659327293 +0000 UTC m=+0.059290474 container died 5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 19:53:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 24 19:53:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e110 prepare_failure osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232] from osd.2 is reporting failure:0
Nov 24 19:53:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.0 failure report canceled by osd.2
Nov 24 19:53:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e110 prepare_failure osd.1 [v2:192.168.122.100:6806/326699308,v1:192.168.122.100:6807/326699308] from osd.2 is reporting failure:0
Nov 24 19:53:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.1 failure report canceled by osd.2
Nov 24 19:53:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e110 prepare_failure osd.0 [v2:192.168.122.100:6802/1291375232,v1:192.168.122.100:6803/1291375232] from osd.1 is reporting failure:0
Nov 24 19:53:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.0 failure report canceled by osd.1
Nov 24 19:53:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e110 prepare_failure osd.2 [v2:192.168.122.100:6810/682082471,v1:192.168.122.100:6811/682082471] from osd.1 is reporting failure:0
Nov 24 19:53:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osd.2 failure report canceled by osd.1
Nov 24 19:53:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 24 19:53:01 compute-0 ceph-osd[89640]: osd.1 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:53:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:01.196+0000 7f1a67169640 -1 osd.1 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:01 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:01.466+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 24 19:53:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 87 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 8.13 scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 8.13 scrub ok
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 4.1c scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 5.7 scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 4.1c scrub ok
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 5.7 scrub ok
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v259: 305 pgs: 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 5.5 scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 8.16 scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v260: 305 pgs: 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 11.15 scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 8.17 scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v261: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 3.1e scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v262: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: 7.1a deep-scrub starts
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v263: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v264: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v265: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v266: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: pgmap v267: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing+laggy, 1 active+clean+laggy, 2 active+clean+scrubbing, 300 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 19:53:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 19:53:01 compute-0 ceph-mon[75677]: Health check cleared: SLOW_OPS (was: 6 slow ops, oldest one blocked for 86 sec, mon.compute-0 has slow ops)
Nov 24 19:53:01 compute-0 ceph-mon[75677]: Cluster is now healthy
Nov 24 19:53:02 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:53:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:02.233+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:02 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 19:53:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 5 active+clean+scrubbing, 2 active+clean+laggy, 1 remapped+peering, 1 active+clean+scrubbing+deep, 296 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:02 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 19:53:02 compute-0 ceph-osd[88624]: osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:02.439+0000 7f2ca3ee7640 -1 osd.0 109 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 24 19:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f82ab4aa20cb7ae791bdb201606102e0665ccd825d871715c36ea2ff2068c3f-merged.mount: Deactivated successfully.
Nov 24 19:53:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 24 19:53:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 24 19:53:03 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:03.220+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 24 19:53:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 19:53:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 19:53:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 24 19:53:03 compute-0 ceph-osd[88624]: osd.0 110 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:03.453+0000 7f2ca3ee7640 -1 osd.0 110 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 24 19:53:03 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 110 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=109/110 n=5 ec=47/32 lis/c=78/78 les/c/f=79/79/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[78,109)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:53:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 19:53:04 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:04.200+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 5 active+clean+scrubbing, 2 active+clean+laggy, 1 remapped+peering, 1 active+clean+scrubbing+deep, 296 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:04 compute-0 ceph-osd[88624]: osd.0 110 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:04.408+0000 7f2ca3ee7640 -1 osd.0 110 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 8.17 scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 8.16 scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 5.5 scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 7.1a deep-scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 3.1e scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 11.15 scrub ok
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:53:05 compute-0 ceph-mon[75677]: pgmap v268: 305 pgs: 5 active+clean+scrubbing, 2 active+clean+laggy, 1 remapped+peering, 1 active+clean+scrubbing+deep, 296 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:05 compute-0 ceph-mon[75677]: osd.0 failure report canceled by osd.2
Nov 24 19:53:05 compute-0 ceph-mon[75677]: osd.1 failure report canceled by osd.2
Nov 24 19:53:05 compute-0 ceph-mon[75677]: osd.0 failure report canceled by osd.1
Nov 24 19:53:05 compute-0 ceph-mon[75677]: osd.2 failure report canceled by osd.1
Nov 24 19:53:05 compute-0 ceph-mon[75677]: osdmap e110: 3 total, 3 up, 3 in
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:53:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:05 compute-0 ceph-mon[75677]: Health check failed: 1 slow ops, oldest one blocked for 87 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 24 19:53:05 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:05.225+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:05 compute-0 podman[109206]: 2025-11-24 19:53:05.271705522 +0000 UTC m=+4.671668703 container remove 5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 19:53:05 compute-0 systemd[1]: libpod-conmon-5b2145316f2bb072a9b8285cb85a15e980827bd07b93a2dc37d3d055aa9b11ff.scope: Deactivated successfully.
Nov 24 19:53:05 compute-0 sudo[109012]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:05 compute-0 sudo[109237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:53:05 compute-0 sudo[109237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:05 compute-0 sudo[109237]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:05 compute-0 ceph-osd[88624]: osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:05.423+0000 7f2ca3ee7640 -1 osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:05 compute-0 sudo[109262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:53:05 compute-0 sudo[109262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:05 compute-0 sudo[109262]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:05 compute-0 sudo[109287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:53:05 compute-0 sudo[109287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:05 compute-0 sudo[109287]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 24 19:53:05 compute-0 sudo[109312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:53:05 compute-0 sudo[109312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:06 compute-0 podman[109377]: 2025-11-24 19:53:05.949885985 +0000 UTC m=+0.055778292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:53:06 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 24 19:53:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:06.207+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 1 active+recovering+remapped, 1 active+clean+scrubbing, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 7/216 objects misplaced (3.241%)
Nov 24 19:53:06 compute-0 ceph-osd[88624]: osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:06.407+0000 7f2ca3ee7640 -1 osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:06 compute-0 podman[109377]: 2025-11-24 19:53:06.522981116 +0000 UTC m=+0.628873433 container create 4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:53:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 24 19:53:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 24 19:53:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 11.12 scrub starts
Nov 24 19:53:07 compute-0 ceph-mon[75677]: pgmap v270: 305 pgs: 5 active+clean+scrubbing, 2 active+clean+laggy, 1 remapped+peering, 1 active+clean+scrubbing+deep, 296 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 11.12 scrub ok
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 5.4 scrub starts
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 5.4 scrub ok
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 8.19 scrub starts
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 8.19 scrub ok
Nov 24 19:53:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 19:53:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: osdmap e111: 3 total, 3 up, 3 in
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 9.a scrub starts
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: pgmap v272: 305 pgs: 5 active+clean+scrubbing, 2 active+clean+laggy, 1 remapped+peering, 1 active+clean+scrubbing+deep, 296 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 9.a scrub ok
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 8.1e scrub starts
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:07 compute-0 systemd[1]: Started libpod-conmon-4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554.scope.
Nov 24 19:53:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:53:07 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:07.235+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 106 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:07 compute-0 ceph-osd[88624]: osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:07.414+0000 7f2ca3ee7640 -1 osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:07 compute-0 podman[109377]: 2025-11-24 19:53:07.520274123 +0000 UTC m=+1.626166460 container init 4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 19:53:07 compute-0 podman[109377]: 2025-11-24 19:53:07.535355375 +0000 UTC m=+1.641247672 container start 4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:53:07 compute-0 hardcore_hertz[109396]: 167 167
Nov 24 19:53:07 compute-0 systemd[1]: libpod-4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554.scope: Deactivated successfully.
Nov 24 19:53:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 24 19:53:07 compute-0 sshd-session[109227]: Invalid user admin from 27.79.44.141 port 51208
Nov 24 19:53:08 compute-0 ceph-osd[89640]: osd.1 110 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:08.264+0000 7f1a67169640 -1 osd.1 110 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 19:53:08 compute-0 ceph-osd[88624]: osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:08.378+0000 7f2ca3ee7640 -1 osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 1 active+recovering+remapped, 1 active+clean+scrubbing, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 7/216 objects misplaced (3.241%)
Nov 24 19:53:08 compute-0 podman[109377]: 2025-11-24 19:53:08.460770222 +0000 UTC m=+2.566662519 container attach 4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 19:53:08 compute-0 podman[109377]: 2025-11-24 19:53:08.461209913 +0000 UTC m=+2.567102210 container died 4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:53:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 24 19:53:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 19:53:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 19:53:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 24 19:53:08 compute-0 sshd-session[109227]: Connection closed by invalid user admin 27.79.44.141 port 51208 [preauth]
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 8.1e scrub ok
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 9.10 scrub starts
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:09 compute-0 ceph-mon[75677]: pgmap v273: 305 pgs: 1 active+recovering+remapped, 1 active+clean+scrubbing, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 7/216 objects misplaced (3.241%)
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 9.10 scrub ok
Nov 24 19:53:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 19:53:09 compute-0 ceph-mon[75677]: Health check update: 5 slow ops, oldest one blocked for 106 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:09 compute-0 ceph-osd[89640]: osd.1 111 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:09.220+0000 7f1a67169640 -1 osd.1 111 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 19:53:09 compute-0 ceph-osd[88624]: osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:09.400+0000 7f2ca3ee7640 -1 osd.0 111 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:09 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 24 19:53:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 24 19:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff66e15076f5db15c702362380c8b7da238b8bb556855d17a2916d2eb92b6492-merged.mount: Deactivated successfully.
Nov 24 19:53:10 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 19:53:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:10.261+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:10 compute-0 ceph-osd[88624]: osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 24 19:53:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:10.363+0000 7f2ca3ee7640 -1 osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 19:53:11 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:11.276+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:11 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 24 19:53:11 compute-0 ceph-osd[88624]: osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:11.360+0000 7f2ca3ee7640 -1 osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:12 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:12.251+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 24 19:53:12 compute-0 ceph-osd[88624]: osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:12.317+0000 7f2ca3ee7640 -1 osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 24 19:53:12 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 112 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=112 pruub=10.863128662s) [0] r=-1 lpr=112 pi=[66,112)/1 crt=38'385 mlcod 0'0 active pruub 288.929260254s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:12 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 112 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=112 pruub=10.862995148s) [0] r=-1 lpr=112 pi=[66,112)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 288.929260254s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:12 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=112) [0] r=0 lpr=112 pi=[66,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 24 19:53:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:53:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:53:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 24 19:53:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 19:53:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:53:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 24 19:53:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 111 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 19:53:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 24 19:53:13 compute-0 ceph-mon[75677]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 19:53:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:13 compute-0 ceph-mon[75677]: pgmap v274: 305 pgs: 1 active+recovering+remapped, 1 active+clean+scrubbing, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 7/216 objects misplaced (3.241%)
Nov 24 19:53:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 19:53:13 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 19:53:13 compute-0 ceph-mon[75677]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 19:53:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:13 compute-0 ceph-mon[75677]: osdmap e112: 3 total, 3 up, 3 in
Nov 24 19:53:13 compute-0 podman[109377]: 2025-11-24 19:53:13.115266967 +0000 UTC m=+7.221159304 container remove 4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 19:53:13 compute-0 systemd[1]: libpod-conmon-4b52dca5750ee23d7b1e1f745bf48df42a3324abb894902f9cd8b601d7fbc554.scope: Deactivated successfully.
Nov 24 19:53:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 24 19:53:13 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:13.222+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 24 19:53:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 24 19:53:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 24 19:53:13 compute-0 ceph-osd[88624]: osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:13.324+0000 7f2ca3ee7640 -1 osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 24 19:53:13 compute-0 podman[109431]: 2025-11-24 19:53:13.306908542 +0000 UTC m=+0.038958923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:53:13 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 113 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=109/110 n=5 ec=47/32 lis/c=109/78 les/c/f=110/79/0 sis=113 pruub=13.942919731s) [0] async=[0] r=-1 lpr=113 pi=[78,113)/1 crt=38'385 mlcod 38'385 active pruub 293.163543701s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:13 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 113 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=109/110 n=5 ec=47/32 lis/c=109/78 les/c/f=110/79/0 sis=113 pruub=13.942816734s) [0] r=-1 lpr=113 pi=[78,113)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 293.163543701s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 24 19:53:13 compute-0 podman[109431]: 2025-11-24 19:53:13.941205646 +0000 UTC m=+0.673256037 container create 3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 19:53:14 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:14.191+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:14 compute-0 ceph-osd[88624]: osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:14.294+0000 7f2ca3ee7640 -1 osd.0 112 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 1 unknown, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 19:53:14 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 113 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=109/78 les/c/f=110/79/0 sis=113) [0] r=0 lpr=113 pi=[78,113)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:14 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 113 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=109/78 les/c/f=110/79/0 sis=113) [0] r=0 lpr=113 pi=[78,113)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:14 compute-0 systemd[1]: Started libpod-conmon-3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93.scope.
Nov 24 19:53:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e417ff2924b167e29f2830dee6f1aac2c7423c5deec752884235b5446449e518/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e417ff2924b167e29f2830dee6f1aac2c7423c5deec752884235b5446449e518/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e417ff2924b167e29f2830dee6f1aac2c7423c5deec752884235b5446449e518/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e417ff2924b167e29f2830dee6f1aac2c7423c5deec752884235b5446449e518/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:53:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:53:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 10.d scrub starts
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: pgmap v276: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 8.11 scrub starts
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 11.11 scrub starts
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: pgmap v277: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 11.11 scrub ok
Nov 24 19:53:14 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 8.11 scrub ok
Nov 24 19:53:14 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 10.d scrub ok
Nov 24 19:53:14 compute-0 ceph-mon[75677]: Health check update: 7 slow ops, oldest one blocked for 111 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:14 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 9.12 scrub starts
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: osdmap e113: 3 total, 3 up, 3 in
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 9.12 scrub ok
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 10.9 scrub starts
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:14 compute-0 ceph-mon[75677]: 10.9 scrub ok
Nov 24 19:53:15 compute-0 podman[109431]: 2025-11-24 19:53:15.034771217 +0000 UTC m=+1.766821638 container init 3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:53:15 compute-0 podman[109431]: 2025-11-24 19:53:15.045579298 +0000 UTC m=+1.777629679 container start 3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 19:53:15 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:15.193+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:15 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 24 19:53:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:15.343+0000 7f2ca3ee7640 -1 osd.0 113 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:15 compute-0 ceph-osd[88624]: osd.0 113 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:15 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 19:53:15 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 19:53:15 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[66,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:15 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[66,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:15 compute-0 podman[109431]: 2025-11-24 19:53:15.590985508 +0000 UTC m=+2.323035889 container attach 3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:53:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 24 19:53:16 compute-0 ceph-osd[89640]: osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:16.176+0000 7f1a67169640 -1 osd.1 112 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:16.328+0000 7f2ca3ee7640 -1 osd.0 114 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:16 compute-0 ceph-osd[88624]: osd.0 114 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 1 unknown, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 24 19:53:16 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 114 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=0 lpr=114 pi=[66,114)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:16 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 114 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=67/68 n=5 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114 pruub=14.629545212s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=38'385 mlcod 0'0 active pruub 296.927093506s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:16 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 114 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=66/67 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] r=0 lpr=114 pi=[66,114)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:16 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 114 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=67/68 n=5 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114 pruub=14.629084587s) [1] r=-1 lpr=114 pi=[67,114)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 296.927093506s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 24 19:53:16 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]: {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:     "0": [
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:         {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "devices": [
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "/dev/loop3"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             ],
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_name": "ceph_lv0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_size": "21470642176",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "name": "ceph_lv0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "tags": {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cluster_name": "ceph",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.crush_device_class": "",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.encrypted": "0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osd_id": "0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.type": "block",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.vdo": "0"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             },
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "type": "block",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "vg_name": "ceph_vg0"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:         }
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:     ],
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:     "1": [
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:         {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "devices": [
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "/dev/loop4"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             ],
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_name": "ceph_lv1",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_size": "21470642176",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "name": "ceph_lv1",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "tags": {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cluster_name": "ceph",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.crush_device_class": "",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.encrypted": "0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osd_id": "1",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.type": "block",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.vdo": "0"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             },
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "type": "block",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "vg_name": "ceph_vg1"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:         }
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:     ],
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:     "2": [
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:         {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "devices": [
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "/dev/loop5"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             ],
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_name": "ceph_lv2",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_size": "21470642176",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "name": "ceph_lv2",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "tags": {
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.cluster_name": "ceph",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.crush_device_class": "",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.encrypted": "0",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osd_id": "2",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.type": "block",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:                 "ceph.vdo": "0"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             },
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "type": "block",
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:             "vg_name": "ceph_vg2"
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:         }
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]:     ]
Nov 24 19:53:16 compute-0 peaceful_jackson[109449]: }
Nov 24 19:53:17 compute-0 systemd[1]: libpod-3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93.scope: Deactivated successfully.
Nov 24 19:53:17 compute-0 podman[109431]: 2025-11-24 19:53:17.022180904 +0000 UTC m=+3.754231295 container died 3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:53:17 compute-0 ceph-osd[89640]: osd.1 114 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:17.131+0000 7f1a67169640 -1 osd.1 114 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:17 compute-0 ceph-mon[75677]: pgmap v279: 305 pgs: 1 unknown, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 19:53:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:53:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 19:53:17 compute-0 ceph-osd[88624]: osd.0 114 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:17.309+0000 7f2ca3ee7640 -1 osd.0 114 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 24 19:53:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 116 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:18.101+0000 7f1a67169640 -1 osd.1 114 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:18 compute-0 ceph-osd[89640]: osd.1 114 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'default.rgw.log' : 8 ])
Nov 24 19:53:18 compute-0 ceph-osd[88624]: osd.0 114 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:18.298+0000 7f2ca3ee7640 -1 osd.0 114 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:18 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 24 19:53:18 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 24 19:53:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 24 19:53:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 1 unknown, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e417ff2924b167e29f2830dee6f1aac2c7423c5deec752884235b5446449e518-merged.mount: Deactivated successfully.
Nov 24 19:53:19 compute-0 ceph-osd[89640]: osd.1 114 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'default.rgw.log' : 8 ])
Nov 24 19:53:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:19.119+0000 7f1a67169640 -1 osd.1 114 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:19 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 115 pg[9.1c( v 38'385 (0'0,38'385] local-lis/les=113/115 n=5 ec=47/32 lis/c=109/78 les/c/f=110/79/0 sis=113) [0] r=0 lpr=113 pi=[78,113)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:53:19 compute-0 ceph-osd[88624]: osd.0 115 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:19.347+0000 7f2ca3ee7640 -1 osd.0 115 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:19 compute-0 ceph-mon[75677]: osdmap e114: 3 total, 3 up, 3 in
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 8.12 scrub starts
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 8.12 scrub ok
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:19 compute-0 ceph-mon[75677]: pgmap v281: 305 pgs: 1 unknown, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:19 compute-0 ceph-mon[75677]: osdmap e115: 3 total, 3 up, 3 in
Nov 24 19:53:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 24 19:53:20 compute-0 ceph-osd[89640]: osd.1 115 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'default.rgw.log' : 8 ])
Nov 24 19:53:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:20.073+0000 7f1a67169640 -1 osd.1 115 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:20 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 116 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=67/68 n=5 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=116) [1]/[2] r=0 lpr=116 pi=[67,116)/1 crt=38'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:20 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 116 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=67/68 n=5 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=116) [1]/[2] r=0 lpr=116 pi=[67,116)/1 crt=38'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:20 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 24 19:53:20 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 24 19:53:20 compute-0 ceph-osd[88624]: osd.0 115 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:20.354+0000 7f2ca3ee7640 -1 osd.0 115 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 1 unknown, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:21 compute-0 ceph-osd[89640]: osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:21.060+0000 7f1a67169640 -1 osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 24 19:53:21 compute-0 ceph-osd[88624]: osd.0 116 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:21.334+0000 7f2ca3ee7640 -1 osd.0 116 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:21 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 19:53:21 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 19:53:21 compute-0 podman[109431]: 2025-11-24 19:53:21.636517934 +0000 UTC m=+8.368568305 container remove 3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 19:53:21 compute-0 systemd[1]: libpod-conmon-3d0aa899f69e5b1a5411a2b10514a4f03a25dedd8326eb0b1a35ad8356a17e93.scope: Deactivated successfully.
Nov 24 19:53:21 compute-0 sudo[109312]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:21 compute-0 sudo[109488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:53:21 compute-0 sudo[109488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:21 compute-0 sudo[109488]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:21 compute-0 sudo[109513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:53:21 compute-0 sudo[109513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:21 compute-0 sudo[109513]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:21 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 116 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=114/116 n=5 ec=47/32 lis/c=66/66 les/c/f=67/67/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[66,114)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:53:21 compute-0 sudo[109538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:53:21 compute-0 sudo[109538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:21 compute-0 sudo[109538]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 24 19:53:22 compute-0 sudo[109564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:53:22 compute-0 sudo[109564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 19:53:22 compute-0 ceph-osd[89640]: osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:22.089+0000 7f1a67169640 -1 osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 19:53:22 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 116 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'default.rgw.log' : 8 ])
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 3.1d scrub starts
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 3.1d scrub ok
Nov 24 19:53:22 compute-0 ceph-mon[75677]: pgmap v283: 305 pgs: 1 unknown, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'default.rgw.log' : 8 ])
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 9.14 scrub starts
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'default.rgw.log' : 8 ])
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 9.14 scrub ok
Nov 24 19:53:22 compute-0 ceph-mon[75677]: osdmap e116: 3 total, 3 up, 3 in
Nov 24 19:53:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:22 compute-0 ceph-mon[75677]: pgmap v285: 305 pgs: 1 unknown, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:22 compute-0 ceph-osd[88624]: osd.0 116 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:22.341+0000 7f2ca3ee7640 -1 osd.0 116 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 1 unknown, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 24 19:53:22 compute-0 podman[109629]: 2025-11-24 19:53:22.449171746 +0000 UTC m=+0.026727186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:53:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 19:53:23 compute-0 ceph-osd[89640]: osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:23.125+0000 7f1a67169640 -1 osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:23 compute-0 podman[109629]: 2025-11-24 19:53:23.188301084 +0000 UTC m=+0.765856484 container create f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:53:23 compute-0 ceph-osd[88624]: osd.0 116 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:23.321+0000 7f2ca3ee7640 -1 osd.0 116 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 7.1c scrub starts
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 7.1c scrub ok
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 9.1a scrub starts
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 9.1a scrub ok
Nov 24 19:53:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:23 compute-0 ceph-mon[75677]: pgmap v287: 305 pgs: 1 unknown, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:23 compute-0 ceph-mon[75677]: osdmap e117: 3 total, 3 up, 3 in
Nov 24 19:53:24 compute-0 systemd[1]: Started libpod-conmon-f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9.scope.
Nov 24 19:53:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:53:24 compute-0 ceph-osd[89640]: osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:24.086+0000 7f1a67169640 -1 osd.1 116 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:53:24
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Nov 24 19:53:24 compute-0 ceph-osd[88624]: osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:24.335+0000 7f2ca3ee7640 -1 osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:53:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 1 activating+remapped, 1 active+recovering+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 142 B/s wr, 4 op/s; 8/217 objects misplaced (3.687%); 0 B/s, 0 objects/s recovering
Nov 24 19:53:24 compute-0 podman[109629]: 2025-11-24 19:53:24.54917918 +0000 UTC m=+2.126734570 container init f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 19:53:24 compute-0 podman[109629]: 2025-11-24 19:53:24.561757797 +0000 UTC m=+2.139313177 container start f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:53:24 compute-0 crazy_goldberg[109647]: 167 167
Nov 24 19:53:24 compute-0 systemd[1]: libpod-f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9.scope: Deactivated successfully.
Nov 24 19:53:25 compute-0 podman[109629]: 2025-11-24 19:53:25.056521577 +0000 UTC m=+2.634077017 container attach f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:53:25 compute-0 podman[109629]: 2025-11-24 19:53:25.057827 +0000 UTC m=+2.635382390 container died f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:53:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 19:53:25 compute-0 ceph-osd[89640]: osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:25.122+0000 7f1a67169640 -1 osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:25 compute-0 ceph-osd[88624]: osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:25.312+0000 7f2ca3ee7640 -1 osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:25 compute-0 ceph-mon[75677]: 11.5 scrub starts
Nov 24 19:53:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:25 compute-0 ceph-mon[75677]: 11.5 scrub ok
Nov 24 19:53:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:25 compute-0 ceph-mon[75677]: pgmap v288: 305 pgs: 1 activating+remapped, 1 active+recovering+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 142 B/s wr, 4 op/s; 8/217 objects misplaced (3.687%); 0 B/s, 0 objects/s recovering
Nov 24 19:53:25 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 117 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=116/117 n=5 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[67,116)/1 crt=38'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:53:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 19:53:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:26.133+0000 7f1a67169640 -1 osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:26 compute-0 ceph-osd[89640]: osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:26.359+0000 7f2ca3ee7640 -1 osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:26 compute-0 ceph-osd[88624]: osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 1 activating+remapped, 1 active+recovering+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/216 objects misplaced (3.704%); 0 B/s, 0 objects/s recovering
Nov 24 19:53:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 9 slow ops, oldest one blocked for 121 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 24 19:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-21d26416685de9d6a3683c26aaae209a496cdf57c270d26999f02519530de550-merged.mount: Deactivated successfully.
Nov 24 19:53:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:27.124+0000 7f1a67169640 -1 osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:27 compute-0 ceph-osd[89640]: osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 24 19:53:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:27.340+0000 7f2ca3ee7640 -1 osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:27 compute-0 ceph-osd[88624]: osd.0 117 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:27 compute-0 ceph-mon[75677]: 11.7 scrub starts
Nov 24 19:53:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:27 compute-0 ceph-mon[75677]: 11.7 scrub ok
Nov 24 19:53:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:27 compute-0 ceph-mon[75677]: Health check update: 9 slow ops, oldest one blocked for 121 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:27 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 118 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=114/66 les/c/f=116/67/0 sis=118) [0] r=0 lpr=118 pi=[66,118)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:27 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 118 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=114/66 les/c/f=116/67/0 sis=118) [0] r=0 lpr=118 pi=[66,118)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 24 19:53:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:28.149+0000 7f1a67169640 -1 osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:28 compute-0 ceph-osd[89640]: osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:28 compute-0 podman[109629]: 2025-11-24 19:53:28.152228664 +0000 UTC m=+5.729784064 container remove f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 19:53:28 compute-0 systemd[1]: libpod-conmon-f27a93984f8520077ff904ad910bd5d91e57262a39e5ae0513334e7635dbabd9.scope: Deactivated successfully.
Nov 24 19:53:28 compute-0 ceph-osd[88624]: osd.0 118 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:28.381+0000 7f2ca3ee7640 -1 osd.0 118 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 24 19:53:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 1 activating+remapped, 1 active+recovering+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/216 objects misplaced (3.704%); 27 B/s, 1 objects/s recovering
Nov 24 19:53:28 compute-0 podman[109693]: 2025-11-24 19:53:28.399724391 +0000 UTC m=+0.070421812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:53:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 118 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=114/116 n=5 ec=47/32 lis/c=114/66 les/c/f=116/67/0 sis=118 pruub=9.306977272s) [0] async=[0] r=-1 lpr=118 pi=[66,118)/1 crt=38'385 mlcod 38'385 active pruub 303.446960449s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:28 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 118 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=114/116 n=5 ec=47/32 lis/c=114/66 les/c/f=116/67/0 sis=118 pruub=9.306814194s) [0] r=-1 lpr=118 pi=[66,118)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 303.446960449s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:29.193+0000 7f1a67169640 -1 osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:29 compute-0 ceph-osd[89640]: osd.1 117 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:29 compute-0 podman[109693]: 2025-11-24 19:53:29.242524902 +0000 UTC m=+0.913222243 container create a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 19:53:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:29.405+0000 7f2ca3ee7640 -1 osd.0 118 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:29 compute-0 ceph-osd[88624]: osd.0 118 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 24 19:53:29 compute-0 ceph-mon[75677]: pgmap v289: 305 pgs: 1 activating+remapped, 1 active+recovering+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/216 objects misplaced (3.704%); 0 B/s, 0 objects/s recovering
Nov 24 19:53:29 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:29 compute-0 ceph-mon[75677]: osdmap e118: 3 total, 3 up, 3 in
Nov 24 19:53:29 compute-0 systemd[1]: Started libpod-conmon-a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072.scope.
Nov 24 19:53:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d4d7392092f4d1ecc93ce456b6e875a67836b43bee4b36a33ea436930090c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d4d7392092f4d1ecc93ce456b6e875a67836b43bee4b36a33ea436930090c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d4d7392092f4d1ecc93ce456b6e875a67836b43bee4b36a33ea436930090c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1d4d7392092f4d1ecc93ce456b6e875a67836b43bee4b36a33ea436930090c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
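[annotation] The four kernel lines above are informational: each overlay bind mount for the new container sits on an XFS filesystem with 32-bit inode timestamps, which cap out at 0x7fffffff seconds after the Unix epoch. A one-liner confirms the date the kernel is warning about (standard library only):

    from datetime import datetime, timezone
    # 0x7fffffff seconds after the epoch -- the limit named in the kernel messages
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00

XFS filesystems formatted with the bigtime feature extend the range well past 2038; whether these particular filesystems can be reformatted or upgraded is outside what the log shows.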
Nov 24 19:53:30 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 24 19:53:30 compute-0 podman[109693]: 2025-11-24 19:53:30.147439459 +0000 UTC m=+1.818136800 container init a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:53:30 compute-0 podman[109693]: 2025-11-24 19:53:30.159905753 +0000 UTC m=+1.830603124 container start a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:53:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 24 19:53:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:30.190+0000 7f1a67169640 -1 osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:30 compute-0 ceph-osd[89640]: osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 24 19:53:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:30.405+0000 7f2ca3ee7640 -1 osd.0 118 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:30 compute-0 ceph-osd[88624]: osd.0 118 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:30 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 24 19:53:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 1 active+recovering+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1/216 objects misplaced (0.463%); 41 B/s, 1 objects/s recovering
Nov 24 19:53:30 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 24 19:53:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 24 19:53:31 compute-0 podman[109693]: 2025-11-24 19:53:31.057032658 +0000 UTC m=+2.727730019 container attach a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 19:53:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 24 19:53:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:31.166+0000 7f1a67169640 -1 osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:31 compute-0 ceph-osd[89640]: osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:31 compute-0 interesting_haslett[109716]: {
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "osd_id": 2,
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "type": "bluestore"
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:     },
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "osd_id": 1,
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "type": "bluestore"
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:     },
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "osd_id": 0,
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:         "type": "bluestore"
Nov 24 19:53:31 compute-0 interesting_haslett[109716]:     }
Nov 24 19:53:31 compute-0 interesting_haslett[109716]: }
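[annotation] This JSON block is the output of the "ceph-volume ... raw list --format json" call that cephadm issued via sudo at 19:53:22, executed inside the short-lived interesting_haslett container. It inventories the three BlueStore OSDs on this host, one per LVM logical volume. A minimal sketch that maps each OSD id to its backing device, assuming the block has been captured to a hypothetical raw_list.json with the journald prefixes stripped:

    import json

    with open("raw_list.json") as f:        # hypothetical: the JSON above, prefixes removed
        devices = json.load(f)

    # Entries are keyed by osd_uuid; each names the cluster fsid, device, id and store type.
    for osd_uuid, info in sorted(devices.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  "
              f"({info['type']}, fsid {info['ceph_fsid']})")

For this host that prints osd.0 on /dev/mapper/ceph_vg0-ceph_lv0, osd.1 on ceph_vg1-ceph_lv1 and osd.2 on ceph_vg2-ceph_lv2, all under the cluster fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 seen throughout this log.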
Nov 24 19:53:31 compute-0 systemd[1]: libpod-a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072.scope: Deactivated successfully.
Nov 24 19:53:31 compute-0 podman[109693]: 2025-11-24 19:53:31.276576968 +0000 UTC m=+2.947274349 container died a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:53:31 compute-0 systemd[1]: libpod-a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072.scope: Consumed 1.098s CPU time.
Nov 24 19:53:31 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 24 19:53:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 126 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:31 compute-0 ceph-osd[88624]: osd.0 pg_epoch: 119 pg[9.1e( v 38'385 (0'0,38'385] local-lis/les=118/119 n=5 ec=47/32 lis/c=114/66 les/c/f=116/67/0 sis=118) [0] r=0 lpr=118 pi=[66,118)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:53:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:31.453+0000 7f2ca3ee7640 -1 osd.0 119 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:31 compute-0 ceph-osd[88624]: osd.0 119 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:31 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 24 19:53:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 24 19:53:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:31 compute-0 ceph-mon[75677]: pgmap v291: 305 pgs: 1 activating+remapped, 1 active+recovering+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/216 objects misplaced (3.704%); 27 B/s, 1 objects/s recovering
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:31 compute-0 ceph-mon[75677]: osdmap e119: 3 total, 3 up, 3 in
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 11.a scrub starts
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 11.a scrub ok
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 3.7 scrub starts
Nov 24 19:53:31 compute-0 ceph-mon[75677]: pgmap v293: 305 pgs: 1 active+recovering+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1/216 objects misplaced (0.463%); 41 B/s, 1 objects/s recovering
Nov 24 19:53:31 compute-0 ceph-mon[75677]: 3.7 scrub ok
Nov 24 19:53:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:32.130+0000 7f1a67169640 -1 osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:32 compute-0 ceph-osd[89640]: osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 1 active+recovering+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1/216 objects misplaced (0.463%); 18 B/s, 0 objects/s recovering
Nov 24 19:53:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:32.495+0000 7f2ca3ee7640 -1 osd.0 119 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:32 compute-0 ceph-osd[88624]: osd.0 119 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:32 compute-0 sshd-session[109725]: Invalid user admin from 27.79.44.141 port 54776
Nov 24 19:53:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 24 19:53:32 compute-0 sshd-session[109725]: Connection closed by invalid user admin 27.79.44.141 port 54776 [preauth]
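[annotation] Unrelated to the Ceph traffic, these two sshd-session lines record a failed login probe for the nonexistent user "admin" from 27.79.44.141. A minimal sketch that tallies such attempts by source address from the same hypothetical capture file used above:

    import re
    from collections import Counter

    PAT = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    attempts = Counter()
    with open("compute-0.log") as f:          # hypothetical path to the captured journal
        for line in f:
            m = PAT.search(line)
            if m:
                attempts[(m.group(2), m.group(1))] += 1   # (source ip, attempted user)

    for (ip, user), n in attempts.most_common():
        print(f"{ip}  user={user}  attempts={n}")

On a live host, "lastb" or the journal itself would give the same picture; this section shows only the single probe.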
Nov 24 19:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-db1d4d7392092f4d1ecc93ce456b6e875a67836b43bee4b36a33ea436930090c-merged.mount: Deactivated successfully.
Nov 24 19:53:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 24 19:53:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:33.105+0000 7f1a67169640 -1 osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:33 compute-0 ceph-osd[89640]: osd.1 119 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:33 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 120 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=116/117 n=5 ec=47/32 lis/c=116/67 les/c/f=117/68/0 sis=120 pruub=8.253113747s) [1] async=[1] r=-1 lpr=120 pi=[67,120)/1 crt=38'385 mlcod 38'385 active pruub 307.103485107s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:33 compute-0 ceph-osd[90884]: osd.2 pg_epoch: 120 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=116/117 n=5 ec=47/32 lis/c=116/67 les/c/f=117/68/0 sis=120 pruub=8.253015518s) [1] r=-1 lpr=120 pi=[67,120)/1 crt=38'385 mlcod 0'0 unknown NOTIFY pruub 307.103485107s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 19:53:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 24 19:53:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:33.500+0000 7f2ca3ee7640 -1 osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:33 compute-0 ceph-osd[88624]: osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:33 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 120 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=116/67 les/c/f=117/68/0 sis=120) [1] r=0 lpr=120 pi=[67,120)/1 luod=0'0 crt=38'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 19:53:33 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 120 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=0/0 n=5 ec=47/32 lis/c=116/67 les/c/f=117/68/0 sis=120) [1] r=0 lpr=120 pi=[67,120)/1 crt=38'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 19:53:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 24 19:53:34 compute-0 ceph-osd[89640]: osd.1 120 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:34.136+0000 7f1a67169640 -1 osd.1 120 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
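[annotation] The pg_autoscaler figures above are internally consistent: every logged pg target equals usage fraction x bias x a cluster PG budget of 300, which matches the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs (an inference from the numbers, not something the log states). A worked check using values copied from the lines above:

    import math

    PG_BUDGET = 100 * 3   # assumed: default mon_target_pg_per_osd (100) x 3 OSDs

    # (pool, usage fraction, bias, logged pg target)
    rows = [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        ("default.rgw.log",    2.225674773718825e-06,  1.0, 0.0006677024321156476),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]

    for pool, usage, bias, target in rows:
        assert math.isclose(usage * bias * PG_BUDGET, target), pool
    print("all logged pg targets = usage * bias * 300")

Every target is far below its pool's current pg_num, so each one is quantized back to the current value and the autoscaler proposes no pg_num changes on this pass.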
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 11.c scrub starts
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 11.d scrub starts
Nov 24 19:53:34 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 126 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 11.d scrub ok
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 11.c scrub ok
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:34 compute-0 ceph-mon[75677]: pgmap v295: 305 pgs: 1 active+recovering+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1/216 objects misplaced (0.463%); 18 B/s, 0 objects/s recovering
Nov 24 19:53:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:34 compute-0 ceph-mon[75677]: osdmap e120: 3 total, 3 up, 3 in
Nov 24 19:53:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 1 active+recovering+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1/216 objects misplaced (0.463%); 15 B/s, 1 objects/s recovering
Nov 24 19:53:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 24 19:53:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:34.486+0000 7f2ca3ee7640 -1 osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:34 compute-0 ceph-osd[88624]: osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 24 19:53:35 compute-0 ceph-osd[89640]: osd.1 120 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 19:53:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:35.165+0000 7f1a67169640 -1 osd.1 120 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:35 compute-0 podman[109693]: 2025-11-24 19:53:35.226518733 +0000 UTC m=+6.897216094 container remove a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 19:53:35 compute-0 systemd[1]: libpod-conmon-a69bd5d06bfa30f02f75c02ea82387d4e37712969e94fc2c91247f431b64f072.scope: Deactivated successfully.
Nov 24 19:53:35 compute-0 sudo[109564]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:53:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 24 19:53:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:35.448+0000 7f2ca3ee7640 -1 osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:35 compute-0 ceph-osd[88624]: osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 19:53:35 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 11.13 scrub starts
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 11.13 scrub ok
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 11.16 scrub starts
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:36 compute-0 ceph-mon[75677]: pgmap v296: 305 pgs: 1 active+recovering+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1/216 objects misplaced (0.463%); 15 B/s, 1 objects/s recovering
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 11.16 scrub ok
Nov 24 19:53:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:53:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:53:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:36.127+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'default.rgw.log' : 11 ])
Nov 24 19:53:36 compute-0 ceph-osd[89640]: osd.1 pg_epoch: 121 pg[9.1f( v 38'385 (0'0,38'385] local-lis/les=120/121 n=5 ec=47/32 lis/c=116/67 les/c/f=117/68/0 sis=120) [1] r=0 lpr=120 pi=[67,120)/1 crt=38'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 19:53:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:53:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 62cdb402-e73b-4c76-a575-d1c85b744757 does not exist
Nov 24 19:53:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bf365617-def2-4221-9bf2-e05b2b4c3f51 does not exist
Nov 24 19:53:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 131 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:36 compute-0 sudo[109788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:53:36 compute-0 sudo[109788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:36 compute-0 sudo[109788]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:36 compute-0 ceph-osd[88624]: osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:36.448+0000 7f2ca3ee7640 -1 osd.0 120 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 24 19:53:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 24 19:53:36 compute-0 sudo[109814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:53:36 compute-0 sudo[109814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:53:36 compute-0 sudo[109814]: pam_unix(sudo:session): session closed for user root
Nov 24 19:53:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:37.078+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:37 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 24 19:53:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:37.444+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:37 compute-0 sshd-session[109783]: Invalid user admin from 27.79.44.141 port 40310
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 11.1d scrub starts
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 11.1d scrub ok
Nov 24 19:53:37 compute-0 ceph-mon[75677]: osdmap e121: 3 total, 3 up, 3 in
Nov 24 19:53:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'default.rgw.log' : 11 ])
Nov 24 19:53:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:53:37 compute-0 ceph-mon[75677]: pgmap v298: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:37 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 131 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 5.2 scrub starts
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:37 compute-0 ceph-mon[75677]: 5.2 scrub ok
Nov 24 19:53:37 compute-0 sshd-session[109783]: Connection closed by invalid user admin 27.79.44.141 port 40310 [preauth]
Nov 24 19:53:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 24 19:53:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:38.103+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 24 19:53:38 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 24 19:53:38 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Nov 24 19:53:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:38.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:39.110+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:39 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Nov 24 19:53:39 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 24 19:53:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:39.435+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:40.129+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:53:40 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 24 19:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:53:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:40.452+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 8.d scrub starts
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 4.10 scrub starts
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 4.10 scrub ok
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 8.d scrub ok
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 7.1 deep-scrub starts
Nov 24 19:53:41 compute-0 ceph-mon[75677]: pgmap v299: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:41.134+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 141 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:41.493+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 24 19:53:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 24 19:53:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 24 19:53:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:42.117+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 7.1 deep-scrub ok
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 11.b scrub starts
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 11.b scrub ok
Nov 24 19:53:42 compute-0 ceph-mon[75677]: pgmap v300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:42 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 141 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 10.1 scrub starts
Nov 24 19:53:42 compute-0 ceph-mon[75677]: 10.1 scrub ok
Nov 24 19:53:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Nov 24 19:53:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:42.523+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Nov 24 19:53:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:43.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:43 compute-0 ceph-mon[75677]: 5.1d scrub starts
Nov 24 19:53:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:43 compute-0 ceph-mon[75677]: 5.1d scrub ok
Nov 24 19:53:43 compute-0 ceph-mon[75677]: pgmap v301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 19:53:43 compute-0 ceph-mon[75677]: 10.e deep-scrub starts
Nov 24 19:53:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 24 19:53:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 24 19:53:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:43.500+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 24 19:53:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 24 19:53:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:44.156+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 10.e deep-scrub ok
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 11.9 scrub starts
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 5.3 scrub starts
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 11.9 scrub ok
Nov 24 19:53:44 compute-0 ceph-mon[75677]: 5.3 scrub ok
Nov 24 19:53:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:44 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 24 19:53:44 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 24 19:53:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:44.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:45.184+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 19:53:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 19:53:45 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:45 compute-0 ceph-mon[75677]: pgmap v302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:45 compute-0 ceph-mon[75677]: 7.5 scrub starts
Nov 24 19:53:45 compute-0 ceph-mon[75677]: 7.5 scrub ok
Nov 24 19:53:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 24 19:53:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 24 19:53:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:45.494+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:46.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 24 19:53:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 10.12 scrub starts
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 10.12 scrub ok
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 3.8 scrub starts
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 3.8 scrub ok
Nov 24 19:53:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 24 19:53:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:46.540+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:47.134+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:47 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 24 19:53:47 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 24 19:53:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 146 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:47 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:47 compute-0 ceph-mon[75677]: 10.10 scrub starts
Nov 24 19:53:47 compute-0 ceph-mon[75677]: 10.10 scrub ok
Nov 24 19:53:47 compute-0 ceph-mon[75677]: pgmap v303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 24 19:53:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:47.550+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:48.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:48 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 24 19:53:48 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 24 19:53:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 19:53:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:48 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:48 compute-0 ceph-mon[75677]: 11.2 scrub starts
Nov 24 19:53:48 compute-0 ceph-mon[75677]: 11.2 scrub ok
Nov 24 19:53:48 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 146 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:48.544+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 24 19:53:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 24 19:53:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:49.071+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:49.517+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 24 19:53:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 24 19:53:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:49 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:49 compute-0 ceph-mon[75677]: 7.c scrub starts
Nov 24 19:53:49 compute-0 ceph-mon[75677]: 7.c scrub ok
Nov 24 19:53:49 compute-0 ceph-mon[75677]: pgmap v304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 19:53:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:50.024+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 19:53:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:50.537+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 10.4 scrub starts
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 10.4 scrub ok
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 2.1d scrub starts
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 2.1d scrub ok
Nov 24 19:53:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:50 compute-0 ceph-mon[75677]: pgmap v305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 19:53:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:51.054+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Nov 24 19:53:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:51.517+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Nov 24 19:53:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:51 compute-0 ceph-mon[75677]: 10.16 deep-scrub starts
Nov 24 19:53:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:51 compute-0 ceph-mon[75677]: 10.16 deep-scrub ok
Nov 24 19:53:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:52.052+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:52.554+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:52 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:52 compute-0 ceph-mon[75677]: pgmap v306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 24 19:53:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:53.027+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 24 19:53:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:53.506+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:54.024+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:54 compute-0 ceph-mon[75677]: 2.17 scrub starts
Nov 24 19:53:54 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:54 compute-0 ceph-mon[75677]: 2.17 scrub ok
Nov 24 19:53:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:54 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Nov 24 19:53:54 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:53:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:54.493+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:55.032+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:55 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:55 compute-0 ceph-mon[75677]: 11.3 deep-scrub starts
Nov 24 19:53:55 compute-0 ceph-mon[75677]: 11.3 deep-scrub ok
Nov 24 19:53:55 compute-0 ceph-mon[75677]: pgmap v307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:55 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 24 19:53:55 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 24 19:53:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:55.511+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:56.054+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:56 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:56 compute-0 ceph-mon[75677]: 11.8 deep-scrub starts
Nov 24 19:53:56 compute-0 ceph-mon[75677]: 11.8 deep-scrub ok
Nov 24 19:53:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 151 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:53:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:56.512+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:57.067+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:57 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:57 compute-0 ceph-mon[75677]: pgmap v308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:57 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 151 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:53:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:57.553+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:58.090+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:58.535+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 24 19:53:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:53:59.054+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:53:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 24 19:53:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:59 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:53:59 compute-0 ceph-mon[75677]: pgmap v309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:53:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:53:59.547+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:53:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:53:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 24 19:53:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 24 19:54:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:00.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:00 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Nov 24 19:54:00 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Nov 24 19:54:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:00 compute-0 ceph-mon[75677]: 5.11 scrub starts
Nov 24 19:54:00 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:00 compute-0 ceph-mon[75677]: 5.11 scrub ok
Nov 24 19:54:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:00.534+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 24 19:54:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 24 19:54:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:01.119+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 161 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:01.519+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:01 compute-0 ceph-mon[75677]: 10.15 scrub starts
Nov 24 19:54:01 compute-0 ceph-mon[75677]: 10.15 scrub ok
Nov 24 19:54:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:01 compute-0 ceph-mon[75677]: 3.18 deep-scrub starts
Nov 24 19:54:01 compute-0 ceph-mon[75677]: 3.18 deep-scrub ok
Nov 24 19:54:01 compute-0 ceph-mon[75677]: pgmap v310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:01 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 161 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 24 19:54:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:02.152+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 24 19:54:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:02.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:02 compute-0 ceph-mon[75677]: 10.17 scrub starts
Nov 24 19:54:02 compute-0 ceph-mon[75677]: 10.17 scrub ok
Nov 24 19:54:02 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Nov 24 19:54:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:03.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Nov 24 19:54:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:03.487+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 10.11 scrub starts
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 10.11 scrub ok
Nov 24 19:54:03 compute-0 ceph-mon[75677]: pgmap v311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 5.13 deep-scrub starts
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 5.13 deep-scrub ok
Nov 24 19:54:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:04.123+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:04 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 19:54:04 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 19:54:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:04.473+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:04 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:04 compute-0 ceph-mon[75677]: 3.5 scrub starts
Nov 24 19:54:04 compute-0 ceph-mon[75677]: 3.5 scrub ok
Nov 24 19:54:04 compute-0 ceph-mon[75677]: pgmap v312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:05.138+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:05.509+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:05 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:06.114+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:06.530+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 166 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:06 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:06 compute-0 ceph-mon[75677]: pgmap v313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:07.153+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 24 19:54:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 24 19:54:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:07.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:07 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 166 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:07 compute-0 ceph-mon[75677]: 5.16 scrub starts
Nov 24 19:54:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:07 compute-0 ceph-mon[75677]: 5.16 scrub ok
Nov 24 19:54:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:08.176+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:08.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:08 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:08 compute-0 ceph-mon[75677]: pgmap v314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:09.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:09.511+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 24 19:54:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 24 19:54:09 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:09 compute-0 ceph-mon[75677]: 2.1f scrub starts
Nov 24 19:54:09 compute-0 ceph-mon[75677]: 2.1f scrub ok
Nov 24 19:54:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:10.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:10.504+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:11 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:11 compute-0 ceph-mon[75677]: pgmap v315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 19:54:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:11.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 19:54:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:11.478+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:12 compute-0 ceph-mon[75677]: 5.9 scrub starts
Nov 24 19:54:12 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:12 compute-0 ceph-mon[75677]: 5.9 scrub ok
Nov 24 19:54:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:12.197+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:12.522+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 24 19:54:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 24 19:54:13 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:13 compute-0 ceph-mon[75677]: pgmap v316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:13 compute-0 ceph-mon[75677]: 2.1c scrub starts
Nov 24 19:54:13 compute-0 ceph-mon[75677]: 2.1c scrub ok
Nov 24 19:54:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:13.221+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 24 19:54:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:13.528+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 24 19:54:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:14.245+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:14 compute-0 ceph-mon[75677]: 10.8 scrub starts
Nov 24 19:54:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:14.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:15.213+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:15 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.2 deep-scrub starts
Nov 24 19:54:15 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.2 deep-scrub ok
Nov 24 19:54:15 compute-0 ceph-mon[75677]: 10.8 scrub ok
Nov 24 19:54:15 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:15 compute-0 ceph-mon[75677]: pgmap v317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:15.545+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 24 19:54:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 24 19:54:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:16.171+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:16 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:16 compute-0 ceph-mon[75677]: 8.2 deep-scrub starts
Nov 24 19:54:16 compute-0 ceph-mon[75677]: 8.2 deep-scrub ok
Nov 24 19:54:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 171 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:16.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 24 19:54:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 24 19:54:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:17.125+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:17 compute-0 ceph-mon[75677]: 8.14 scrub starts
Nov 24 19:54:17 compute-0 ceph-mon[75677]: 8.14 scrub ok
Nov 24 19:54:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:17 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 171 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:17 compute-0 ceph-mon[75677]: pgmap v318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 24 19:54:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:17.568+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 24 19:54:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:18.137+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:18 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 24 19:54:18 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 24 19:54:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:18 compute-0 ceph-mon[75677]: 11.17 scrub starts
Nov 24 19:54:18 compute-0 ceph-mon[75677]: 11.17 scrub ok
Nov 24 19:54:18 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:18.587+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:19.183+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:19.603+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 11.14 scrub starts
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 11.14 scrub ok
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 7.8 scrub starts
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 7.8 scrub ok
Nov 24 19:54:19 compute-0 ceph-mon[75677]: pgmap v319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:20.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:20.597+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:20 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:20 compute-0 ceph-mon[75677]: pgmap v320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:21.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 181 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:21.610+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:21 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 181 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:22.115+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:22 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 24 19:54:22 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 24 19:54:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:22.621+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:22 compute-0 ceph-mon[75677]: 7.a scrub starts
Nov 24 19:54:22 compute-0 ceph-mon[75677]: 7.a scrub ok
Nov 24 19:54:22 compute-0 ceph-mon[75677]: pgmap v321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 24 19:54:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:23.138+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 24 19:54:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:23.590+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:24.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:24 compute-0 ceph-mon[75677]: 5.12 scrub starts
Nov 24 19:54:24 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:24 compute-0 ceph-mon[75677]: 5.12 scrub ok
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:54:24
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:54:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:24.560+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 24 19:54:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 24 19:54:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:25.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:25 compute-0 ceph-mon[75677]: pgmap v322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:25.568+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 24 19:54:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:26.050+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 24 19:54:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:26 compute-0 ceph-mon[75677]: 7.1f scrub starts
Nov 24 19:54:26 compute-0 ceph-mon[75677]: 7.1f scrub ok
Nov 24 19:54:26 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 24 19:54:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:26.574+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 24 19:54:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:27.032+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 186 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:27 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 24 19:54:27 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 24 19:54:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:27.562+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:27 compute-0 ceph-mon[75677]: 2.d scrub starts
Nov 24 19:54:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:27 compute-0 ceph-mon[75677]: 2.d scrub ok
Nov 24 19:54:27 compute-0 ceph-mon[75677]: pgmap v323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 24 19:54:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:28.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 24 19:54:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:28 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 19:54:28 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 19:54:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:28.566+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 3.1f scrub starts
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 3.1f scrub ok
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:28 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 186 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 3.e scrub starts
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 3.e scrub ok
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 5.c scrub starts
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 5.c scrub ok
Nov 24 19:54:28 compute-0 ceph-mon[75677]: pgmap v324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 7.15 scrub starts
Nov 24 19:54:28 compute-0 ceph-mon[75677]: 7.15 scrub ok
Nov 24 19:54:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:29.051+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:29.569+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 24 19:54:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:30.094+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 24 19:54:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:30 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:30.569+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:31.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:31 compute-0 ceph-mon[75677]: 10.f scrub starts
Nov 24 19:54:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:31 compute-0 ceph-mon[75677]: 10.f scrub ok
Nov 24 19:54:31 compute-0 ceph-mon[75677]: pgmap v325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:31 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 24 19:54:31 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 24 19:54:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:31.586+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:32.118+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:32 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:32 compute-0 ceph-mon[75677]: 7.e scrub starts
Nov 24 19:54:32 compute-0 ceph-mon[75677]: 7.e scrub ok
Nov 24 19:54:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:32.580+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Nov 24 19:54:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Nov 24 19:54:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:33.150+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:33.601+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:33 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:33 compute-0 ceph-mon[75677]: pgmap v326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 24 19:54:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:34.137+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:54:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:34.627+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 8.10 deep-scrub starts
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 8.10 deep-scrub ok
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 10.19 deep-scrub starts
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:34 compute-0 ceph-mon[75677]: 10.19 deep-scrub ok
Nov 24 19:54:34 compute-0 ceph-mon[75677]: pgmap v327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:35.120+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Nov 24 19:54:35 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Nov 24 19:54:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:35.578+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:35 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:35 compute-0 ceph-mon[75677]: 8.1b deep-scrub starts
Nov 24 19:54:35 compute-0 ceph-mon[75677]: 8.1b deep-scrub ok
Nov 24 19:54:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:36.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:36 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 19:54:36 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 19:54:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 191 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:36.556+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:36 compute-0 sudo[110089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:36 compute-0 sudo[110089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:36 compute-0 sudo[110089]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:36 compute-0 sudo[110114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:54:36 compute-0 sudo[110114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:36 compute-0 sudo[110114]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:36 compute-0 sudo[110139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:36 compute-0 sudo[110139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:36 compute-0 sudo[110139]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:36 compute-0 sudo[110164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:54:36 compute-0 sudo[110164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:37 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:37 compute-0 ceph-mon[75677]: 8.4 scrub starts
Nov 24 19:54:37 compute-0 ceph-mon[75677]: 8.4 scrub ok
Nov 24 19:54:37 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 191 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:37 compute-0 ceph-mon[75677]: pgmap v328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:37.125+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:37 compute-0 sudo[110164]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:54:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:54:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:54:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:54:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:54:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 24 19:54:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:37.549+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 24 19:54:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:54:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d9886055-d027-4b2b-80ec-31c58190d46d does not exist
Nov 24 19:54:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c4b13fb8-3c33-431b-9959-ecb8e762843f does not exist
Nov 24 19:54:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bb4b93cd-a5c2-4704-9800-85493fcf044a does not exist
Nov 24 19:54:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:54:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:54:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:54:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:54:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:54:37 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:54:37 compute-0 sudo[110220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:37 compute-0 sudo[110220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:37 compute-0 sudo[110220]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:37 compute-0 sudo[110245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:54:37 compute-0 sudo[110245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:37 compute-0 sudo[110245]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:37 compute-0 sudo[110270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:37 compute-0 sudo[110270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:37 compute-0 sudo[110270]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:37 compute-0 sudo[110295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:54:37 compute-0 sudo[110295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 19:54:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:38.140+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 19:54:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:38 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:54:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:54:38 compute-0 ceph-mon[75677]: 11.10 scrub starts
Nov 24 19:54:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:54:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:54:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:54:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:54:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.494345246 +0000 UTC m=+0.103466668 container create 68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.432093221 +0000 UTC m=+0.041214693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:54:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:38.580+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:38 compute-0 systemd[1]: Started libpod-conmon-68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681.scope.
Nov 24 19:54:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.67722237 +0000 UTC m=+0.286343862 container init 68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.689834817 +0000 UTC m=+0.298956239 container start 68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:54:38 compute-0 unruffled_hugle[110375]: 167 167
Nov 24 19:54:38 compute-0 systemd[1]: libpod-68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681.scope: Deactivated successfully.
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.727509626 +0000 UTC m=+0.336631048 container attach 68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.728736858 +0000 UTC m=+0.337858280 container died 68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 19:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a3fab5a9055efa91c2b71a6407bacf994cd9d080ef93d8a2f6bbfcdcdc1735-merged.mount: Deactivated successfully.
Nov 24 19:54:38 compute-0 podman[110359]: 2025-11-24 19:54:38.998299 +0000 UTC m=+0.607420422 container remove 68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:54:39 compute-0 systemd[1]: libpod-conmon-68261dbce0c0cdf8dd38abd1118b346d2dc80134099281ec6eb28e0562ac7681.scope: Deactivated successfully.
Nov 24 19:54:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:39.109+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:39 compute-0 ceph-mon[75677]: 11.10 scrub ok
Nov 24 19:54:39 compute-0 ceph-mon[75677]: 5.f scrub starts
Nov 24 19:54:39 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:39 compute-0 ceph-mon[75677]: 5.f scrub ok
Nov 24 19:54:39 compute-0 ceph-mon[75677]: pgmap v329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:39 compute-0 podman[110400]: 2025-11-24 19:54:39.217855896 +0000 UTC m=+0.039727464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:54:39 compute-0 podman[110400]: 2025-11-24 19:54:39.368783464 +0000 UTC m=+0.190654962 container create 1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meitner, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:54:39 compute-0 systemd[1]: Started libpod-conmon-1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132.scope.
Nov 24 19:54:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16e27755b58c7f1f915905ff0b7765e0e591986a80e011280162b0e242d958/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16e27755b58c7f1f915905ff0b7765e0e591986a80e011280162b0e242d958/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16e27755b58c7f1f915905ff0b7765e0e591986a80e011280162b0e242d958/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16e27755b58c7f1f915905ff0b7765e0e591986a80e011280162b0e242d958/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16e27755b58c7f1f915905ff0b7765e0e591986a80e011280162b0e242d958/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 24 19:54:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:39.556+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 24 19:54:39 compute-0 podman[110400]: 2025-11-24 19:54:39.574013885 +0000 UTC m=+0.395885443 container init 1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meitner, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:54:39 compute-0 podman[110400]: 2025-11-24 19:54:39.587301681 +0000 UTC m=+0.409173179 container start 1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meitner, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 19:54:39 compute-0 podman[110400]: 2025-11-24 19:54:39.655077494 +0000 UTC m=+0.476949002 container attach 1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:54:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:40.095+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:54:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:40 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:40.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:40 compute-0 confident_meitner[110417]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:54:40 compute-0 confident_meitner[110417]: --> relative data size: 1.0
Nov 24 19:54:40 compute-0 confident_meitner[110417]: --> All data devices are unavailable
Nov 24 19:54:40 compute-0 systemd[1]: libpod-1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132.scope: Deactivated successfully.
Nov 24 19:54:40 compute-0 systemd[1]: libpod-1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132.scope: Consumed 1.173s CPU time.
Nov 24 19:54:40 compute-0 podman[110400]: 2025-11-24 19:54:40.815731149 +0000 UTC m=+1.637602647 container died 1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:54:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:41.077+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 201 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b16e27755b58c7f1f915905ff0b7765e0e591986a80e011280162b0e242d958-merged.mount: Deactivated successfully.
Nov 24 19:54:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:41.526+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:41 compute-0 ceph-mon[75677]: 3.1b scrub starts
Nov 24 19:54:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:41 compute-0 ceph-mon[75677]: 3.1b scrub ok
Nov 24 19:54:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:41 compute-0 ceph-mon[75677]: pgmap v330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:41 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 201 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:41 compute-0 podman[110400]: 2025-11-24 19:54:41.744395988 +0000 UTC m=+2.566267486 container remove 1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:54:41 compute-0 sudo[110295]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:41 compute-0 systemd[1]: libpod-conmon-1715bb6f67695fe9a2189788a217c4e5443eae338740303749d920b74651f132.scope: Deactivated successfully.
Nov 24 19:54:41 compute-0 sudo[110459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:41 compute-0 sudo[110459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:41 compute-0 sudo[110459]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:41 compute-0 sudo[110486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:54:41 compute-0 sudo[110486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:41 compute-0 sudo[110486]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:42 compute-0 sudo[110511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:42 compute-0 sudo[110511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:42 compute-0 sudo[110511]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:42.077+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:42 compute-0 sudo[110536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:54:42 compute-0 sudo[110536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 24 19:54:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:42.479+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 24 19:54:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:42 compute-0 podman[110601]: 2025-11-24 19:54:42.658774143 +0000 UTC m=+0.122998612 container create 2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 19:54:42 compute-0 podman[110601]: 2025-11-24 19:54:42.57795143 +0000 UTC m=+0.042175929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:54:42 compute-0 systemd[1]: Started libpod-conmon-2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228.scope.
Nov 24 19:54:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:54:42 compute-0 podman[110601]: 2025-11-24 19:54:42.889443124 +0000 UTC m=+0.353667653 container init 2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 19:54:42 compute-0 podman[110601]: 2025-11-24 19:54:42.901368434 +0000 UTC m=+0.365592903 container start 2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:54:42 compute-0 relaxed_blackwell[110617]: 167 167
Nov 24 19:54:42 compute-0 systemd[1]: libpod-2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228.scope: Deactivated successfully.
Nov 24 19:54:42 compute-0 podman[110601]: 2025-11-24 19:54:42.915244606 +0000 UTC m=+0.379469075 container attach 2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:54:42 compute-0 podman[110601]: 2025-11-24 19:54:42.917252989 +0000 UTC m=+0.381477478 container died 2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-42fa55169920de19d4f52386c027c63133d33e25be5e381b2357fa5e461a22f0-merged.mount: Deactivated successfully.
Nov 24 19:54:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:43.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 19:54:43 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 19:54:43 compute-0 podman[110601]: 2025-11-24 19:54:43.452928682 +0000 UTC m=+0.917153161 container remove 2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:54:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 24 19:54:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:43.460+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:43 compute-0 systemd[1]: libpod-conmon-2db9d7dcdfde0aa046cf905ef635a7255ac315f93fffa1c2afad689d1e4c5228.scope: Deactivated successfully.
Nov 24 19:54:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 24 19:54:43 compute-0 sshd-session[110464]: Invalid user admin from 27.79.44.141 port 58008
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:43 compute-0 ceph-mon[75677]: pgmap v331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 7.3 scrub starts
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 7.3 scrub ok
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 11.1a scrub starts
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 11.1a scrub ok
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 8.c scrub starts
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:43 compute-0 ceph-mon[75677]: 8.c scrub ok
Nov 24 19:54:43 compute-0 podman[110644]: 2025-11-24 19:54:43.692974505 +0000 UTC m=+0.046585587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:54:43 compute-0 podman[110644]: 2025-11-24 19:54:43.786419815 +0000 UTC m=+0.140030857 container create bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_haibt, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:54:43 compute-0 systemd[1]: Started libpod-conmon-bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028.scope.
Nov 24 19:54:43 compute-0 sshd-session[110464]: Connection closed by invalid user admin 27.79.44.141 port 58008 [preauth]
Nov 24 19:54:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc1508b53f11b88f6f24d6dde8be510281b5c2fb06f60d961ef4b93d92e9c10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc1508b53f11b88f6f24d6dde8be510281b5c2fb06f60d961ef4b93d92e9c10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc1508b53f11b88f6f24d6dde8be510281b5c2fb06f60d961ef4b93d92e9c10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc1508b53f11b88f6f24d6dde8be510281b5c2fb06f60d961ef4b93d92e9c10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:44.049+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:44 compute-0 podman[110644]: 2025-11-24 19:54:44.098271699 +0000 UTC m=+0.451882731 container init bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:54:44 compute-0 podman[110644]: 2025-11-24 19:54:44.110344132 +0000 UTC m=+0.463955164 container start bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:54:44 compute-0 podman[110644]: 2025-11-24 19:54:44.224854546 +0000 UTC m=+0.578465578 container attach bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 19:54:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 24 19:54:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:44.506+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 24 19:54:44 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:44 compute-0 ceph-mon[75677]: pgmap v332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:44 compute-0 ceph-mon[75677]: 11.f scrub starts
Nov 24 19:54:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:44 compute-0 ceph-mon[75677]: 11.f scrub ok
Nov 24 19:54:44 compute-0 jolly_haibt[110661]: {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:     "0": [
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:         {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "devices": [
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "/dev/loop3"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             ],
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_name": "ceph_lv0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_size": "21470642176",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "name": "ceph_lv0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "tags": {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cluster_name": "ceph",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.crush_device_class": "",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.encrypted": "0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osd_id": "0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.type": "block",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.vdo": "0"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             },
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "type": "block",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "vg_name": "ceph_vg0"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:         }
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:     ],
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:     "1": [
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:         {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "devices": [
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "/dev/loop4"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             ],
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_name": "ceph_lv1",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_size": "21470642176",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "name": "ceph_lv1",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "tags": {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cluster_name": "ceph",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.crush_device_class": "",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.encrypted": "0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osd_id": "1",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.type": "block",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.vdo": "0"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             },
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "type": "block",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "vg_name": "ceph_vg1"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:         }
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:     ],
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:     "2": [
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:         {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "devices": [
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "/dev/loop5"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             ],
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_name": "ceph_lv2",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_size": "21470642176",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "name": "ceph_lv2",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "tags": {
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.cluster_name": "ceph",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.crush_device_class": "",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.encrypted": "0",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osd_id": "2",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.type": "block",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:                 "ceph.vdo": "0"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             },
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "type": "block",
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:             "vg_name": "ceph_vg2"
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:         }
Nov 24 19:54:44 compute-0 jolly_haibt[110661]:     ]
Nov 24 19:54:44 compute-0 jolly_haibt[110661]: }
Nov 24 19:54:44 compute-0 systemd[1]: libpod-bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028.scope: Deactivated successfully.
Nov 24 19:54:44 compute-0 podman[110644]: 2025-11-24 19:54:44.91418498 +0000 UTC m=+1.267796022 container died bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_haibt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 19:54:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:45.022+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fc1508b53f11b88f6f24d6dde8be510281b5c2fb06f60d961ef4b93d92e9c10-merged.mount: Deactivated successfully.
Nov 24 19:54:45 compute-0 podman[110644]: 2025-11-24 19:54:45.275427226 +0000 UTC m=+1.629038268 container remove bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:54:45 compute-0 systemd[1]: libpod-conmon-bbbacb836fe373901f303369898961543683ae5784e0bfcad54982734d711028.scope: Deactivated successfully.
Nov 24 19:54:45 compute-0 sudo[110536]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 24 19:54:45 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 24 19:54:45 compute-0 sudo[110684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:45 compute-0 sudo[110684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:45 compute-0 sudo[110684]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 24 19:54:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:45.479+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 24 19:54:45 compute-0 sudo[110709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:54:45 compute-0 sudo[110709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:45 compute-0 sudo[110709]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:45 compute-0 sudo[110734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:45 compute-0 sudo[110734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:45 compute-0 sudo[110734]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:45 compute-0 sudo[110759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:54:45 compute-0 sudo[110759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:45 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:45 compute-0 ceph-mon[75677]: 3.11 scrub starts
Nov 24 19:54:45 compute-0 ceph-mon[75677]: 3.11 scrub ok
Nov 24 19:54:45 compute-0 ceph-mon[75677]: 11.e scrub starts
Nov 24 19:54:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:45 compute-0 ceph-mon[75677]: 11.e scrub ok
Nov 24 19:54:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 24 19:54:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:45.984+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.7 scrub ok
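
[annotation] osd.0 and osd.1 re-report the same stuck osd_op roughly once per second, so the raw line count overstates the problem. A rough deduplication keyed on the oldest op, with the regex fitted to the exact shape of the lines above (illustrative only):

    import re

    SLOW_RE = re.compile(
        r"osd\.(?P<osd>\d+) \d+ get_health_metrics reporting "
        r"(?P<count>\d+) slow ops, oldest is (?P<op>osd_op\(\S+)"
    )

    def slow_ops(lines):
        seen = {}
        for line in lines:
            m = SLOW_RE.search(line)
            if m:
                # key on (osd, oldest op) so one stuck request is counted once
                seen[(m.group("osd"), m.group("op"))] = int(m.group("count"))
        return seen

With this log slice, the hundreds of repeats collapse to two entries: osd.0 with 1 slow op (pool 'vms') and osd.1 with 18 (pool 'default.rgw.log').
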
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.220926825 +0000 UTC m=+0.105863144 container create d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.157895158 +0000 UTC m=+0.042831527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:54:46 compute-0 systemd[1]: Started libpod-conmon-d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99.scope.
Nov 24 19:54:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.393283326 +0000 UTC m=+0.278219625 container init d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.405217515 +0000 UTC m=+0.290153804 container start d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:54:46 compute-0 mystifying_mendel[110841]: 167 167
Nov 24 19:54:46 compute-0 systemd[1]: libpod-d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99.scope: Deactivated successfully.
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.427638975 +0000 UTC m=+0.312575274 container attach d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.428126119 +0000 UTC m=+0.313062438 container died d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:54:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-303fc39eebb876eccff3818aec2cdaa4d1c423386c542d9ba152bb2109d4e31d-merged.mount: Deactivated successfully.
Nov 24 19:54:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 24 19:54:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:46.529+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 24 19:54:46 compute-0 podman[110825]: 2025-11-24 19:54:46.691083204 +0000 UTC m=+0.576019513 container remove d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 19:54:46 compute-0 systemd[1]: libpod-conmon-d59a2d728fbec03a8cef648c58ca26e30ef6b7cebe2633dbf1caad4003701a99.scope: Deactivated successfully.
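
[annotation] The podman lines above trace one short-lived container end to end: image pull, create, init, start, attach, died, remove, bracketed by systemd libpod/conmon scopes. A sketch that reduces such bursts to an ordered lifecycle per container ID (parsing conventions inferred from these lines, nothing more):

    import re

    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \+0000 UTC m=\S+ "
        r"(?P<action>image pull|container \w+) (?P<obj>\S+)"
    )

    def lifecycle(lines):
        events = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if m and m.group("action").startswith("container"):
                cid = m.group("obj")[:12]  # short container id
                events.setdefault(cid, []).append(
                    (m.group("ts"), m.group("action").split()[1])
                )
        return events
    # e.g. {'d59a2d728fbe': [(..., 'create'), (..., 'init'), (..., 'start'),
    #                       (..., 'attach'), (..., 'died'), (..., 'remove')]}
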
Nov 24 19:54:46 compute-0 sshd-session[110082]: Invalid user userm from 14.63.196.175 port 51540
Nov 24 19:54:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 206 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:46 compute-0 ceph-mon[75677]: 2.7 scrub starts
Nov 24 19:54:46 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:46 compute-0 ceph-mon[75677]: 2.7 scrub ok
Nov 24 19:54:46 compute-0 ceph-mon[75677]: pgmap v333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:46 compute-0 ceph-mon[75677]: 7.1b scrub starts
Nov 24 19:54:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:46 compute-0 ceph-mon[75677]: 7.1b scrub ok
Nov 24 19:54:46 compute-0 sudo[108784]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:46 compute-0 podman[110867]: 2025-11-24 19:54:46.968094846 +0000 UTC m=+0.096988936 container create ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 19:54:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:46.970+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:47 compute-0 podman[110867]: 2025-11-24 19:54:46.909696033 +0000 UTC m=+0.038590183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:54:47 compute-0 systemd[1]: Started libpod-conmon-ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb.scope.
Nov 24 19:54:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94eb7b0afe77f501217dcf9498eb23a42b1adbca3c2f193f1420d622a913fa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94eb7b0afe77f501217dcf9498eb23a42b1adbca3c2f193f1420d622a913fa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94eb7b0afe77f501217dcf9498eb23a42b1adbca3c2f193f1420d622a913fa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:54:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94eb7b0afe77f501217dcf9498eb23a42b1adbca3c2f193f1420d622a913fa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
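
[annotation] The kernel's "supports timestamps until 2038 (0x7fffffff)" notices refer to the 32-bit signed time_t limit on these xfs overlay mounts; 0x7fffffff is the classic Y2038 cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, i.e. the "year 2038" limit
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
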
Nov 24 19:54:47 compute-0 podman[110867]: 2025-11-24 19:54:47.125973301 +0000 UTC m=+0.254867461 container init ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:54:47 compute-0 podman[110867]: 2025-11-24 19:54:47.139393839 +0000 UTC m=+0.268287939 container start ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 19:54:47 compute-0 podman[110867]: 2025-11-24 19:54:47.171038006 +0000 UTC m=+0.299932146 container attach ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:54:47 compute-0 sshd-session[110082]: Received disconnect from 14.63.196.175 port 51540:11: Bye Bye [preauth]
Nov 24 19:54:47 compute-0 sshd-session[110082]: Disconnected from invalid user userm 14.63.196.175 port 51540 [preauth]
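
[annotation] The sshd preauth failures ("Invalid user userm from 14.63.196.175", and later "user" from 27.79.44.141) are routine brute-force probes. A minimal counter of attempts per (source IP, username), fitted to these lines and illustrative only:

    import re
    from collections import Counter

    INVALID_RE = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+")

    def failed_probes(lines):
        hits = Counter()
        for line in lines:
            m = INVALID_RE.search(line)
            if m:
                hits[(m.group("ip"), m.group("user"))] += 1
        return hits
    # e.g. Counter({('14.63.196.175', 'userm'): 1, ('27.79.44.141', 'user'): 1})
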
Nov 24 19:54:47 compute-0 sudo[111037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvogprncvpozmlztxokzqvcrlkzojwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014087.1180563-137-122691052109684/AnsiballZ_command.py'
Nov 24 19:54:47 compute-0 sudo[111037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 24 19:54:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:47.506+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 24 19:54:47 compute-0 python3.9[111039]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:54:47 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 206 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:47 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:47 compute-0 ceph-mon[75677]: 8.e scrub starts
Nov 24 19:54:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:47 compute-0 ceph-mon[75677]: 8.e scrub ok
Nov 24 19:54:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:47.994+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]: {
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "osd_id": 2,
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "type": "bluestore"
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:     },
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "osd_id": 1,
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "type": "bluestore"
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:     },
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "osd_id": 0,
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:         "type": "bluestore"
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]:     }
Nov 24 19:54:48 compute-0 sleepy_williamson[110910]: }
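
[annotation] The sleepy_williamson container is the `ceph-volume ... raw list --format json` run requested via sudo at 19:54:45; its stdout, once the journald prefixes are stripped, is plain JSON mapping OSD UUIDs to bluestore devices. A sketch of turning it into an osd_id-to-device table (the `payload` below is just one entry re-assembled from the lines above, trimmed from three OSDs to one):

    import json

    payload = """
    {
        "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
            "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
            "type": "bluestore"
        }
    }
    """

    devices = {
        entry["osd_id"]: entry["device"]
        for entry in json.loads(payload).values()
    }
    print(devices)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2'}

With all three entries the full table is {0: ceph_vg0-ceph_lv0, 1: ceph_vg1-ceph_lv1, 2: ceph_vg2-ceph_lv2}.
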
Nov 24 19:54:48 compute-0 systemd[1]: libpod-ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb.scope: Deactivated successfully.
Nov 24 19:54:48 compute-0 systemd[1]: libpod-ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb.scope: Consumed 1.101s CPU time.
Nov 24 19:54:48 compute-0 podman[110867]: 2025-11-24 19:54:48.233249268 +0000 UTC m=+1.362143328 container died ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:54:48 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 24 19:54:48 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 24 19:54:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 24 19:54:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:48.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 24 19:54:48 compute-0 sudo[111037]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c94eb7b0afe77f501217dcf9498eb23a42b1adbca3c2f193f1420d622a913fa5-merged.mount: Deactivated successfully.
Nov 24 19:54:48 compute-0 podman[110867]: 2025-11-24 19:54:48.662530714 +0000 UTC m=+1.791424804 container remove ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williamson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:54:48 compute-0 systemd[1]: libpod-conmon-ec0d606ded880500decdeb3761277f087b53f9eeb4f998e6efc53e17d0b013eb.scope: Deactivated successfully.
Nov 24 19:54:48 compute-0 sudo[110759]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:54:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:54:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:54:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:54:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3aa907d8-5d97-428a-8186-f5fc5b9b89d5 does not exist
Nov 24 19:54:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c9642fa8-edd3-40dd-8257-dfe6e0b5c9df does not exist
Nov 24 19:54:48 compute-0 sudo[111285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:54:48 compute-0 sudo[111285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:48 compute-0 sudo[111285]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:48 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:48 compute-0 ceph-mon[75677]: 11.1b scrub starts
Nov 24 19:54:48 compute-0 ceph-mon[75677]: 11.1b scrub ok
Nov 24 19:54:48 compute-0 ceph-mon[75677]: pgmap v334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:48 compute-0 ceph-mon[75677]: 3.a scrub starts
Nov 24 19:54:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:48 compute-0 ceph-mon[75677]: 3.a scrub ok
Nov 24 19:54:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:54:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:54:48 compute-0 sudo[111316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:54:48 compute-0 sudo[111316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:54:48 compute-0 sudo[111316]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:48.985+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:49 compute-0 sudo[111414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntqhdxcmqykoboqveyzeqqqiexdbpbtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014088.7577424-145-85825169824606/AnsiballZ_selinux.py'
Nov 24 19:54:49 compute-0 sudo[111414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:49.499+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:49 compute-0 python3.9[111416]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 19:54:49 compute-0 sudo[111414]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:50.008+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
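
[annotation] Every pgmap line repeats the same summary shape: version, total PGs, a state breakdown, then usage. A throwaway parser for the state breakdown, with the regex fitted to these lines only:

    import re

    PGMAP_RE = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    def pg_states(line):
        m = PGMAP_RE.search(line)
        if not m:
            return None
        states = {}
        for part in m.group("states").split(", "):
            count, state = part.split(" ", 1)
            states[state] = int(count)
        return int(m.group("ver")), int(m.group("total")), states

    line = ("pgmap v335: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
    print(pg_states(line))
    # (335, 305, {'active+clean+laggy': 2, 'active+clean': 303})
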
Nov 24 19:54:50 compute-0 sudo[111566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllihatzuwwzxbbmniztcjlzmeyfgadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014090.127566-156-41953625507609/AnsiballZ_command.py'
Nov 24 19:54:50 compute-0 sudo[111566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:50.540+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:50 compute-0 python3.9[111568]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 19:54:50 compute-0 sudo[111566]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:51.046+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:51 compute-0 ceph-mon[75677]: pgmap v335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:51 compute-0 sudo[111718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seplhtprwkbnhrbykrefkrclqlaohjug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014090.9362352-164-90088942443358/AnsiballZ_file.py'
Nov 24 19:54:51 compute-0 sudo[111718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 24 19:54:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:51.539+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 24 19:54:51 compute-0 python3.9[111720]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:54:51 compute-0 sudo[111718]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 19:54:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:52.084+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 19:54:52 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:52 compute-0 ceph-mon[75677]: 7.f scrub starts
Nov 24 19:54:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:52 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 24 19:54:52 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 24 19:54:52 compute-0 sudo[111870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnkafyqrrxtdxrhoabvlxvcbygczhnza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014091.8207662-172-208761088253271/AnsiballZ_mount.py'
Nov 24 19:54:52 compute-0 sudo[111870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 24 19:54:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:52.542+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 24 19:54:52 compute-0 python3.9[111872]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 19:54:52 compute-0 sudo[111870]: pam_unix(sudo:session): session closed for user root
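
[annotation] The three Zuul/Ansible tasks just above set up a swap file: dd creates a 1 GiB /swap, ansible.builtin.file sets mode 0600, and ansible.posix.mount records "/swap none swap sw 0 0" in fstab. A rough Python equivalent of those steps, illustrative only (note the log slice shows no mkswap/swapon, so those would still be needed before the swap is usable):

    import os
    import subprocess

    subprocess.run(
        ["dd", "if=/dev/zero", "of=/swap", "count=1024", "bs=1M"], check=True
    )                                   # ansible.legacy.command: create 1 GiB /swap
    os.chmod("/swap", 0o600)            # ansible.builtin.file: mode=0600, root:root
    with open("/etc/fstab", "a") as f:  # ansible.posix.mount: state=present
        f.write("/swap none swap sw 0 0\n")
    # NOTE (assumption): mkswap /swap and swapon /swap do not appear in this
    # log slice but would be required to activate the swap space.
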
Nov 24 19:54:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:53.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 7.f scrub ok
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 10.6 scrub starts
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 10.6 scrub ok
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 7.11 scrub starts
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 7.11 scrub ok
Nov 24 19:54:53 compute-0 ceph-mon[75677]: pgmap v336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:53 compute-0 ceph-mon[75677]: 8.f scrub starts
Nov 24 19:54:53 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 24 19:54:53 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 24 19:54:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:53.549+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:53 compute-0 sudo[112024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-winwfcycyuefiaoppdebclihipdihtjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014093.5009289-200-37100751410778/AnsiballZ_file.py'
Nov 24 19:54:53 compute-0 sudo[112024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 19:54:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:54.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 19:54:54 compute-0 python3.9[112026]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:54:54 compute-0 sudo[112024]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:54 compute-0 sshd-session[111897]: Invalid user user from 27.79.44.141 port 60452
Nov 24 19:54:54 compute-0 ceph-mon[75677]: 8.f scrub ok
Nov 24 19:54:54 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:54 compute-0 ceph-mon[75677]: 11.18 scrub starts
Nov 24 19:54:54 compute-0 ceph-mon[75677]: 11.18 scrub ok
Nov 24 19:54:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:54:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:54.553+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:54 compute-0 sshd-session[111897]: Connection closed by invalid user user 27.79.44.141 port 60452 [preauth]
Nov 24 19:54:54 compute-0 sudo[112176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whuutnevyqjajmuunxgetipiezfnkczp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014094.3776228-208-183739893892156/AnsiballZ_stat.py'
Nov 24 19:54:54 compute-0 sudo[112176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:55 compute-0 python3.9[112178]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:54:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:55.075+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:55 compute-0 sudo[112176]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:55 compute-0 ceph-mon[75677]: 10.b scrub starts
Nov 24 19:54:55 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:55 compute-0 ceph-mon[75677]: 10.b scrub ok
Nov 24 19:54:55 compute-0 ceph-mon[75677]: pgmap v337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:55 compute-0 sudo[112254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qalflzjqefobkjstqxhodpzgfgyebqma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014094.3776228-208-183739893892156/AnsiballZ_file.py'
Nov 24 19:54:55 compute-0 sudo[112254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:55 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 24 19:54:55 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 24 19:54:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:55.562+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:55 compute-0 python3.9[112256]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:54:55 compute-0 sudo[112254]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:56.125+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 211 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:54:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:54:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:56.566+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:56 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:56 compute-0 ceph-mon[75677]: 3.16 scrub starts
Nov 24 19:54:56 compute-0 ceph-mon[75677]: 3.16 scrub ok
Nov 24 19:54:56 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 211 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
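
[annotation] The monitor's SLOW_OPS health updates encode the total slow ops, how long the oldest has been blocked, and the affected daemons; between 19:54:46 and 19:54:56 the blocked time climbs from 206 to 211 sec while the count holds at 19. A sketch of extracting those fields (regex fitted to these lines only):

    import re

    HEALTH_RE = re.compile(
        r"(?P<n>\d+) slow ops, oldest one blocked for (?P<sec>\d+) sec, "
        r"daemons \[(?P<daemons>[^\]]+)\] have slow ops\. \(SLOW_OPS\)"
    )

    line = ("Health check update: 19 slow ops, oldest one blocked for 211 sec, "
            "daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)")
    m = HEALTH_RE.search(line)
    print(int(m.group("n")), int(m.group("sec")), m.group("daemons").split(","))
    # 19 211 ['osd.0', 'osd.1']
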
Nov 24 19:54:56 compute-0 sudo[112406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyicxigwiekisqfxqeyvobwvjrqseuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014096.4647985-229-211530714017073/AnsiballZ_stat.py'
Nov 24 19:54:56 compute-0 sudo[112406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:57 compute-0 python3.9[112408]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:54:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 19:54:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:57.081+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 19:54:57 compute-0 sudo[112406]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:57 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 19:54:57 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 19:54:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:57.518+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 24 19:54:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:57 compute-0 ceph-mon[75677]: pgmap v338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 10.1a scrub starts
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 10.1a scrub ok
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 11.1e scrub starts
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 11.1e scrub ok
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 7.4 scrub starts
Nov 24 19:54:57 compute-0 ceph-mon[75677]: 7.4 scrub ok
Nov 24 19:54:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 24 19:54:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:58.079+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 24 19:54:58 compute-0 sudo[112560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytukfrrqzbwxvnhcraupnclfzarchepj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014097.589989-242-219223367278243/AnsiballZ_getent.py'
Nov 24 19:54:58 compute-0 sudo[112560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:58 compute-0 python3.9[112562]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 19:54:58 compute-0 sudo[112560]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:58 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Nov 24 19:54:58 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Nov 24 19:54:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 24 19:54:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:58.557+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:54:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 24 19:54:58 compute-0 ceph-mon[75677]: 2.15 deep-scrub starts
Nov 24 19:54:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:58 compute-0 ceph-mon[75677]: 2.15 deep-scrub ok
Nov 24 19:54:58 compute-0 ceph-mon[75677]: pgmap v339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:54:58 compute-0 ceph-mon[75677]: 11.1c deep-scrub starts
Nov 24 19:54:58 compute-0 ceph-mon[75677]: 11.1c deep-scrub ok
Nov 24 19:54:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:54:59.100+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:54:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 19:54:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:54:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 19:54:59 compute-0 sudo[112713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbgvjvakhlictwwxdsmuleabydouyyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014098.7059593-252-251373200026789/AnsiballZ_getent.py'
Nov 24 19:54:59 compute-0 sudo[112713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:54:59 compute-0 python3.9[112715]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 19:54:59 compute-0 sudo[112713]: pam_unix(sudo:session): session closed for user root
Nov 24 19:54:59 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 24 19:54:59 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 24 19:54:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:54:59.579+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:54:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 8.9 scrub starts
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 8.9 scrub ok
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 2.6 scrub starts
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 2.6 scrub ok
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 11.1f scrub starts
Nov 24 19:55:00 compute-0 ceph-mon[75677]: 11.1f scrub ok
Nov 24 19:55:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 24 19:55:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:00.101+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 24 19:55:00 compute-0 sudo[112866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgynpfwanxknvrqgdenurnyojmfzrizn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014099.7086828-260-215611474348984/AnsiballZ_group.py'
Nov 24 19:55:00 compute-0 sudo[112866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:00 compute-0 python3.9[112868]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 19:55:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:00 compute-0 sudo[112866]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:00.548+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:01 compute-0 sudo[113018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byigzxdiwiizagfuujrzqtjypzzlsysl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014100.7269354-269-22907249030499/AnsiballZ_file.py'
Nov 24 19:55:01 compute-0 sudo[113018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:01 compute-0 ceph-mon[75677]: 2.9 scrub starts
Nov 24 19:55:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:01 compute-0 ceph-mon[75677]: 2.9 scrub ok
Nov 24 19:55:01 compute-0 ceph-mon[75677]: pgmap v340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 19:55:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:01.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 19:55:01 compute-0 python3.9[113020]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 19:55:01 compute-0 sudo[113018]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 221 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:01.536+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 24 19:55:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 24 19:55:01 compute-0 sudo[113170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tngzmljgyiuzwklvnikixcbqnfseuevf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014101.6248899-280-270828663985534/AnsiballZ_dnf.py'
Nov 24 19:55:01 compute-0 sudo[113170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 24 19:55:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:02.157+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 24 19:55:02 compute-0 python3.9[113172]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:55:02 compute-0 ceph-mon[75677]: 2.5 scrub starts
Nov 24 19:55:02 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:02 compute-0 ceph-mon[75677]: 2.5 scrub ok
Nov 24 19:55:02 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 221 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:02 compute-0 ceph-mon[75677]: 11.1 scrub starts
Nov 24 19:55:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:02 compute-0 ceph-mon[75677]: 11.1 scrub ok
Nov 24 19:55:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:02.513+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:03.193+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:03 compute-0 ceph-mon[75677]: 5.1 scrub starts
Nov 24 19:55:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:03 compute-0 ceph-mon[75677]: 5.1 scrub ok
Nov 24 19:55:03 compute-0 ceph-mon[75677]: pgmap v341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:03.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:03 compute-0 sudo[113170]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:03 compute-0 sudo[113323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzesnejnonbdeyjmlhbjazbbkpofwks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014103.6613398-288-199064681917392/AnsiballZ_file.py'
Nov 24 19:55:03 compute-0 sudo[113323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:04.159+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:04 compute-0 python3.9[113325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:55:04 compute-0 sudo[113323]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:04 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:04.496+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:04 compute-0 sudo[113475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevuktdjplfszxqrnritdxdundvybjzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014104.4440937-296-83305409872001/AnsiballZ_stat.py'
Nov 24 19:55:04 compute-0 sudo[113475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:04 compute-0 python3.9[113477]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:55:04 compute-0 sudo[113475]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 24 19:55:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:05.161+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 24 19:55:05 compute-0 sudo[113553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzbmrifutathxbazqksqngsriqjrdmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014104.4440937-296-83305409872001/AnsiballZ_file.py'
Nov 24 19:55:05 compute-0 sudo[113553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:05 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:05 compute-0 ceph-mon[75677]: pgmap v342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:05 compute-0 python3.9[113555]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:55:05 compute-0 sudo[113553]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:05.468+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:06 compute-0 sudo[113705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlodqzkcuotwkrbypdjevudnycdpawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014105.6802626-309-100124609424728/AnsiballZ_stat.py'
Nov 24 19:55:06 compute-0 sudo[113705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:06.193+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:06 compute-0 python3.9[113707]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:55:06 compute-0 sudo[113705]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:06.459+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 226 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:06 compute-0 ceph-mon[75677]: 2.4 scrub starts
Nov 24 19:55:06 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:06 compute-0 ceph-mon[75677]: 2.4 scrub ok
Nov 24 19:55:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:06 compute-0 sudo[113783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmfpnnribilfwhxzbiyeixiyuuiqrsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014105.6802626-309-100124609424728/AnsiballZ_file.py'
Nov 24 19:55:06 compute-0 sudo[113783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:06 compute-0 python3.9[113785]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:55:06 compute-0 sudo[113783]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:07.234+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 24 19:55:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:07.450+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 24 19:55:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:07 compute-0 ceph-mon[75677]: pgmap v343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:07 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 226 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:07 compute-0 sudo[113935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxyuxroxsfjlsougwczwdtbkmzpjhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014107.293279-324-201558957840400/AnsiballZ_dnf.py'
Nov 24 19:55:07 compute-0 sudo[113935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:07 compute-0 python3.9[113937]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:55:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:08.271+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:08.439+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 24 19:55:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 24 19:55:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 7.6 scrub starts
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 7.6 scrub ok
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 8.b scrub starts
Nov 24 19:55:08 compute-0 ceph-mon[75677]: 8.b scrub ok
Nov 24 19:55:08 compute-0 ceph-mon[75677]: pgmap v344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:09.229+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:09 compute-0 sudo[113935]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:09.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:09 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:10.268+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:10 compute-0 python3.9[114088]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:55:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:10.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:10 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:10 compute-0 ceph-mon[75677]: pgmap v345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:11.266+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:11.480+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:11 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 24 19:55:11 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 24 19:55:11 compute-0 python3.9[114240]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 19:55:11 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:11 compute-0 ceph-mon[75677]: 7.2 scrub starts
Nov 24 19:55:11 compute-0 ceph-mon[75677]: 7.2 scrub ok
Nov 24 19:55:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 24 19:55:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:12.222+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 24 19:55:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:12.459+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 24 19:55:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 24 19:55:12 compute-0 sshd-session[71923]: Received disconnect from 38.102.83.75 port 47154:11: disconnected by user
Nov 24 19:55:12 compute-0 sshd-session[71923]: Disconnected from user zuul 38.102.83.75 port 47154
Nov 24 19:55:12 compute-0 sshd-session[71920]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:55:12 compute-0 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Nov 24 19:55:12 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 19:55:12 compute-0 systemd[1]: session-18.scope: Consumed 1min 41.703s CPU time.
Nov 24 19:55:12 compute-0 systemd-logind[795]: Removed session 18.
Nov 24 19:55:12 compute-0 python3.9[114390]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:55:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 24 19:55:12 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 10.13 scrub starts
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 10.13 scrub ok
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 3.c scrub starts
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:13 compute-0 ceph-mon[75677]: pgmap v346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 3.c scrub ok
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 8.1c scrub starts
Nov 24 19:55:13 compute-0 ceph-mon[75677]: 8.1c scrub ok
Nov 24 19:55:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 24 19:55:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:13.248+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 24 19:55:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:13.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:13 compute-0 sudo[114540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizlkwwrmysgvvpsdrwuopfafjodebmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014112.9718559-365-184067729513552/AnsiballZ_systemd.py'
Nov 24 19:55:13 compute-0 sudo[114540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:13 compute-0 python3.9[114542]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:55:14 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 19:55:14 compute-0 ceph-mon[75677]: 2.1b scrub starts
Nov 24 19:55:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:14 compute-0 ceph-mon[75677]: 2.1b scrub ok
Nov 24 19:55:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:14 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 19:55:14 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 19:55:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:14.212+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:14 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 19:55:14 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 19:55:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:14.470+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 24 19:55:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 24 19:55:14 compute-0 sudo[114540]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:15 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:15 compute-0 ceph-mon[75677]: 3.9 scrub starts
Nov 24 19:55:15 compute-0 ceph-mon[75677]: pgmap v347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:15 compute-0 ceph-mon[75677]: 3.9 scrub ok
Nov 24 19:55:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 24 19:55:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:15.237+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 24 19:55:15 compute-0 python3.9[114704]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 19:55:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:15.444+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:16.209+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:16 compute-0 ceph-mon[75677]: 2.a scrub starts
Nov 24 19:55:16 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:16 compute-0 ceph-mon[75677]: 2.a scrub ok
Nov 24 19:55:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:16.419+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 231 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
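[annotation] The same SLOW_OPS incident is visible at three layers in the lines above: each OSD's get_health_metrics line (osd.0: 1 op against pool 'vms', osd.1: 18 ops against 'default.rgw.log'), the per-OSD log_channel WRN summaries, and the mon's aggregated health check ("19 slow ops, oldest one blocked for 231 sec" at 19:55:16, which dates the oldest op to roughly 19:51:25). A minimal sketch for tracking whether the blockage is aging or clearing by scraping exactly these message shapes out of the journal; the journalctl unit name and invocation are assumptions for this host, not something the log prescribes:

```python
import re
import subprocess

# Matches the mon's aggregated health line, e.g.:
#   "Health check update: 19 slow ops, oldest one blocked for 231 sec,
#    daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)"
HEALTH_RE = re.compile(
    r"Health check update: (?P<ops>\d+) slow ops, "
    r"oldest one blocked for (?P<sec>\d+) sec, "
    r"daemons \[(?P<daemons>[^\]]+)\] have slow ops\. \(SLOW_OPS\)"
)

def slow_op_updates(unit: str = "ceph-mon.service"):
    """Yield (ops, blocked_sec, daemons) per SLOW_OPS update in the journal.

    The unit name is a guess; on this host the messages arrive via
    ceph-mon[75677], whose actual systemd unit may be fsid-qualified.
    """
    out = subprocess.run(
        ["journalctl", "-u", unit, "--no-pager", "-o", "cat"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        m = HEALTH_RE.search(line)
        if m:
            yield int(m["ops"]), int(m["sec"]), m["daemons"].split(",")

# Later updates in this capture step 231 s -> 241 s -> 246 s over ten
# seconds of wall clock: the oldest op is aging, not clearing.
```

From there the usual next step would be the OSD admin socket (e.g. `ceph daemon osd.1 dump_ops_in_flight`) to see which ops are stuck, though that output is not part of this log.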
Nov 24 19:55:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 24 19:55:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:17.246+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 24 19:55:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:17 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 231 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:17 compute-0 ceph-mon[75677]: pgmap v348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:17.432+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 24 19:55:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 24 19:55:17 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 24 19:55:17 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 24 19:55:17 compute-0 sudo[114854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvmiakdnmmmzckcswgdcgxskgeuruuhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014117.3152175-422-82513454819880/AnsiballZ_systemd.py'
Nov 24 19:55:17 compute-0 sudo[114854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:18 compute-0 python3.9[114856]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:55:18 compute-0 sudo[114854]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 19:55:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:18.219+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 19:55:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:18.397+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 5.19 scrub starts
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 5.19 scrub ok
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 8.6 scrub starts
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 8.6 scrub ok
Nov 24 19:55:18 compute-0 ceph-mon[75677]: 9.e deep-scrub starts
Nov 24 19:55:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:18 compute-0 sudo[115008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yheiurlbacupuamtfluqzuvipeabwzno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014118.301587-422-34254236099072/AnsiballZ_systemd.py'
Nov 24 19:55:18 compute-0 sudo[115008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:19.180+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 24 19:55:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 24 19:55:19 compute-0 python3.9[115010]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
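[annotation] The two systemd module invocations (ksm.service at 19:55:18, ksmtuned.service here) show the job stopping and disabling kernel same-page merging on the compute node. A plain equivalent of what `state=stopped` plus `enabled=False` asks systemd to do, written with subprocess to match the other sketches; the unit names come straight from the two Invoked lines:

```python
import subprocess

for unit in ("ksm.service", "ksmtuned.service"):
    # state=stopped + enabled=False from the module invocations above
    subprocess.run(["systemctl", "stop", unit], check=True)
    subprocess.run(["systemctl", "disable", unit], check=True)
```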
Nov 24 19:55:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 24 19:55:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:19.431+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 24 19:55:19 compute-0 sudo[115008]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:19 compute-0 ceph-mon[75677]: 9.e deep-scrub ok
Nov 24 19:55:19 compute-0 ceph-mon[75677]: 5.18 scrub starts
Nov 24 19:55:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:19 compute-0 ceph-mon[75677]: 5.18 scrub ok
Nov 24 19:55:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:19 compute-0 ceph-mon[75677]: pgmap v349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:19 compute-0 sshd-session[106867]: Connection closed by 192.168.122.30 port 37770
Nov 24 19:55:19 compute-0 sshd-session[106864]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:55:19 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 24 19:55:19 compute-0 systemd[1]: session-35.scope: Consumed 1min 13.414s CPU time.
Nov 24 19:55:19 compute-0 systemd-logind[795]: Session 35 logged out. Waiting for processes to exit.
Nov 24 19:55:19 compute-0 systemd-logind[795]: Removed session 35.
Nov 24 19:55:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 24 19:55:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:20.176+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 24 19:55:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:20.385+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:20 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Nov 24 19:55:20 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 10.2 scrub starts
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 10.2 scrub ok
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 3.6 scrub starts
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 3.6 scrub ok
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 5.1a deep-scrub starts
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 5.1a deep-scrub ok
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:21 compute-0 ceph-mon[75677]: pgmap v350: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 9.6 deep-scrub starts
Nov 24 19:55:21 compute-0 ceph-mon[75677]: 9.6 deep-scrub ok
Nov 24 19:55:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 24 19:55:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:21.203+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 24 19:55:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:21.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 241 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:21 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 24 19:55:21 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 24 19:55:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:22.156+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:22 compute-0 ceph-mon[75677]: 10.14 scrub starts
Nov 24 19:55:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:22 compute-0 ceph-mon[75677]: 10.14 scrub ok
Nov 24 19:55:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:22 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 241 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:22 compute-0 ceph-mon[75677]: 9.7 scrub starts
Nov 24 19:55:22 compute-0 ceph-mon[75677]: 9.7 scrub ok
Nov 24 19:55:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 24 19:55:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:22.464+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 24 19:55:22 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 24 19:55:22 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 24 19:55:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 24 19:55:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:23.141+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 24 19:55:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:23 compute-0 ceph-mon[75677]: 7.9 scrub starts
Nov 24 19:55:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:23 compute-0 ceph-mon[75677]: pgmap v351: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:23 compute-0 ceph-mon[75677]: 7.9 scrub ok
Nov 24 19:55:23 compute-0 ceph-mon[75677]: 9.f scrub starts
Nov 24 19:55:23 compute-0 ceph-mon[75677]: 9.f scrub ok
Nov 24 19:55:23 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 24 19:55:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:23.506+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:23 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 24 19:55:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 24 19:55:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:24.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 24 19:55:24 compute-0 ceph-mon[75677]: 4.12 scrub starts
Nov 24 19:55:24 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:24 compute-0 ceph-mon[75677]: 4.12 scrub ok
Nov 24 19:55:24 compute-0 ceph-mon[75677]: 9.17 scrub starts
Nov 24 19:55:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:24 compute-0 ceph-mon[75677]: 9.17 scrub ok
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:55:24
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'volumes']
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
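[annotation] This balancer pass is a no-op: mode upmap with "max misplaced 0.050000", eleven pools scanned, and "prepared 0/10 changes" because the PG distribution already needs nothing moved. A tiny sketch of the guard those numbers imply, assuming the documented meaning of the misplaced ceiling (nothing here goes beyond the figures the mgr printed):

```python
def may_rebalance(misplaced_pgs: int, total_pgs: int,
                  max_misplaced: float = 0.05) -> bool:
    """Only apply upmap changes while the misplaced fraction stays under
    the ceiling the balancer logged ("max misplaced 0.050000")."""
    return (misplaced_pgs / total_pgs) <= max_misplaced

# With the pgmap above: 305 PGs, none misplaced, so the balancer was free
# to act -- it simply found no improvements ("prepared 0/10 changes").
assert may_rebalance(0, 305)
```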
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:55:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:24 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 19:55:24 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 19:55:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:24.541+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 24 19:55:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 24 19:55:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:25.120+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 4.14 scrub starts
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 4.14 scrub ok
Nov 24 19:55:25 compute-0 ceph-mon[75677]: pgmap v352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 6.8 scrub starts
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 6.8 scrub ok
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 3.f deep-scrub starts
Nov 24 19:55:25 compute-0 ceph-mon[75677]: 3.f deep-scrub ok
Nov 24 19:55:25 compute-0 sshd-session[115038]: Accepted publickey for zuul from 192.168.122.30 port 47072 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:55:25 compute-0 systemd-logind[795]: New session 36 of user zuul.
Nov 24 19:55:25 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 24 19:55:25 compute-0 sshd-session[115038]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:55:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 24 19:55:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:25.556+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 24 19:55:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 24 19:55:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:26.119+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 24 19:55:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:26 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 24 19:55:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 246 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:26 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:26 compute-0 ceph-mon[75677]: 11.6 scrub starts
Nov 24 19:55:26 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 24 19:55:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:26.602+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Nov 24 19:55:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Nov 24 19:55:26 compute-0 python3.9[115191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:55:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:27.074+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 24 19:55:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 24 19:55:27 compute-0 ceph-mon[75677]: 11.6 scrub ok
Nov 24 19:55:27 compute-0 ceph-mon[75677]: 4.8 scrub starts
Nov 24 19:55:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:27 compute-0 ceph-mon[75677]: 4.8 scrub ok
Nov 24 19:55:27 compute-0 ceph-mon[75677]: pgmap v353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:27 compute-0 ceph-mon[75677]: 9.18 scrub starts
Nov 24 19:55:27 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 246 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:27 compute-0 ceph-mon[75677]: 9.18 scrub ok
Nov 24 19:55:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:27.606+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:28 compute-0 sudo[115345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbthhnaqwtbcpetjidldzzzogeunlncf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014127.4638078-36-196209446207871/AnsiballZ_getent.py'
Nov 24 19:55:28 compute-0 sudo[115345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:28.093+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 19:55:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 19:55:28 compute-0 python3.9[115347]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 19:55:28 compute-0 sudo[115345]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:28.588+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 24 19:55:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 8.1a deep-scrub starts
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 8.1a deep-scrub ok
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 4.9 scrub starts
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 4.9 scrub ok
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 4.5 scrub starts
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:28 compute-0 ceph-mon[75677]: 4.5 scrub ok
Nov 24 19:55:29 compute-0 sudo[115498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hebrhaszrwopmedsrrshetpalnsliytf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014128.669568-48-73965884765041/AnsiballZ_setup.py'
Nov 24 19:55:29 compute-0 sudo[115498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 24 19:55:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:29.090+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 24 19:55:29 compute-0 python3.9[115500]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:55:29 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 24 19:55:29 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 24 19:55:29 compute-0 sudo[115498]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:29.616+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 24 19:55:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:29 compute-0 ceph-mon[75677]: pgmap v354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 11.19 scrub starts
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 11.19 scrub ok
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 4.7 scrub starts
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 4.7 scrub ok
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 9.8 scrub starts
Nov 24 19:55:29 compute-0 ceph-mon[75677]: 9.8 scrub ok
Nov 24 19:55:30 compute-0 sudo[115582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teikxdqjedksdsuomcxfgwbnmucygngr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014128.669568-48-73965884765041/AnsiballZ_dnf.py'
Nov 24 19:55:30 compute-0 sudo[115582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:30.139+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:30 compute-0 python3.9[115584]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 19:55:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:30.634+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:30 compute-0 ceph-mon[75677]: 3.12 scrub starts
Nov 24 19:55:30 compute-0 ceph-mon[75677]: 3.12 scrub ok
Nov 24 19:55:30 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:30 compute-0 ceph-mon[75677]: pgmap v355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 24 19:55:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:31.166+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 24 19:55:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:31 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 24 19:55:31 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 24 19:55:31 compute-0 sudo[115582]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 24 19:55:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:31.659+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 24 19:55:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:31 compute-0 ceph-mon[75677]: 6.1 scrub starts
Nov 24 19:55:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:31 compute-0 ceph-mon[75677]: 6.1 scrub ok
Nov 24 19:55:31 compute-0 ceph-mon[75677]: 9.c scrub starts
Nov 24 19:55:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:32.200+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:32 compute-0 sudo[115735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygxoprvnkzcpmdfvfgjsgczgirrahbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014131.982521-62-111732667537141/AnsiballZ_dnf.py'
Nov 24 19:55:32 compute-0 sudo[115735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:32 compute-0 python3.9[115737]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
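[annotation] The two dnf invocations a couple of seconds apart (19:55:30 with download_only=True, then this one with state=present) stage the openvswitch RPMs into the local cache before committing the install. A rough equivalent against the dnf Python API to illustrate the two phases; this is a sketch of the pattern, not what AnsiballZ_dnf.py literally executes:

```python
import dnf

base = dnf.Base()
base.read_all_repos()
base.fill_sack()  # load repo and installed-package metadata

# Phase 1 -- what download_only=True amounts to: resolve and fetch,
# but never commit a transaction.
base.install("openvswitch")
base.resolve()
base.download_packages(base.transaction.install_set)

# Phase 2 -- state=present: with the packages already cached, committing
# the transaction is the only remaining work.
base.do_transaction()
```

In the log the phases are separate module invocations (note the distinct ansible-tmp-1764014127... and ansible-tmp-1764014131... work directories), so the second run re-resolves against the now-warm cache.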
Nov 24 19:55:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:32.686+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:32 compute-0 ceph-mon[75677]: 9.c scrub ok
Nov 24 19:55:32 compute-0 ceph-mon[75677]: 8.18 scrub starts
Nov 24 19:55:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:32 compute-0 ceph-mon[75677]: 8.18 scrub ok
Nov 24 19:55:32 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:32 compute-0 ceph-mon[75677]: pgmap v356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:33.213+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:33 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 24 19:55:33 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 24 19:55:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:33.689+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:33 compute-0 sudo[115735]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:33 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:33 compute-0 ceph-mon[75677]: 6.f scrub starts
Nov 24 19:55:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 24 19:55:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:34.175+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
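
The pg_autoscaler pass above runs the same calculation for every pool: the fraction of raw capacity the pool occupies, a per-pool bias (4.0 for the two metadata pools), and a resulting pg target that is then quantized. The logged targets are consistent with pg_target = usage_ratio * bias * num_osds * mon_target_pg_per_osd, assuming this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100; a minimal Python check against two of the lines above:

    # Sketch under the assumptions stated above, not the autoscaler's actual code.
    def raw_pg_target(usage_ratio, bias, num_osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * num_osds * target_pg_per_osd

    # Pool '.mgr': "using 7.185749983720779e-06 of space, bias 1.0"
    print(raw_pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557249951162337
    # Pool 'cephfs.cephfs.meta': "using 5.087256625643029e-07 of space, bias 4.0"
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.0006104707950771635

Both reproduce the logged pg targets (up to float rounding); the final "quantized to N (current N)" step rounds to a power of two subject to per-pool minimums and the autoscaler profile, which the sketch does not model.
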
Nov 24 19:55:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:34.738+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:34 compute-0 sudo[115888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwgbbtafycquykocrvfmaupsvhuonhoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014134.1431859-70-186943791901443/AnsiballZ_systemd.py'
Nov 24 19:55:34 compute-0 sudo[115888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:34 compute-0 ceph-mon[75677]: 6.f scrub ok
Nov 24 19:55:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:34 compute-0 ceph-mon[75677]: 4.d scrub starts
Nov 24 19:55:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:34 compute-0 ceph-mon[75677]: 4.d scrub ok
Nov 24 19:55:34 compute-0 ceph-mon[75677]: pgmap v357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
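
The pgmap summary above recurs every couple of seconds for the rest of this window with an unchanged split: 2 PGs active+clean+laggy and 303 active+clean. When skimming a capture like this, a throwaway parser makes the repeats easy to diff; the regex below is fitted to these lines only, not to any stable Ceph output format:

    import re

    PGMAP_RE = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    line = ("pgmap v357: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    states = {}
    for part in m.group("states").split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)
    print(m.group("ver"), states)  # 357 {'active+clean+laggy': 2, 'active+clean': 303}
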
Nov 24 19:55:35 compute-0 python3.9[115890]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 19:55:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 24 19:55:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:35.181+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 24 19:55:35 compute-0 sudo[115888]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:35.726+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 24 19:55:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 24 19:55:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:36 compute-0 ceph-mon[75677]: 4.f scrub starts
Nov 24 19:55:36 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:36 compute-0 ceph-mon[75677]: 4.f scrub ok
Nov 24 19:55:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:36.196+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:36 compute-0 python3.9[116043]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:55:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 251 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:36.719+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:37 compute-0 ceph-mon[75677]: 8.1f scrub starts
Nov 24 19:55:37 compute-0 ceph-mon[75677]: 8.1f scrub ok
Nov 24 19:55:37 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:37 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 251 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:37 compute-0 ceph-mon[75677]: pgmap v358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
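
The SLOW_OPS health updates report an age rather than a start time; subtracting it from the log timestamp places the oldest blocked op around 19:51, several minutes before this excerpt begins. That is why osd.0 and osd.1 re-report the very same client ops (client.14138.0:17 and client.14257.0:531) every second above: these are stuck ops being re-announced, not a stream of new ones. A worked check:

    from datetime import datetime, timedelta

    # "oldest one blocked for 251 sec" logged at 19:55:36 (see above)
    print(datetime(2025, 11, 24, 19, 55, 36) - timedelta(seconds=251))
    # -> 2025-11-24 19:51:25
    # The 19:55:41 update says 261 sec: same op, periodically re-aged and rounded.
    print(datetime(2025, 11, 24, 19, 55, 41) - timedelta(seconds=261))
    # -> 2025-11-24 19:51:20
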
Nov 24 19:55:37 compute-0 sudo[116193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osszymhfjjjdstqwmnjgwiexdqitkgnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014136.617927-88-263905513556767/AnsiballZ_sefcontext.py'
Nov 24 19:55:37 compute-0 sudo[116193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:37.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:37 compute-0 python3.9[116195]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 19:55:37 compute-0 sudo[116193]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:37 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 24 19:55:37 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 24 19:55:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:37.721+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:38 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:38 compute-0 ceph-mon[75677]: 9.13 scrub starts
Nov 24 19:55:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:38.195+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:38 compute-0 python3.9[116345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:55:38 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 24 19:55:38 compute-0 ceph-osd[90884]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 24 19:55:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:38.714+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:39.183+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:39 compute-0 sudo[116501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xupmmrezdfampytzzpsyzakyfrepzhps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014139.048419-106-50201773379750/AnsiballZ_dnf.py'
Nov 24 19:55:39 compute-0 sudo[116501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:39.734+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:39 compute-0 ceph-mon[75677]: 9.13 scrub ok
Nov 24 19:55:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:39 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:39 compute-0 ceph-mon[75677]: pgmap v359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:39 compute-0 python3.9[116503]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
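
Interleaved with the Ceph chatter, every "Invoked with" line is journald's record of one Ansible module run, including its full argument set. Grepping the module names back out recovers the play's task order; the field layout assumed below is taken from these lines only:

    import re

    # "python3.9[PID]: ansible-<module> Invoked with ..." as seen in this capture
    TASK_RE = re.compile(r"python3\.9\[\d+\]: ansible-([\w.]+) Invoked with")

    def modules(lines):
        return [m.group(1) for line in lines if (m := TASK_RE.search(line))]

    sample = ("Nov 24 19:55:39 compute-0 python3.9[116503]: "
              "ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2']")
    print(modules([sample]))  # ['ansible.legacy.dnf']

Applied to this window it yields systemd, setup, sefcontext, setup, dnf, command, file, stat, dnf, dnf, stat: a package-and-SELinux bootstrap sequence running underneath the Ceph health noise.
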
Nov 24 19:55:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 24 19:55:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:40.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 24 19:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:55:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:40.728+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 24 19:55:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 9.19 scrub starts
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 9.19 scrub ok
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 4.4 scrub starts
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:40 compute-0 ceph-mon[75677]: 4.4 scrub ok
Nov 24 19:55:40 compute-0 ceph-mon[75677]: pgmap v360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:41 compute-0 sudo[116501]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 24 19:55:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:41.149+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 24 19:55:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 261 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:41.703+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 24 19:55:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 24 19:55:41 compute-0 sudo[116654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpogtnuypztyxxutmlcphwalongndwjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014141.2804081-114-13072483701661/AnsiballZ_command.py'
Nov 24 19:55:41 compute-0 sudo[116654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:41 compute-0 ceph-mon[75677]: 3.15 scrub starts
Nov 24 19:55:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:41 compute-0 ceph-mon[75677]: 3.15 scrub ok
Nov 24 19:55:41 compute-0 ceph-mon[75677]: 6.e scrub starts
Nov 24 19:55:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:41 compute-0 ceph-mon[75677]: 6.e scrub ok
Nov 24 19:55:41 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 261 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:42 compute-0 python3.9[116656]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:55:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 24 19:55:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:42.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 24 19:55:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:42.722+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 24 19:55:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 24 19:55:42 compute-0 sudo[116654]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:43 compute-0 ceph-mon[75677]: 11.4 scrub starts
Nov 24 19:55:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:43 compute-0 ceph-mon[75677]: 11.4 scrub ok
Nov 24 19:55:43 compute-0 ceph-mon[75677]: 6.2 scrub starts
Nov 24 19:55:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:43 compute-0 ceph-mon[75677]: 6.2 scrub ok
Nov 24 19:55:43 compute-0 ceph-mon[75677]: pgmap v361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:43.154+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:43 compute-0 sudo[116941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgzbnbqpdcxskpsaqjnnslmiwtrjkfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014143.125529-122-128486260821520/AnsiballZ_file.py'
Nov 24 19:55:43 compute-0 sudo[116941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:43.701+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:43 compute-0 python3.9[116943]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 19:55:43 compute-0 sudo[116941]: pam_unix(sudo:session): session closed for user root
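
Taken together, the community.general.sefcontext task at 19:55:37 and the ansible.builtin.file task just above form one unit: register a persistent SELinux file-context rule for /var/lib/edpm-config(/.*)? and then create the directory with mode 0750 and type container_file_t. In effect the pair is roughly the following commands (a sketch; the modules use their own SELinux bindings, not these CLIs):

    import subprocess

    # Equivalent-in-effect to the sefcontext + file tasks above (sketch only).
    subprocess.run(["semanage", "fcontext", "-a", "-t", "container_file_t",
                    "/var/lib/edpm-config(/.*)?"], check=True)  # errors if rule exists
    subprocess.run(["install", "-d", "-m", "0750", "/var/lib/edpm-config"], check=True)
    subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)
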
Nov 24 19:55:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:44 compute-0 ceph-mon[75677]: 3.17 scrub starts
Nov 24 19:55:44 compute-0 ceph-mon[75677]: 3.17 scrub ok
Nov 24 19:55:44 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:44.125+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:44.742+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:44 compute-0 python3.9[117093]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:55:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:45.120+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:45 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:45 compute-0 ceph-mon[75677]: pgmap v362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.234350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145234477, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7807, "num_deletes": 251, "total_data_size": 10110257, "memory_usage": 10361136, "flush_reason": "Manual Compaction"}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145353351, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8275988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7945, "table_properties": {"data_size": 8245345, "index_size": 19968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92205, "raw_average_key_size": 24, "raw_value_size": 8171779, "raw_average_value_size": 2130, "num_data_blocks": 868, "num_entries": 3835, "num_filter_entries": 3835, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013601, "oldest_key_time": 1764013601, "file_creation_time": 1764014145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 119076 microseconds, and 28425 cpu microseconds.
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.353430) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8275988 bytes OK
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.353458) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.359390) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.359417) EVENT_LOG_v1 {"time_micros": 1764014145359408, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.359443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10074701, prev total WAL file size 10074701, number of live WAL files 2.
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.363365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8082KB) 13(53KB) 8(1944B)]
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145363629, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8333189, "oldest_snapshot_seqno": -1}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3651 keys, 8288601 bytes, temperature: kUnknown
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145481980, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8288601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8258355, "index_size": 20024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 90277, "raw_average_key_size": 24, "raw_value_size": 8186327, "raw_average_value_size": 2242, "num_data_blocks": 872, "num_entries": 3651, "num_filter_entries": 3651, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.482482) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8288601 bytes
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.489022) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.3 rd, 69.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.9, 0.0 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3941, records dropped: 290 output_compression: NoCompression
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.489075) EVENT_LOG_v1 {"time_micros": 1764014145489041, "job": 4, "event": "compaction_finished", "compaction_time_micros": 118531, "compaction_time_cpu_micros": 35655, "output_level": 6, "num_output_files": 1, "total_output_size": 8288601, "num_input_records": 3941, "num_output_records": 3651, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145493019, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145493400, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014145493815, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 24 19:55:45 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:55:45.363142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
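
That burst is the mon's RocksDB flushing one ~10 MB memtable to an 8.3 MB L0 table (JOB 3), then manually compacting the three L0 files straight to L6 (JOB 4). The throughput and amplification figures in its human-readable summary can be recomputed from its own EVENT_LOG values; the formulas below are inferred from that summary line, and MB here means 10^6 bytes:

    import json

    # Values copied verbatim from the compaction_started / compaction_finished
    # EVENT_LOG lines above.
    started = json.loads('{"input_data_size": 8333189}')
    finished = json.loads('{"compaction_time_micros": 118531, "total_output_size": 8288601}')
    us = finished["compaction_time_micros"]
    rd, wr = started["input_data_size"], finished["total_output_size"]
    print(f"{rd / us:.1f} MB/s rd, {wr / us:.1f} MB/s wr")  # 70.3 rd, 69.9 wr
    print(f"read-write-amplify({(rd + wr) / rd:.1f})")      # 2.0
    print(f"write-amplify({wr / rd:.1f})")                  # 1.0
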
Nov 24 19:55:45 compute-0 sudo[117246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiovutqnohjdhpjxhxaajcractakhvhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014145.2087822-138-144515626757019/AnsiballZ_dnf.py'
Nov 24 19:55:45 compute-0 sudo[117246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:45.707+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:45 compute-0 python3.9[117248]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:55:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 24 19:55:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:46.085+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 24 19:55:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:46 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:46.753+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:47 compute-0 sudo[117246]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:47.122+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 24 19:55:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 24 19:55:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 266 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:47 compute-0 ceph-mon[75677]: 6.6 scrub starts
Nov 24 19:55:47 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:47 compute-0 ceph-mon[75677]: 6.6 scrub ok
Nov 24 19:55:47 compute-0 ceph-mon[75677]: pgmap v363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:47 compute-0 sudo[117399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vberpbhdwlxaiqaptfosqnboneatuyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014147.2807217-147-170456592626443/AnsiballZ_dnf.py'
Nov 24 19:55:47 compute-0 sudo[117399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:47.794+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:47 compute-0 python3.9[117401]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:55:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 24 19:55:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:48.167+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 24 19:55:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:48 compute-0 ceph-mon[75677]: 6.c scrub starts
Nov 24 19:55:48 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:48 compute-0 ceph-mon[75677]: 6.c scrub ok
Nov 24 19:55:48 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 266 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:48.826+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:49 compute-0 sudo[117403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:49 compute-0 sudo[117403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:49 compute-0 sudo[117403]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:49 compute-0 sudo[117428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:55:49 compute-0 sudo[117428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:49 compute-0 sudo[117428]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:49.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:49 compute-0 sudo[117399]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:49 compute-0 sudo[117453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:49 compute-0 sudo[117453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:49 compute-0 sudo[117453]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:49 compute-0 sudo[117502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:55:49 compute-0 sudo[117502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:49 compute-0 ceph-mon[75677]: 6.4 scrub starts
Nov 24 19:55:49 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:49 compute-0 ceph-mon[75677]: 6.4 scrub ok
Nov 24 19:55:49 compute-0 ceph-mon[75677]: pgmap v364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:49.824+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:49 compute-0 sudo[117671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lewqumhjkulsbhrdyvfqmoxueqcdcrhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014149.5160804-159-112327835702796/AnsiballZ_stat.py'
Nov 24 19:55:49 compute-0 sudo[117671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:49 compute-0 sudo[117502]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:55:49 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:55:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:55:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:55:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:55:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:55:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev abee7279-62f8-46d6-b7a7-8850a98b155d does not exist
Nov 24 19:55:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9024ed28-bf48-478e-af20-3f17cd402525 does not exist
Nov 24 19:55:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 32f011c8-3310-4608-a2f1-2a87e84e384d does not exist
Nov 24 19:55:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:55:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:55:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:55:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:55:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:55:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:55:50 compute-0 python3.9[117682]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:55:50 compute-0 sudo[117685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:50 compute-0 sudo[117685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:50 compute-0 sudo[117685]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:50 compute-0 sudo[117671]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:50.202+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 24 19:55:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 24 19:55:50 compute-0 sudo[117712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:55:50 compute-0 sudo[117712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:50 compute-0 sudo[117712]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:50 compute-0 sudo[117761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:50 compute-0 sudo[117761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:50 compute-0 sudo[117761]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:55:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:55:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:55:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:55:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:55:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:55:50 compute-0 sudo[117809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:55:50 compute-0 sudo[117809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:50.777+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:50 compute-0 podman[117947]: 2025-11-24 19:55:50.842752317 +0000 UTC m=+0.092044139 container create 1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:55:50 compute-0 sudo[117990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwhtbnpqavpbtvmdbbuuuansqttfjig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014150.3620014-167-228435704686127/AnsiballZ_slurp.py'
Nov 24 19:55:50 compute-0 sudo[117990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:55:50 compute-0 podman[117947]: 2025-11-24 19:55:50.783096161 +0000 UTC m=+0.032388043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:55:50 compute-0 systemd[1]: Started libpod-conmon-1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0.scope.
Nov 24 19:55:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:55:51 compute-0 podman[117947]: 2025-11-24 19:55:51.015758848 +0000 UTC m=+0.265050620 container init 1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:55:51 compute-0 podman[117947]: 2025-11-24 19:55:51.029253542 +0000 UTC m=+0.278545354 container start 1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:55:51 compute-0 xenodochial_banach[117995]: 167 167
Nov 24 19:55:51 compute-0 systemd[1]: libpod-1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0.scope: Deactivated successfully.
Nov 24 19:55:51 compute-0 python3.9[117992]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 24 19:55:51 compute-0 sudo[117990]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:51 compute-0 podman[117947]: 2025-11-24 19:55:51.082050024 +0000 UTC m=+0.331341816 container attach 1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_banach, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:55:51 compute-0 podman[117947]: 2025-11-24 19:55:51.083439631 +0000 UTC m=+0.332731453 container died 1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-44fff77de513e17ecd5aedc0b447701a6bfbf57572d05950bbb1531d8a35d660-merged.mount: Deactivated successfully.
Nov 24 19:55:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 24 19:55:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:51.199+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 24 19:55:51 compute-0 podman[117947]: 2025-11-24 19:55:51.241261913 +0000 UTC m=+0.490553725 container remove 1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:55:51 compute-0 systemd[1]: libpod-conmon-1189c6694d2b24d01a182f10944be8587de30ab0b12840e54f5aaec87bc775d0.scope: Deactivated successfully.
Nov 24 19:55:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:51 compute-0 ceph-mon[75677]: 6.b scrub starts
Nov 24 19:55:51 compute-0 ceph-mon[75677]: 6.b scrub ok
Nov 24 19:55:51 compute-0 ceph-mon[75677]: pgmap v365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:51 compute-0 podman[118042]: 2025-11-24 19:55:51.519074796 +0000 UTC m=+0.097030565 container create 366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 24 19:55:51 compute-0 podman[118042]: 2025-11-24 19:55:51.468033342 +0000 UTC m=+0.045989171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:55:51 compute-0 systemd[1]: Started libpod-conmon-366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c.scope.
Nov 24 19:55:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0550fa6ffb676b4730aff53d6aba4e0066c45be345b37298721d58811e11c27b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0550fa6ffb676b4730aff53d6aba4e0066c45be345b37298721d58811e11c27b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0550fa6ffb676b4730aff53d6aba4e0066c45be345b37298721d58811e11c27b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0550fa6ffb676b4730aff53d6aba4e0066c45be345b37298721d58811e11c27b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0550fa6ffb676b4730aff53d6aba4e0066c45be345b37298721d58811e11c27b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:51 compute-0 podman[118042]: 2025-11-24 19:55:51.698492339 +0000 UTC m=+0.276448098 container init 366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:55:51 compute-0 podman[118042]: 2025-11-24 19:55:51.710573985 +0000 UTC m=+0.288529754 container start 366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:55:51 compute-0 podman[118042]: 2025-11-24 19:55:51.73862031 +0000 UTC m=+0.316576129 container attach 366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 19:55:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:51.780+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:51 compute-0 sshd-session[115041]: Connection closed by 192.168.122.30 port 47072
Nov 24 19:55:51 compute-0 sshd-session[115038]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:55:51 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 24 19:55:51 compute-0 systemd[1]: session-36.scope: Consumed 20.701s CPU time.
Nov 24 19:55:51 compute-0 systemd-logind[795]: Session 36 logged out. Waiting for processes to exit.
Nov 24 19:55:51 compute-0 systemd-logind[795]: Removed session 36.
Nov 24 19:55:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:52.159+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:52 compute-0 ceph-mon[75677]: 6.d scrub starts
Nov 24 19:55:52 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:52 compute-0 ceph-mon[75677]: 6.d scrub ok
Nov 24 19:55:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 24 19:55:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:52.761+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 24 19:55:52 compute-0 silly_leakey[118058]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:55:52 compute-0 silly_leakey[118058]: --> relative data size: 1.0
Nov 24 19:55:52 compute-0 silly_leakey[118058]: --> All data devices are unavailable
Nov 24 19:55:52 compute-0 systemd[1]: libpod-366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c.scope: Deactivated successfully.
Nov 24 19:55:52 compute-0 systemd[1]: libpod-366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c.scope: Consumed 1.222s CPU time.
Nov 24 19:55:53 compute-0 podman[118088]: 2025-11-24 19:55:53.0442358 +0000 UTC m=+0.043202435 container died 366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:55:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:53.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0550fa6ffb676b4730aff53d6aba4e0066c45be345b37298721d58811e11c27b-merged.mount: Deactivated successfully.
Nov 24 19:55:53 compute-0 podman[118088]: 2025-11-24 19:55:53.311085319 +0000 UTC m=+0.310051914 container remove 366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:55:53 compute-0 systemd[1]: libpod-conmon-366cfeff83cc4b818bd52bf6a565c9bedc08b464e4641233723c7e810cccd26c.scope: Deactivated successfully.
Nov 24 19:55:53 compute-0 sudo[117809]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:53 compute-0 sudo[118103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:53 compute-0 sudo[118103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:53 compute-0 sudo[118103]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:53 compute-0 sudo[118128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:55:53 compute-0 sudo[118128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:53 compute-0 sudo[118128]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:53 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:53 compute-0 ceph-mon[75677]: pgmap v366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:53 compute-0 ceph-mon[75677]: 7.13 scrub starts
Nov 24 19:55:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:53 compute-0 ceph-mon[75677]: 7.13 scrub ok
Nov 24 19:55:53 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:53 compute-0 sudo[118153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:53 compute-0 sudo[118153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:53 compute-0 sudo[118153]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:53 compute-0 sudo[118178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:55:53 compute-0 sudo[118178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:53.799+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:54.161+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.279858244 +0000 UTC m=+0.088950446 container create a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.235540521 +0000 UTC m=+0.044632783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:55:54 compute-0 systemd[1]: Started libpod-conmon-a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23.scope.
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:55:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.42227702 +0000 UTC m=+0.231369272 container init a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.435792785 +0000 UTC m=+0.244884987 container start a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 19:55:54 compute-0 upbeat_tesla[118260]: 167 167
Nov 24 19:55:54 compute-0 systemd[1]: libpod-a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23.scope: Deactivated successfully.
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.471036355 +0000 UTC m=+0.280128567 container attach a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.472144044 +0000 UTC m=+0.281236236 container died a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:55:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-700a5a4a432dc27045dfa535a583b40437f055f4d6eede16a1c4456b6f1f976b-merged.mount: Deactivated successfully.
Nov 24 19:55:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:54 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:54 compute-0 ceph-mon[75677]: pgmap v367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:54.809+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:54 compute-0 podman[118243]: 2025-11-24 19:55:54.841006871 +0000 UTC m=+0.650099063 container remove a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:55:54 compute-0 systemd[1]: libpod-conmon-a347848157886d37f2e18597072e7792bb05f9963b082eb7ed228cd5be0d2a23.scope: Deactivated successfully.
Nov 24 19:55:55 compute-0 podman[118286]: 2025-11-24 19:55:55.088573439 +0000 UTC m=+0.098399861 container create 7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:55:55 compute-0 podman[118286]: 2025-11-24 19:55:55.02178668 +0000 UTC m=+0.031613162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:55:55 compute-0 systemd[1]: Started libpod-conmon-7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2.scope.
Nov 24 19:55:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:55.173+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12bdde51a665473580ef37f48f1aba26bac95fc57869537a0cef449abeb7ce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12bdde51a665473580ef37f48f1aba26bac95fc57869537a0cef449abeb7ce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12bdde51a665473580ef37f48f1aba26bac95fc57869537a0cef449abeb7ce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12bdde51a665473580ef37f48f1aba26bac95fc57869537a0cef449abeb7ce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:55 compute-0 podman[118286]: 2025-11-24 19:55:55.249917676 +0000 UTC m=+0.259744108 container init 7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:55:55 compute-0 podman[118286]: 2025-11-24 19:55:55.256659117 +0000 UTC m=+0.266485519 container start 7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 19:55:55 compute-0 podman[118286]: 2025-11-24 19:55:55.275469004 +0000 UTC m=+0.285295416 container attach 7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:55:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:55 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:55.827+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 24 19:55:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 24 19:55:56 compute-0 boring_moser[118304]: {
Nov 24 19:55:56 compute-0 boring_moser[118304]:     "0": [
Nov 24 19:55:56 compute-0 boring_moser[118304]:         {
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "devices": [
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "/dev/loop3"
Nov 24 19:55:56 compute-0 boring_moser[118304]:             ],
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_name": "ceph_lv0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_size": "21470642176",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "name": "ceph_lv0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "tags": {
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cluster_name": "ceph",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.crush_device_class": "",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.encrypted": "0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osd_id": "0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.type": "block",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.vdo": "0"
Nov 24 19:55:56 compute-0 boring_moser[118304]:             },
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "type": "block",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "vg_name": "ceph_vg0"
Nov 24 19:55:56 compute-0 boring_moser[118304]:         }
Nov 24 19:55:56 compute-0 boring_moser[118304]:     ],
Nov 24 19:55:56 compute-0 boring_moser[118304]:     "1": [
Nov 24 19:55:56 compute-0 boring_moser[118304]:         {
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "devices": [
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "/dev/loop4"
Nov 24 19:55:56 compute-0 boring_moser[118304]:             ],
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_name": "ceph_lv1",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_size": "21470642176",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "name": "ceph_lv1",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "tags": {
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cluster_name": "ceph",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.crush_device_class": "",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.encrypted": "0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osd_id": "1",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.type": "block",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.vdo": "0"
Nov 24 19:55:56 compute-0 boring_moser[118304]:             },
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "type": "block",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "vg_name": "ceph_vg1"
Nov 24 19:55:56 compute-0 boring_moser[118304]:         }
Nov 24 19:55:56 compute-0 boring_moser[118304]:     ],
Nov 24 19:55:56 compute-0 boring_moser[118304]:     "2": [
Nov 24 19:55:56 compute-0 boring_moser[118304]:         {
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "devices": [
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "/dev/loop5"
Nov 24 19:55:56 compute-0 boring_moser[118304]:             ],
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_name": "ceph_lv2",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_size": "21470642176",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "name": "ceph_lv2",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "tags": {
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.cluster_name": "ceph",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.crush_device_class": "",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.encrypted": "0",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osd_id": "2",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.type": "block",
Nov 24 19:55:56 compute-0 boring_moser[118304]:                 "ceph.vdo": "0"
Nov 24 19:55:56 compute-0 boring_moser[118304]:             },
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "type": "block",
Nov 24 19:55:56 compute-0 boring_moser[118304]:             "vg_name": "ceph_vg2"
Nov 24 19:55:56 compute-0 boring_moser[118304]:         }
Nov 24 19:55:56 compute-0 boring_moser[118304]:     ]
Nov 24 19:55:56 compute-0 boring_moser[118304]: }
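[editor's note] The JSON block above is a ceph-volume LVM inventory: a top-level object keyed by OSD id, each holding a list of logical-volume records whose ceph.* tags tie the LV back to the cluster fsid and OSD fsid. A minimal sketch of folding it into a per-OSD summary, assuming the block were captured to a file (lvm_list.json is a hypothetical name; the field names are taken from the output above):

import json

# Parse the `ceph-volume lvm list --format json` style output shown above.
with open("lvm_list.json") as fh:
    osds = json.load(fh)  # top level is keyed by OSD id ("1", "2", ...)

for osd_id, lvs in sorted(osds.items()):
    for lv in lvs:
        tags = lv["tags"]
        print(
            f"osd.{osd_id}: {lv['lv_path']} "
            f"on {','.join(lv['devices'])}, "
            f"osd_fsid={tags['ceph.osd_fsid']}, "
            f"encrypted={tags['ceph.encrypted']}"
        )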
Nov 24 19:55:56 compute-0 systemd[1]: libpod-7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2.scope: Deactivated successfully.
Nov 24 19:55:56 compute-0 podman[118286]: 2025-11-24 19:55:56.08353032 +0000 UTC m=+1.093356752 container died 7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:55:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:56.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a12bdde51a665473580ef37f48f1aba26bac95fc57869537a0cef449abeb7ce4-merged.mount: Deactivated successfully.
Nov 24 19:55:56 compute-0 podman[118286]: 2025-11-24 19:55:56.263769546 +0000 UTC m=+1.273595968 container remove 7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 24 19:55:56 compute-0 systemd[1]: libpod-conmon-7abef124859377bee8cd37f23149dfe9ad1334bac1fe376bd9171a7a5df8ade2.scope: Deactivated successfully.
Nov 24 19:55:56 compute-0 sudo[118178]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:56 compute-0 sudo[118328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:56 compute-0 sudo[118328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:56 compute-0 sudo[118328]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 271 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:55:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:56 compute-0 sudo[118353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:55:56 compute-0 sudo[118353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:56 compute-0 sudo[118353]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:56 compute-0 sudo[118378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:56 compute-0 sudo[118378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:56 compute-0 sudo[118378]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:56 compute-0 sudo[118403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:55:56 compute-0 sudo[118403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:56 compute-0 ceph-mon[75677]: 7.18 scrub starts
Nov 24 19:55:56 compute-0 ceph-mon[75677]: 7.18 scrub ok
Nov 24 19:55:56 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:56 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 271 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:55:56 compute-0 ceph-mon[75677]: pgmap v368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:56.842+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
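[editor's note] From here on, osd.0 and osd.1 repeat the same get_health_metrics warnings roughly once per second (1 slow op against pool 'vms', 18 against 'default.rgw.log'), and ceph-mon rolls them up into the SLOW_OPS health check. A minimal sketch for surfacing the same condition out-of-band, assuming the ceph CLI and an admin keyring are available on the host; the "status"/"checks" field names are an assumption based on the post-Luminous JSON health format:

import json
import subprocess

# Ask the cluster for structured health detail and pick out SLOW_OPS.
out = subprocess.run(
    ["ceph", "health", "detail", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
health = json.loads(out)

slow = health.get("checks", {}).get("SLOW_OPS")
if slow:
    print("SLOW_OPS:", slow["summary"]["message"])
else:
    print("no slow ops;", health.get("status"))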
Nov 24 19:55:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:57.167+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.189707298 +0000 UTC m=+0.102989826 container create 88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.13447383 +0000 UTC m=+0.047756438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:55:57 compute-0 systemd[1]: Started libpod-conmon-88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2.scope.
Nov 24 19:55:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.334059787 +0000 UTC m=+0.247342365 container init 88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.348308811 +0000 UTC m=+0.261591339 container start 88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:55:57 compute-0 mystifying_wilson[118483]: 167 167
Nov 24 19:55:57 compute-0 systemd[1]: libpod-88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2.scope: Deactivated successfully.
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.362064871 +0000 UTC m=+0.275347499 container attach 88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.362684668 +0000 UTC m=+0.275967226 container died 88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-074c3f3b8dfa44382ed3abbf2b43f80fc2448926ed8bd4bd1a08042fd639eada-merged.mount: Deactivated successfully.
Nov 24 19:55:57 compute-0 podman[118467]: 2025-11-24 19:55:57.484751225 +0000 UTC m=+0.398033793 container remove 88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:55:57 compute-0 systemd[1]: libpod-conmon-88bbeb8c695c99e84e1045db57c7fae2019a2dc285d26e9ba095014ced089ce2.scope: Deactivated successfully.
Nov 24 19:55:57 compute-0 podman[118507]: 2025-11-24 19:55:57.723291902 +0000 UTC m=+0.086366608 container create 48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meitner, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:55:57 compute-0 podman[118507]: 2025-11-24 19:55:57.673754937 +0000 UTC m=+0.036829723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:55:57 compute-0 systemd[1]: Started libpod-conmon-48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581.scope.
Nov 24 19:55:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:55:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:57.832+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1e92bee3d3662b6ccf872181e28092d67d8d2c6b8e8f64267bf389dff34915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1e92bee3d3662b6ccf872181e28092d67d8d2c6b8e8f64267bf389dff34915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1e92bee3d3662b6ccf872181e28092d67d8d2c6b8e8f64267bf389dff34915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1e92bee3d3662b6ccf872181e28092d67d8d2c6b8e8f64267bf389dff34915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:55:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:57 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:57 compute-0 podman[118507]: 2025-11-24 19:55:57.88286263 +0000 UTC m=+0.245937366 container init 48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 19:55:57 compute-0 podman[118507]: 2025-11-24 19:55:57.891238636 +0000 UTC m=+0.254313342 container start 48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meitner, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:55:57 compute-0 podman[118507]: 2025-11-24 19:55:57.900272749 +0000 UTC m=+0.263347475 container attach 48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:55:58 compute-0 sshd-session[118529]: Accepted publickey for zuul from 192.168.122.30 port 46524 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:55:58 compute-0 systemd-logind[795]: New session 37 of user zuul.
Nov 24 19:55:58 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 24 19:55:58 compute-0 sshd-session[118529]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:55:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 24 19:55:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:58.131+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 24 19:55:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:58.819+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:58 compute-0 ceph-mon[75677]: 9.15 scrub starts
Nov 24 19:55:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:58 compute-0 ceph-mon[75677]: 9.15 scrub ok
Nov 24 19:55:58 compute-0 ceph-mon[75677]: pgmap v369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:55:59 compute-0 brave_meitner[118524]: {
Nov 24 19:55:59 compute-0 brave_meitner[118524]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "osd_id": 2,
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "type": "bluestore"
Nov 24 19:55:59 compute-0 brave_meitner[118524]:     },
Nov 24 19:55:59 compute-0 brave_meitner[118524]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "osd_id": 1,
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "type": "bluestore"
Nov 24 19:55:59 compute-0 brave_meitner[118524]:     },
Nov 24 19:55:59 compute-0 brave_meitner[118524]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "osd_id": 0,
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:55:59 compute-0 brave_meitner[118524]:         "type": "bluestore"
Nov 24 19:55:59 compute-0 brave_meitner[118524]:     }
Nov 24 19:55:59 compute-0 brave_meitner[118524]: }
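[editor's note] This JSON is the result of the cephadm-wrapped `ceph-volume ... raw list --format json` command sudo'd at 19:55:56 above: a map keyed by osd_uuid, each entry carrying the backing device-mapper path, OSD id, cluster fsid, and bluestore type. A minimal sketch of inverting it into an osd_id-to-device map, assuming the block were saved to raw_list.json (hypothetical name; keys match the output above):

import json

# Parse the `ceph-volume raw list --format json` style output shown above.
with open("raw_list.json") as fh:
    raw = json.load(fh)  # top level is keyed by osd_uuid

by_id = {entry["osd_id"]: entry["device"] for entry in raw.values()}
for osd_id in sorted(by_id):
    print(f"osd.{osd_id} -> {by_id[osd_id]}")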
Nov 24 19:55:59 compute-0 systemd[1]: libpod-48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581.scope: Deactivated successfully.
Nov 24 19:55:59 compute-0 systemd[1]: libpod-48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581.scope: Consumed 1.124s CPU time.
Nov 24 19:55:59 compute-0 podman[118507]: 2025-11-24 19:55:59.045472788 +0000 UTC m=+1.408547554 container died 48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meitner, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 24 19:55:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 24 19:55:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:55:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:55:59.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f1e92bee3d3662b6ccf872181e28092d67d8d2c6b8e8f64267bf389dff34915-merged.mount: Deactivated successfully.
Nov 24 19:55:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 24 19:55:59 compute-0 podman[118507]: 2025-11-24 19:55:59.252253077 +0000 UTC m=+1.615327853 container remove 48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meitner, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:55:59 compute-0 systemd[1]: libpod-conmon-48d9e0b4ba465110ba99338f937b6fc7ae5fff9efd23eb345f5a43dd0e4c5581.scope: Deactivated successfully.
Nov 24 19:55:59 compute-0 sudo[118403]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:59 compute-0 python3.9[118711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:55:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:55:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:55:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:55:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:55:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7ac7a7f0-31db-4278-83a8-84bb1315fc0f does not exist
Nov 24 19:55:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9fcb7aa0-2689-4eea-b2c2-749135ecc670 does not exist
Nov 24 19:55:59 compute-0 sudo[118732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:55:59 compute-0 sudo[118732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:59 compute-0 sudo[118732]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:59 compute-0 sudo[118757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:55:59 compute-0 sudo[118757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:55:59 compute-0 sudo[118757]: pam_unix(sudo:session): session closed for user root
Nov 24 19:55:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:55:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:55:59.825+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:55:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 24 19:55:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 24 19:56:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:00.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:00 compute-0 ceph-mon[75677]: 9.1f scrub starts
Nov 24 19:56:00 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:00 compute-0 ceph-mon[75677]: 9.1f scrub ok
Nov 24 19:56:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:56:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:56:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:00 compute-0 python3.9[118931]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:56:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:00.814+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:01.193+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:01 compute-0 ceph-mon[75677]: 8.1d scrub starts
Nov 24 19:56:01 compute-0 ceph-mon[75677]: 8.1d scrub ok
Nov 24 19:56:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:01 compute-0 ceph-mon[75677]: pgmap v370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 281 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 24 19:56:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:01.851+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 24 19:56:01 compute-0 python3.9[119124]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:56:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:02.173+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:02 compute-0 sshd-session[118532]: Connection closed by 192.168.122.30 port 46524
Nov 24 19:56:02 compute-0 sshd-session[118529]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:56:02 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 24 19:56:02 compute-0 systemd[1]: session-37.scope: Consumed 3.066s CPU time.
Nov 24 19:56:02 compute-0 systemd-logind[795]: Session 37 logged out. Waiting for processes to exit.
Nov 24 19:56:02 compute-0 systemd-logind[795]: Removed session 37.
Nov 24 19:56:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:02 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:02 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 281 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:02.871+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:03.199+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:03 compute-0 ceph-mon[75677]: 9.1d scrub starts
Nov 24 19:56:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:03 compute-0 ceph-mon[75677]: 9.1d scrub ok
Nov 24 19:56:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:03 compute-0 ceph-mon[75677]: pgmap v371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:03.888+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:04.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:04 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:04.843+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 24 19:56:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 24 19:56:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:05.226+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:05 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:05 compute-0 ceph-mon[75677]: pgmap v372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:05.887+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:06.207+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 286 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:06 compute-0 ceph-mon[75677]: 9.1b scrub starts
Nov 24 19:56:06 compute-0 ceph-mon[75677]: 9.1b scrub ok
Nov 24 19:56:06 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:06.902+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:07.190+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:07 compute-0 ceph-mon[75677]: pgmap v373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:07 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 286 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:07 compute-0 sshd-session[119150]: Accepted publickey for zuul from 192.168.122.30 port 47878 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:56:07 compute-0 systemd-logind[795]: New session 38 of user zuul.
Nov 24 19:56:07 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 24 19:56:07 compute-0 sshd-session[119150]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:56:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:07.858+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:08.221+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:08 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:08 compute-0 ceph-mon[75677]: pgmap v374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 24 19:56:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:08.853+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 24 19:56:08 compute-0 python3.9[119303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:56:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:09.266+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:09 compute-0 ceph-mon[75677]: 9.1 scrub starts
Nov 24 19:56:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:09 compute-0 ceph-mon[75677]: 9.1 scrub ok
Nov 24 19:56:09 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:09.859+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:10 compute-0 python3.9[119457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:56:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:10.277+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:10 compute-0 sshd-session[118693]: Connection closed by authenticating user ftp 27.79.44.141 port 58318 [preauth]
Nov 24 19:56:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:10 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:10 compute-0 ceph-mon[75677]: pgmap v375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:10.866+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:11 compute-0 sudo[119611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydgbdvjvdxytskqdvfigcmocedoesjzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014170.6449754-40-19855987889606/AnsiballZ_setup.py'
Nov 24 19:56:11 compute-0 sudo[119611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:11.283+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:11 compute-0 python3.9[119613]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:56:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:11 compute-0 sudo[119611]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:11.863+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:12 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:12 compute-0 sudo[119695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laznbhvpxagrwkxdplvnrvdtccpqvrqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014170.6449754-40-19855987889606/AnsiballZ_dnf.py'
Nov 24 19:56:12 compute-0 sudo[119695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:12.328+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:12 compute-0 python3.9[119697]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:56:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:12.905+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:13 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:13 compute-0 ceph-mon[75677]: pgmap v376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:13.326+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:13.857+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:13 compute-0 sudo[119695]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:14.338+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:14 compute-0 sudo[119848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witxpotssovwzjpowgzsdssxicvekxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014174.179329-52-92007593595153/AnsiballZ_setup.py'
Nov 24 19:56:14 compute-0 sudo[119848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:14.846+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:14 compute-0 python3.9[119850]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:56:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:15 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:15 compute-0 ceph-mon[75677]: pgmap v377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:15.333+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:15 compute-0 sudo[119848]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:15.811+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:16 compute-0 sudo[120043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pokqrcgtxybqpizqrtosdpywbpugcekr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014175.655233-63-5338932823444/AnsiballZ_file.py'
Nov 24 19:56:16 compute-0 sudo[120043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:16 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:16.372+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:16 compute-0 python3.9[120045]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:16 compute-0 sudo[120043]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 291 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:16.812+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:17 compute-0 sudo[120195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sazpjqdegmfveuqtkzfmqaedribirlyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014176.6215785-71-233720279893176/AnsiballZ_command.py'
Nov 24 19:56:17 compute-0 sudo[120195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:17 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 291 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:17 compute-0 ceph-mon[75677]: pgmap v378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:17.354+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:17 compute-0 python3.9[120197]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:56:17 compute-0 sudo[120195]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:17.802+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:18.374+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:18 compute-0 sudo[120360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsfhwpqzylpdljjhgzstfungglveoqba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014177.8096204-79-215741996348007/AnsiballZ_stat.py'
Nov 24 19:56:18 compute-0 sudo[120360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:18 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:18 compute-0 python3.9[120362]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:56:18 compute-0 sudo[120360]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:18.754+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:19.366+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:19 compute-0 ceph-mon[75677]: pgmap v379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:19 compute-0 sudo[120438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otnipackybqefehbubaweefnpdqnytro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014177.8096204-79-215741996348007/AnsiballZ_file.py'
Nov 24 19:56:19 compute-0 sudo[120438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:19.709+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:19 compute-0 python3.9[120440]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:19 compute-0 sudo[120438]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:20.319+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:20 compute-0 sudo[120590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxvddvbewvatxgdswufrpowquykvivky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014179.9503496-91-178971588552982/AnsiballZ_stat.py'
Nov 24 19:56:20 compute-0 sudo[120590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:20 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:20 compute-0 python3.9[120592]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:56:20 compute-0 sudo[120590]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:20.724+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:20 compute-0 sudo[120668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hckelulhaxzexmjcojagngpqvmgylavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014179.9503496-91-178971588552982/AnsiballZ_file.py'
Nov 24 19:56:20 compute-0 sudo[120668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:21 compute-0 python3.9[120670]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:56:21 compute-0 sudo[120668]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:21.335+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 301 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:21 compute-0 ceph-mon[75677]: pgmap v380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:21 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 301 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 24 19:56:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:21.722+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 24 19:56:21 compute-0 sudo[120820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixsewlvownozqsgcnvazjlcmujcaxgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014181.427148-104-189495865842906/AnsiballZ_ini_file.py'
Nov 24 19:56:21 compute-0 sudo[120820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:22 compute-0 python3.9[120822]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:56:22 compute-0 sudo[120820]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:22.320+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 24 19:56:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:22.725+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 24 19:56:22 compute-0 sudo[120972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eypuunutjxwiwtrzrcmeldumgphprzyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014182.4115374-104-143424601806684/AnsiballZ_ini_file.py'
Nov 24 19:56:22 compute-0 sudo[120972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:23 compute-0 python3.9[120974]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:56:23 compute-0 sudo[120972]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:23.352+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:23 compute-0 sudo[121124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fexhqzzvadvhvigyqkeyabjymfeyusvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014183.2095125-104-26852608959096/AnsiballZ_ini_file.py'
Nov 24 19:56:23 compute-0 sudo[121124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 9.3 scrub starts
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 9.3 scrub ok
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:23 compute-0 ceph-mon[75677]: pgmap v381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 9.d scrub starts
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 9.d scrub ok
Nov 24 19:56:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:23.733+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:23 compute-0 python3.9[121126]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:56:23 compute-0 sudo[121124]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:24.309+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:56:24
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta']
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:56:24 compute-0 sudo[121276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvlqitejkpmmahuhrvnlmbssoomjwvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014184.0869505-104-165251011348657/AnsiballZ_ini_file.py'
Nov 24 19:56:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:24 compute-0 sudo[121276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:24.713+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:24 compute-0 python3.9[121278]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:56:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:24 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:24 compute-0 ceph-mon[75677]: pgmap v382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:24 compute-0 sudo[121276]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:25.312+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:25 compute-0 sudo[121428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekzrdsgdjuixgyivhukczewqjufjlhnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014185.057-135-161504076197013/AnsiballZ_dnf.py'
Nov 24 19:56:25 compute-0 sudo[121428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 24 19:56:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:25.708+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:25 compute-0 python3.9[121430]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:56:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 24 19:56:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:26.333+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:26.673+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 24 19:56:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 24 19:56:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 306 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:26 compute-0 ceph-mon[75677]: 9.9 scrub starts
Nov 24 19:56:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:26 compute-0 ceph-mon[75677]: 9.9 scrub ok
Nov 24 19:56:26 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:26 compute-0 ceph-mon[75677]: pgmap v383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:27 compute-0 sudo[121428]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:27.298+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:27.674+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:27 compute-0 sudo[121581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsnkyxzdhlfoshekzalryugpnngglkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014187.4504268-146-130753527559325/AnsiballZ_setup.py'
Nov 24 19:56:27 compute-0 sudo[121581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:27 compute-0 ceph-mon[75677]: 9.5 scrub starts
Nov 24 19:56:27 compute-0 ceph-mon[75677]: 9.5 scrub ok
Nov 24 19:56:27 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 306 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:28 compute-0 python3.9[121583]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:56:28 compute-0 sudo[121581]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:28.329+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:28.706+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:28 compute-0 sudo[121735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsxovexcvabwmzhchketpsdbrrduzozi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014188.457858-154-67792136802636/AnsiballZ_stat.py'
Nov 24 19:56:28 compute-0 sudo[121735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:28 compute-0 ceph-mon[75677]: pgmap v384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:29 compute-0 python3.9[121737]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:56:29 compute-0 sudo[121735]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:29.344+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:29.661+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:29 compute-0 sudo[121887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wujriinorxvoczjgsuxzphvpipzelgtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014189.3609378-163-138065556206398/AnsiballZ_stat.py'
Nov 24 19:56:29 compute-0 sudo[121887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:29 compute-0 python3.9[121889]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:56:30 compute-0 sudo[121887]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:30 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:30.332+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:30.635+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 24 19:56:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 24 19:56:30 compute-0 sudo[122039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmmxuyavpkxmjfalgxlhlpqgpdgaftaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014190.3211594-173-110999577441626/AnsiballZ_command.py'
Nov 24 19:56:30 compute-0 sudo[122039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:30 compute-0 python3.9[122041]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:56:30 compute-0 sudo[122039]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:31 compute-0 ceph-mon[75677]: pgmap v385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:31.344+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:31.675+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:31 compute-0 sudo[122192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siryanjqoddbaonehhvjokcvuwiaityb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014191.1607893-183-8647409164452/AnsiballZ_service_facts.py'
Nov 24 19:56:31 compute-0 sudo[122192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:31 compute-0 python3.9[122194]: ansible-service_facts Invoked
Nov 24 19:56:32 compute-0 network[122211]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 19:56:32 compute-0 network[122212]: 'network-scripts' will be removed from distribution in near future.
Nov 24 19:56:32 compute-0 network[122213]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 19:56:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:32.328+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:32 compute-0 ceph-mon[75677]: 9.11 scrub starts
Nov 24 19:56:32 compute-0 ceph-mon[75677]: 9.11 scrub ok
Nov 24 19:56:32 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 24 19:56:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:32.661+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 24 19:56:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:33.318+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:33 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:33 compute-0 ceph-mon[75677]: pgmap v386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:33.711+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 24 19:56:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:56:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:34.363+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:34 compute-0 ceph-mon[75677]: 9.b scrub starts
Nov 24 19:56:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:34 compute-0 ceph-mon[75677]: 9.b scrub ok
Nov 24 19:56:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:34.727+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:35.346+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:35 compute-0 ceph-mon[75677]: 6.3 scrub starts
Nov 24 19:56:35 compute-0 ceph-mon[75677]: 6.3 scrub ok
Nov 24 19:56:35 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:35 compute-0 ceph-mon[75677]: pgmap v387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:35.727+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 24 19:56:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 24 19:56:36 compute-0 sudo[122192]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:36.332+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 311 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:36 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:36 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 311 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:36.706+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:37.302+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:37 compute-0 sudo[122498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kimwntjtsqqsrczzmpcjpnpjfmpcseug ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764014196.8646188-198-161450575479178/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764014196.8646188-198-161450575479178/args'
Nov 24 19:56:37 compute-0 sudo[122498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:37 compute-0 sudo[122498]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:37 compute-0 ceph-mon[75677]: 6.7 scrub starts
Nov 24 19:56:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:37 compute-0 ceph-mon[75677]: 6.7 scrub ok
Nov 24 19:56:37 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:37 compute-0 ceph-mon[75677]: pgmap v388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:37.687+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:38 compute-0 sudo[122665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiqfhehjfixhthebfdfesijmyaexecgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014197.8075197-209-269106553694685/AnsiballZ_dnf.py'
Nov 24 19:56:38 compute-0 sudo[122665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:38.285+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:38 compute-0 sshd-session[122348]: Invalid user admin from 27.79.44.141 port 59550
Nov 24 19:56:38 compute-0 python3.9[122667]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:56:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:38.649+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:38 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:38 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:38 compute-0 ceph-mon[75677]: pgmap v389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:39.252+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:39.610+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:39 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:39 compute-0 sudo[122665]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:40 compute-0 sshd-session[122348]: Connection closed by invalid user admin 27.79.44.141 port 59550 [preauth]
Nov 24 19:56:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:40.217+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:56:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:40.636+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:40 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:40 compute-0 ceph-mon[75677]: pgmap v390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:41 compute-0 sudo[122818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qshsmwcthhyjwdddctxksvhyepyyiutb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014200.30034-222-196632939750650/AnsiballZ_package_facts.py'
Nov 24 19:56:41 compute-0 sudo[122818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:41.255+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:41 compute-0 python3.9[122820]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 19:56:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 321 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:41 compute-0 sudo[122818]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:41.648+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:56:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1778 writes, 8536 keys, 1778 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1778 writes, 1778 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1778 writes, 8536 keys, 1778 commit groups, 1.0 writes per commit group, ingest: 10.56 MB, 0.02 MB/s
                                           Interval WAL: 1778 writes, 1778 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     65.4      0.12              0.03         2    0.061       0      0       0.0       0.0
                                             L6      1/0    7.90 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0     67.0     66.7      0.12              0.04         1    0.119    3941    290       0.0       0.0
                                            Sum      1/0    7.90 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     33.1     66.0      0.24              0.06         3    0.080    3941    290       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     33.4     66.5      0.24              0.06         2    0.119    3941    290       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     67.0     66.7      0.12              0.04         1    0.119    3941    290       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     66.3      0.12              0.03         1    0.119       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 308.00 MB usage: 330.38 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(22,270.38 KB,0.0857267%) FilterBlock(4,19.48 KB,0.00617783%) IndexBlock(4,40.52 KB,0.0128461%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 19:56:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:42 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 321 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:42.300+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:42 compute-0 sudo[122970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuekvdhgnbxfkasbkvbydudowjnhzbyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014202.1643655-232-225695784977673/AnsiballZ_stat.py'
Nov 24 19:56:42 compute-0 sudo[122970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:42.695+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:42 compute-0 python3.9[122972]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:56:42 compute-0 sudo[122970]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:43 compute-0 ceph-mon[75677]: pgmap v391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:43 compute-0 sudo[123048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faeykmlzdvpitgkipkuwhwauyebzcyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014202.1643655-232-225695784977673/AnsiballZ_file.py'
Nov 24 19:56:43 compute-0 sudo[123048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:43.262+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:43 compute-0 python3.9[123050]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:43 compute-0 sudo[123048]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:43.678+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:44 compute-0 sudo[123200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrmduaxdtiimhhdxjlkehziqkeuaili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014203.7062075-244-86851101589661/AnsiballZ_stat.py'
Nov 24 19:56:44 compute-0 sudo[123200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:44 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:44.249+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:44 compute-0 python3.9[123202]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:56:44 compute-0 sudo[123200]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:44.720+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:44 compute-0 sudo[123278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhmwuymdyribsnymefrvvwnvphhrryc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014203.7062075-244-86851101589661/AnsiballZ_file.py'
Nov 24 19:56:44 compute-0 sudo[123278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:44 compute-0 python3.9[123280]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:45 compute-0 sudo[123278]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:45 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:45 compute-0 ceph-mon[75677]: pgmap v392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:45.280+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:45.747+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 24 19:56:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 24 19:56:46 compute-0 sudo[123430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgtqjmechrbxpspjkgsgwmfqkcscmtvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014205.6047812-262-53839204204329/AnsiballZ_lineinfile.py'
Nov 24 19:56:46 compute-0 sudo[123430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:46.259+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:46 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:46 compute-0 python3.9[123432]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:46 compute-0 sudo[123430]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:46.778+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:47.230+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 326 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:47 compute-0 ceph-mon[75677]: 6.5 scrub starts
Nov 24 19:56:47 compute-0 ceph-mon[75677]: 6.5 scrub ok
Nov 24 19:56:47 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:47 compute-0 ceph-mon[75677]: pgmap v393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:47 compute-0 sudo[123582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxzxajdmnfykimxiqyscwtiewpaecju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014207.1288648-277-214537372946037/AnsiballZ_setup.py'
Nov 24 19:56:47 compute-0 sudo[123582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:47.796+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:47 compute-0 python3.9[123584]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:56:48 compute-0 sudo[123582]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:48.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:48 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:48 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 326 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:48.767+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:48 compute-0 sudo[123666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rethutaqrocimafzzaqzopyhqbjfaqwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014207.1288648-277-214537372946037/AnsiballZ_systemd.py'
Nov 24 19:56:48 compute-0 sudo[123666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:49 compute-0 python3.9[123668]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:56:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:49.198+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:49 compute-0 sudo[123666]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:49 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:49 compute-0 ceph-mon[75677]: pgmap v394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:49.786+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:49 compute-0 sshd-session[119153]: Connection closed by 192.168.122.30 port 47878
Nov 24 19:56:49 compute-0 sshd-session[119150]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:56:49 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 24 19:56:49 compute-0 systemd[1]: session-38.scope: Consumed 31.118s CPU time.
Nov 24 19:56:49 compute-0 systemd-logind[795]: Session 38 logged out. Waiting for processes to exit.
Nov 24 19:56:49 compute-0 systemd-logind[795]: Removed session 38.
Nov 24 19:56:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:50.237+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:50.813+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:51.249+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:51 compute-0 ceph-mon[75677]: pgmap v395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:51.768+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:52.287+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:52 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:52.723+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 24 19:56:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 24 19:56:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:53.286+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:53 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:53 compute-0 ceph-mon[75677]: pgmap v396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:53.751+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:54.328+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:56:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:54 compute-0 ceph-mon[75677]: 6.9 scrub starts
Nov 24 19:56:54 compute-0 ceph-mon[75677]: 6.9 scrub ok
Nov 24 19:56:54 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:54.730+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 24 19:56:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 24 19:56:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:55.327+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:55 compute-0 sshd-session[123695]: Accepted publickey for zuul from 192.168.122.30 port 46930 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:56:55 compute-0 systemd-logind[795]: New session 39 of user zuul.
Nov 24 19:56:55 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 24 19:56:55 compute-0 sshd-session[123695]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:56:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:55 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:55 compute-0 ceph-mon[75677]: pgmap v397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 24 19:56:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:55.694+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 24 19:56:56 compute-0 sudo[123848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lllggxpvgeomrueydfljjzervjazgzcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014215.6250067-22-217652836056981/AnsiballZ_file.py'
Nov 24 19:56:56 compute-0 sudo[123848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:56.335+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:56 compute-0 python3.9[123850]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:56 compute-0 sudo[123848]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 331 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:56:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:56 compute-0 ceph-mon[75677]: 6.a scrub starts
Nov 24 19:56:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:56 compute-0 ceph-mon[75677]: 6.a scrub ok
Nov 24 19:56:56 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:56 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 331 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:56:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:56.717+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 24 19:56:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 24 19:56:57 compute-0 sudo[124000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwwsnpyrubrlmkfigzulhvfcglzzstcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014216.681202-34-91118129674690/AnsiballZ_stat.py'
Nov 24 19:56:57 compute-0 sudo[124000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:57.380+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:57 compute-0 python3.9[124002]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:56:57 compute-0 sudo[124000]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:57 compute-0 ceph-mon[75677]: 9.16 scrub starts
Nov 24 19:56:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:57 compute-0 ceph-mon[75677]: 9.16 scrub ok
Nov 24 19:56:57 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:57 compute-0 ceph-mon[75677]: pgmap v398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 24 19:56:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:57.721+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 24 19:56:57 compute-0 sudo[124078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcovuqsegyksuuuqcdisepxcihguwedc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014216.681202-34-91118129674690/AnsiballZ_file.py'
Nov 24 19:56:57 compute-0 sudo[124078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:56:58 compute-0 python3.9[124080]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:56:58 compute-0 sudo[124078]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:58.392+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:58 compute-0 sshd-session[123698]: Connection closed by 192.168.122.30 port 46930
Nov 24 19:56:58 compute-0 sshd-session[123695]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:56:58 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 24 19:56:58 compute-0 systemd[1]: session-39.scope: Consumed 2.111s CPU time.
Nov 24 19:56:58 compute-0 systemd-logind[795]: Session 39 logged out. Waiting for processes to exit.
Nov 24 19:56:58 compute-0 systemd-logind[795]: Removed session 39.
Nov 24 19:56:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:58 compute-0 ceph-mon[75677]: 9.1c scrub starts
Nov 24 19:56:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:58 compute-0 ceph-mon[75677]: 9.1c scrub ok
Nov 24 19:56:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:58.672+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:56:59.423+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:56:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:59 compute-0 ceph-mon[75677]: 9.1e scrub starts
Nov 24 19:56:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:59 compute-0 ceph-mon[75677]: 9.1e scrub ok
Nov 24 19:56:59 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:56:59 compute-0 ceph-mon[75677]: pgmap v399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:56:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:56:59.655+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:56:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:56:59 compute-0 sudo[124105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:56:59 compute-0 sudo[124105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:56:59 compute-0 sudo[124105]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:59 compute-0 sudo[124130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:56:59 compute-0 sudo[124130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:56:59 compute-0 sudo[124130]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:59 compute-0 sudo[124155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:56:59 compute-0 sudo[124155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:56:59 compute-0 sudo[124155]: pam_unix(sudo:session): session closed for user root
Nov 24 19:56:59 compute-0 sudo[124180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:56:59 compute-0 sudo[124180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:00.470+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:00 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:00 compute-0 sudo[124180]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:00.626+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:57:00 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:57:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:57:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:57:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:57:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:57:00 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6b3b425b-b85b-425e-b354-453b56365f05 does not exist
Nov 24 19:57:00 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7babad9f-b639-467e-8bb3-9766c0db03e9 does not exist
Nov 24 19:57:00 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2b2a8b19-4cf1-4571-90a6-6fcd3ef639cb does not exist
Nov 24 19:57:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:57:00 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:57:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:57:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:57:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:57:00 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:57:00 compute-0 sudo[124236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:00 compute-0 sudo[124236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:00 compute-0 sudo[124236]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:00 compute-0 sudo[124261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:57:00 compute-0 sudo[124261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:00 compute-0 sudo[124261]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:00 compute-0 sudo[124286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:00 compute-0 sudo[124286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:00 compute-0 sudo[124286]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:00 compute-0 sudo[124311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:57:00 compute-0 sudo[124311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.363078659 +0000 UTC m=+0.050937592 container create a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_beaver, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.334902665 +0000 UTC m=+0.022761688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:57:01 compute-0 systemd[1]: Started libpod-conmon-a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6.scope.
Nov 24 19:57:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:01.428+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:57:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 341 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.503712001 +0000 UTC m=+0.191570994 container init a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.517223928 +0000 UTC m=+0.205082891 container start a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.521326769 +0000 UTC m=+0.209185742 container attach a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:57:01 compute-0 distracted_beaver[124389]: 167 167
Nov 24 19:57:01 compute-0 systemd[1]: libpod-a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6.scope: Deactivated successfully.
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.528430222 +0000 UTC m=+0.216289195 container died a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_beaver, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ddae1d5712583bb7ae7e30f2ec1160f82b1a747a42f2e7ab9f9a639842c521-merged.mount: Deactivated successfully.
Nov 24 19:57:01 compute-0 podman[124373]: 2025-11-24 19:57:01.578994443 +0000 UTC m=+0.266853376 container remove a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:57:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:01.586+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:01 compute-0 systemd[1]: libpod-conmon-a5e65c3e88327e8e338c21e98348908996b4c579b9d63ee7dffda205be3835a6.scope: Deactivated successfully.
Nov 24 19:57:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:01 compute-0 ceph-mon[75677]: pgmap v400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:57:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:57:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:57:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:57:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:57:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:57:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:01 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 341 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:01 compute-0 podman[124414]: 2025-11-24 19:57:01.792409829 +0000 UTC m=+0.065032514 container create 9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:57:01 compute-0 systemd[1]: Started libpod-conmon-9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042.scope.
Nov 24 19:57:01 compute-0 podman[124414]: 2025-11-24 19:57:01.770779972 +0000 UTC m=+0.043402647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:57:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f7f7b520883a8f1c142f806b0dc6362ea7f25387912fc3a68444a851962ddc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f7f7b520883a8f1c142f806b0dc6362ea7f25387912fc3a68444a851962ddc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f7f7b520883a8f1c142f806b0dc6362ea7f25387912fc3a68444a851962ddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f7f7b520883a8f1c142f806b0dc6362ea7f25387912fc3a68444a851962ddc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f7f7b520883a8f1c142f806b0dc6362ea7f25387912fc3a68444a851962ddc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:01 compute-0 podman[124414]: 2025-11-24 19:57:01.897621312 +0000 UTC m=+0.170244037 container init 9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:57:01 compute-0 podman[124414]: 2025-11-24 19:57:01.908730413 +0000 UTC m=+0.181353098 container start 9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 19:57:01 compute-0 podman[124414]: 2025-11-24 19:57:01.913742498 +0000 UTC m=+0.186365183 container attach 9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:57:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:02.460+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:02.587+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:02 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:03 compute-0 friendly_jackson[124431]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:57:03 compute-0 friendly_jackson[124431]: --> relative data size: 1.0
Nov 24 19:57:03 compute-0 friendly_jackson[124431]: --> All data devices are unavailable
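The three "-->" lines are ceph-volume's drive-group report (the phrasing matches ceph-volume's batch/report output): it was handed 0 physical and 3 LVM data devices and rejects all of them as unavailable, consistent with each LV already backing a deployed OSD (the lvm list output further below shows ceph.* tags for osd ids 0-2 on ceph_lv0..2). A sketch of the same availability check, assuming ceph-volume's documented inventory subcommand; "path", "available", and "rejected_reasons" are fields of its JSON output:

    import json
    import subprocess

    # List block devices the way ceph-volume sees them and collect the
    # unusable ones together with the reasons ceph-volume gives.
    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    rejected = {d["path"]: d["rejected_reasons"] for d in inv if not d["available"]}
    print(rejected)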
Nov 24 19:57:03 compute-0 systemd[1]: libpod-9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042.scope: Deactivated successfully.
Nov 24 19:57:03 compute-0 systemd[1]: libpod-9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042.scope: Consumed 1.224s CPU time.
Nov 24 19:57:03 compute-0 podman[124414]: 2025-11-24 19:57:03.180275168 +0000 UTC m=+1.452897863 container died 9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-39f7f7b520883a8f1c142f806b0dc6362ea7f25387912fc3a68444a851962ddc-merged.mount: Deactivated successfully.
Nov 24 19:57:03 compute-0 podman[124414]: 2025-11-24 19:57:03.257928854 +0000 UTC m=+1.530551509 container remove 9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 19:57:03 compute-0 systemd[1]: libpod-conmon-9060701f891272cbc086f9871cc93548624c53f043a449d2b7601229d9aed042.scope: Deactivated successfully.
Nov 24 19:57:03 compute-0 sudo[124311]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:03 compute-0 sudo[124474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:03 compute-0 sudo[124474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:03 compute-0 sudo[124474]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:03.485+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:03 compute-0 sudo[124499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:57:03 compute-0 sudo[124499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:03 compute-0 sudo[124499]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:03.594+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:03 compute-0 sudo[124524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:03 compute-0 sudo[124524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:03 compute-0 sudo[124524]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:03 compute-0 ceph-mon[75677]: pgmap v401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:03 compute-0 sudo[124549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:57:03 compute-0 sudo[124549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.179049368 +0000 UTC m=+0.077696758 container create 98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:57:04 compute-0 systemd[1]: Started libpod-conmon-98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1.scope.
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.148332525 +0000 UTC m=+0.046979985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:57:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.285903175 +0000 UTC m=+0.184550605 container init 98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.297193731 +0000 UTC m=+0.195841091 container start 98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.301622502 +0000 UTC m=+0.200269942 container attach 98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 19:57:04 compute-0 loving_bell[124634]: 167 167
Nov 24 19:57:04 compute-0 systemd[1]: libpod-98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1.scope: Deactivated successfully.
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.304441668 +0000 UTC m=+0.203089028 container died 98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bell, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 19:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d2937b44e8201a58e942ece45d3abf2ece98038c347048241fd7eafb0f0388-merged.mount: Deactivated successfully.
Nov 24 19:57:04 compute-0 podman[124617]: 2025-11-24 19:57:04.353801656 +0000 UTC m=+0.252449016 container remove 98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
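The "167 167" printed by loving_bell above looks like cephadm's uid/gid probe: it stats a path inside the target image to learn which uid:gid should own the daemon directories, and 167:167 is the ceph user and group in the upstream packages and images. A sketch of an equivalent probe under that assumption, using the image digest from the log; the probed path /var/lib/ceph is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container and report the owner of /var/lib/ceph inside
    # the image (path assumed; the "167 167" above suggests ceph:ceph).
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.split()
    print(uid_gid)  # expected: ['167', '167']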
Nov 24 19:57:04 compute-0 sshd-session[124637]: Accepted publickey for zuul from 192.168.122.30 port 42228 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:57:04 compute-0 systemd-logind[795]: New session 40 of user zuul.
Nov 24 19:57:04 compute-0 systemd[1]: libpod-conmon-98a70df147d4959c6e2432b44b12982fe3d2b627c674fa177225dee242fd1ce1.scope: Deactivated successfully.
Nov 24 19:57:04 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 24 19:57:04 compute-0 sshd-session[124637]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:57:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:04.445+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:04.571+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:04 compute-0 podman[124683]: 2025-11-24 19:57:04.584666046 +0000 UTC m=+0.062755843 container create 120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 19:57:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:04 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:04 compute-0 systemd[1]: Started libpod-conmon-120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74.scope.
Nov 24 19:57:04 compute-0 podman[124683]: 2025-11-24 19:57:04.552573715 +0000 UTC m=+0.030663512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:57:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d465cd97c1862f52fc7046f2f738fcde305c2a048c69602db0416a908d2165f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d465cd97c1862f52fc7046f2f738fcde305c2a048c69602db0416a908d2165f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d465cd97c1862f52fc7046f2f738fcde305c2a048c69602db0416a908d2165f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d465cd97c1862f52fc7046f2f738fcde305c2a048c69602db0416a908d2165f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:04 compute-0 podman[124683]: 2025-11-24 19:57:04.735840255 +0000 UTC m=+0.213930092 container init 120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:57:04 compute-0 podman[124683]: 2025-11-24 19:57:04.748930989 +0000 UTC m=+0.227020756 container start 120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:57:04 compute-0 podman[124683]: 2025-11-24 19:57:04.75262154 +0000 UTC m=+0.230711397 container attach 120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:57:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:05.488+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:05 compute-0 cranky_curie[124730]: {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:     "0": [
Nov 24 19:57:05 compute-0 cranky_curie[124730]:         {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "devices": [
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "/dev/loop3"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             ],
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_name": "ceph_lv0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_size": "21470642176",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "name": "ceph_lv0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "tags": {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cluster_name": "ceph",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.crush_device_class": "",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.encrypted": "0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osd_id": "0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.type": "block",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.vdo": "0"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             },
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "type": "block",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "vg_name": "ceph_vg0"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:         }
Nov 24 19:57:05 compute-0 cranky_curie[124730]:     ],
Nov 24 19:57:05 compute-0 cranky_curie[124730]:     "1": [
Nov 24 19:57:05 compute-0 cranky_curie[124730]:         {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "devices": [
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "/dev/loop4"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             ],
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_name": "ceph_lv1",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_size": "21470642176",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "name": "ceph_lv1",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "tags": {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cluster_name": "ceph",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.crush_device_class": "",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.encrypted": "0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osd_id": "1",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.type": "block",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.vdo": "0"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             },
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "type": "block",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "vg_name": "ceph_vg1"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:         }
Nov 24 19:57:05 compute-0 cranky_curie[124730]:     ],
Nov 24 19:57:05 compute-0 cranky_curie[124730]:     "2": [
Nov 24 19:57:05 compute-0 cranky_curie[124730]:         {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "devices": [
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "/dev/loop5"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             ],
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_name": "ceph_lv2",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_size": "21470642176",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "name": "ceph_lv2",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "tags": {
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.cluster_name": "ceph",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.crush_device_class": "",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.encrypted": "0",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osd_id": "2",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.type": "block",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:                 "ceph.vdo": "0"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             },
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "type": "block",
Nov 24 19:57:05 compute-0 cranky_curie[124730]:             "vg_name": "ceph_vg2"
Nov 24 19:57:05 compute-0 cranky_curie[124730]:         }
Nov 24 19:57:05 compute-0 cranky_curie[124730]:     ]
Nov 24 19:57:05 compute-0 cranky_curie[124730]: }
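The JSON block above is the result of the "ceph-volume lvm list --format json" call dispatched at 19:57:03: one key per OSD id, each entry carrying the backing LV, its physical device (/dev/loop3..5 here), and the ceph.* LV tags that tie the volume to this cluster's fsid. A short sketch that reduces the same output to an osd-id to device map; the command is the one from the log, and only the field names visible above are assumed:

    import json
    import subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    # Each value is a list of LVs; for plain bluestore OSDs there is a single
    # "block" entry whose tags carry the OSD fsid and the cluster fsid.
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])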
Nov 24 19:57:05 compute-0 systemd[1]: libpod-120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74.scope: Deactivated successfully.
Nov 24 19:57:05 compute-0 podman[124683]: 2025-11-24 19:57:05.536330569 +0000 UTC m=+1.014420356 container died 120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:57:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:05.537+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d465cd97c1862f52fc7046f2f738fcde305c2a048c69602db0416a908d2165f-merged.mount: Deactivated successfully.
Nov 24 19:57:05 compute-0 python3.9[124832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
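This is Zuul's Ansible run gathering a restricted fact subset on the host: gather_subset=['!all', '!min', 'local'] drops everything except the local custom facts read from /etc/ansible/facts.d. A sketch of the equivalent ad-hoc invocation, assuming ansible-core is installed; the module name and arguments are the ones recorded in the journal line above:

    import subprocess

    # Same module and gather_subset as the journal records, invoked ad hoc
    # against localhost instead of from Zuul's playbook. No shell is involved,
    # so the '!' characters need no quoting.
    subprocess.run(
        ["ansible", "localhost", "-m", "ansible.builtin.setup",
         "-a", "gather_subset=!all,!min,local fact_path=/etc/ansible/facts.d"],
        check=True)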
Nov 24 19:57:05 compute-0 podman[124683]: 2025-11-24 19:57:05.621531669 +0000 UTC m=+1.099621436 container remove 120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 19:57:05 compute-0 systemd[1]: libpod-conmon-120c465bead3ff08ceced8a5a73ab5a7abfee0194d008dbc9a63ec5df3c1ab74.scope: Deactivated successfully.
Nov 24 19:57:05 compute-0 ceph-mon[75677]: pgmap v402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:05 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:05 compute-0 sudo[124549]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:05 compute-0 sudo[124852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:05 compute-0 sudo[124852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:05 compute-0 sudo[124852]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:05 compute-0 sudo[124877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:57:05 compute-0 sudo[124877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:05 compute-0 sudo[124877]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:05 compute-0 sudo[124902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:05 compute-0 sudo[124902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:05 compute-0 sudo[124902]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:06 compute-0 sudo[124951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:57:06 compute-0 sudo[124951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
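This second cephadm call runs "ceph-volume raw list --format json", which scans block devices for bluestore labels directly and keys its output by OSD uuid rather than OSD id; its output appears further below under the gallant_pare container name. A sketch cross-checking it against the lvm listing; both subcommands appear verbatim in the log, and the field names ("osd_id", "device", "ceph_fsid") are the ones visible in the output below:

    import json
    import subprocess

    def vol(kind):
        # kind is "lvm" or "raw"; both list subcommands are dispatched above.
        out = subprocess.run(["ceph-volume", kind, "list", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    raw = vol("raw")   # keyed by OSD uuid, e.g. "720ccdfc-..."
    lvm = vol("lvm")   # keyed by OSD id, e.g. "0", "1", "2"
    for uuid, entry in raw.items():
        osd_id = str(entry["osd_id"])
        backing = "lvm-backed" if osd_id in lvm else "raw-only"
        print(osd_id, entry["device"], entry["ceph_fsid"], backing)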
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.467166077 +0000 UTC m=+0.059976368 container create c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 19:57:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:06.479+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:06 compute-0 systemd[1]: Started libpod-conmon-c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d.scope.
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.439210949 +0000 UTC m=+0.032021290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:57:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:06.535+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.575794872 +0000 UTC m=+0.168605173 container init c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.588582218 +0000 UTC m=+0.181392479 container start c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.594891179 +0000 UTC m=+0.187701440 container attach c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 19:57:06 compute-0 elastic_bartik[125129]: 167 167
Nov 24 19:57:06 compute-0 systemd[1]: libpod-c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d.scope: Deactivated successfully.
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.599560966 +0000 UTC m=+0.192371247 container died c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f238ec00f0c2169d90999d24aeb74e06418de6d74ce2394ae9aae7011c8b0892-merged.mount: Deactivated successfully.
Nov 24 19:57:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 346 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:06 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:06 compute-0 podman[125080]: 2025-11-24 19:57:06.662518433 +0000 UTC m=+0.255328724 container remove c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:57:06 compute-0 sudo[125169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgaijfviycnpjnburctctnnogtodxvmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014226.111317-33-81149340765749/AnsiballZ_file.py'
Nov 24 19:57:06 compute-0 sudo[125169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:06 compute-0 systemd[1]: libpod-conmon-c0c46aed9bfe11b65784836c4f257f334d9d600e9bc011b59b6298076236115d.scope: Deactivated successfully.
Nov 24 19:57:06 compute-0 python3.9[125176]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:06 compute-0 podman[125184]: 2025-11-24 19:57:06.888263213 +0000 UTC m=+0.060205382 container create 57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:57:06 compute-0 sudo[125169]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:06 compute-0 systemd[1]: Started libpod-conmon-57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185.scope.
Nov 24 19:57:06 compute-0 podman[125184]: 2025-11-24 19:57:06.859005151 +0000 UTC m=+0.030947370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:57:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11740129b1c0efd777abe29f51bc1b217356c867e9966a2e15497bb73b9aa7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11740129b1c0efd777abe29f51bc1b217356c867e9966a2e15497bb73b9aa7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11740129b1c0efd777abe29f51bc1b217356c867e9966a2e15497bb73b9aa7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11740129b1c0efd777abe29f51bc1b217356c867e9966a2e15497bb73b9aa7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:57:07 compute-0 podman[125184]: 2025-11-24 19:57:07.005139533 +0000 UTC m=+0.177081682 container init 57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 19:57:07 compute-0 podman[125184]: 2025-11-24 19:57:07.019343608 +0000 UTC m=+0.191285747 container start 57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:57:07 compute-0 podman[125184]: 2025-11-24 19:57:07.027562051 +0000 UTC m=+0.199504230 container attach 57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 19:57:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:07.481+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:07.514+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:07 compute-0 ceph-mon[75677]: pgmap v403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:07 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 346 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:07 compute-0 sudo[125386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyphpmbrofekyzypfqascpzuyrredspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014227.1430457-41-44202458925361/AnsiballZ_stat.py'
Nov 24 19:57:07 compute-0 sudo[125386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:08 compute-0 python3.9[125388]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:08 compute-0 sudo[125386]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:08 compute-0 gallant_pare[125205]: {
Nov 24 19:57:08 compute-0 gallant_pare[125205]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "osd_id": 2,
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "type": "bluestore"
Nov 24 19:57:08 compute-0 gallant_pare[125205]:     },
Nov 24 19:57:08 compute-0 gallant_pare[125205]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "osd_id": 1,
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "type": "bluestore"
Nov 24 19:57:08 compute-0 gallant_pare[125205]:     },
Nov 24 19:57:08 compute-0 gallant_pare[125205]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "osd_id": 0,
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:57:08 compute-0 gallant_pare[125205]:         "type": "bluestore"
Nov 24 19:57:08 compute-0 gallant_pare[125205]:     }
Nov 24 19:57:08 compute-0 gallant_pare[125205]: }
Nov 24 19:57:08 compute-0 systemd[1]: libpod-57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185.scope: Deactivated successfully.
Nov 24 19:57:08 compute-0 systemd[1]: libpod-57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185.scope: Consumed 1.140s CPU time.
Nov 24 19:57:08 compute-0 podman[125184]: 2025-11-24 19:57:08.160631901 +0000 UTC m=+1.332574090 container died 57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a11740129b1c0efd777abe29f51bc1b217356c867e9966a2e15497bb73b9aa7-merged.mount: Deactivated successfully.
Nov 24 19:57:08 compute-0 podman[125184]: 2025-11-24 19:57:08.234414162 +0000 UTC m=+1.406356341 container remove 57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 19:57:08 compute-0 systemd[1]: libpod-conmon-57f2fa145865db12863590f7e70383de110008f78c0d9c8f64abce069bbb9185.scope: Deactivated successfully.
Nov 24 19:57:08 compute-0 sudo[124951]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:57:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:57:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:57:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:57:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ff5a09a2-9212-451d-a692-674f7cd6436d does not exist
Nov 24 19:57:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fc1acba6-1b4b-4f97-9cde-059699f30c5a does not exist
Nov 24 19:57:08 compute-0 sudo[125518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oghwdtkarzvaqopmpcgmbdymbjeltpfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014227.1430457-41-44202458925361/AnsiballZ_file.py'
Nov 24 19:57:08 compute-0 sudo[125518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:08 compute-0 sudo[125482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:57:08 compute-0 sudo[125482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:08 compute-0 sudo[125482]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:08 compute-0 sudo[125526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:57:08 compute-0 sudo[125526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:57:08 compute-0 sudo[125526]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:08.479+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:08.486+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:08 compute-0 python3.9[125524]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.sxvum9nv recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:08 compute-0 sudo[125518]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:57:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:57:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:09 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:09 compute-0 ceph-mon[75677]: pgmap v404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:09 compute-0 sudo[125700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clxmslulkykkiswapwddkukbsxursssu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014228.984378-61-229516821237685/AnsiballZ_stat.py'
Nov 24 19:57:09 compute-0 sudo[125700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:09.430+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:09.480+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:09 compute-0 python3.9[125702]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:09 compute-0 sudo[125700]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:09 compute-0 sudo[125778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aarklpyjnridffnmzkvgqnvxirvsqrle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014228.984378-61-229516821237685/AnsiballZ_file.py'
Nov 24 19:57:09 compute-0 sudo[125778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:10 compute-0 python3.9[125780]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ecuqpzf9 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:10 compute-0 sudo[125778]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:10 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:10.473+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:10.482+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:10 compute-0 sudo[125930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmzfzdcxhhztjavifzqguwfsgtvqlydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014230.3342514-74-216017578967813/AnsiballZ_file.py'
Nov 24 19:57:10 compute-0 sudo[125930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:10 compute-0 python3.9[125932]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:57:10 compute-0 sudo[125930]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:11 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:11 compute-0 ceph-mon[75677]: pgmap v405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:11.461+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:11 compute-0 sudo[126082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuwlcsztindqtdulsqsfigzdxrizaanm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014231.1524947-82-82828836718122/AnsiballZ_stat.py'
Nov 24 19:57:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:11.494+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:11 compute-0 sudo[126082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:11 compute-0 python3.9[126084]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:11 compute-0 sudo[126082]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:12 compute-0 sudo[126160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dugtqkmjgrtyllnfxiyiukjmgvkxymny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014231.1524947-82-82828836718122/AnsiballZ_file.py'
Nov 24 19:57:12 compute-0 sudo[126160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:12 compute-0 python3.9[126162]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:57:12 compute-0 sudo[126160]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:12 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:12.465+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:12.509+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:12 compute-0 sudo[126312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceeldozsryvmttkzynhecwxwjqtwbhvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014232.4425004-82-67098966980128/AnsiballZ_stat.py'
Nov 24 19:57:12 compute-0 sudo[126312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:13 compute-0 python3.9[126314]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:13 compute-0 sudo[126312]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:13 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:13 compute-0 ceph-mon[75677]: pgmap v406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:13 compute-0 sudo[126390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuufnwczyuqaayaydmuzlunaorgrkdvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014232.4425004-82-67098966980128/AnsiballZ_file.py'
Nov 24 19:57:13 compute-0 sudo[126390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:13.482+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:13.490+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:13 compute-0 python3.9[126392]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:57:13 compute-0 sudo[126390]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:14 compute-0 sudo[126542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juwcjzjptgebfeugtxrqwbbcpeyxeiig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014233.7923632-105-197216815352234/AnsiballZ_file.py'
Nov 24 19:57:14 compute-0 sudo[126542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:14 compute-0 python3.9[126544]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:14 compute-0 sudo[126542]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:14.490+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:14.503+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:15 compute-0 sudo[126694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubomclnysezdwottmdngypzntwppfepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014234.6108568-113-93181008302622/AnsiballZ_stat.py'
Nov 24 19:57:15 compute-0 sudo[126694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:15 compute-0 python3.9[126696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:15 compute-0 sudo[126694]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:15 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:15 compute-0 ceph-mon[75677]: pgmap v407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:15 compute-0 sudo[126772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msnzoprdqqihrbyabvlhslmajxwhdzbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014234.6108568-113-93181008302622/AnsiballZ_file.py'
Nov 24 19:57:15 compute-0 sudo[126772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:15.511+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:15.527+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:15 compute-0 python3.9[126774]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:15 compute-0 sudo[126772]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:16 compute-0 sudo[126924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhycphpkyavlvtvvjazbpvwstlsapvqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014235.8837283-125-32575228343170/AnsiballZ_stat.py'
Nov 24 19:57:16 compute-0 sudo[126924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:16 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:16 compute-0 python3.9[126926]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:16.474+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 351 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:16.524+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:16 compute-0 sudo[126924]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:16 compute-0 sudo[127002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskaxfcpksghsbtslptiygdnxqrjkpfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014235.8837283-125-32575228343170/AnsiballZ_file.py'
Nov 24 19:57:16 compute-0 sudo[127002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:17 compute-0 python3.9[127004]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:17 compute-0 sudo[127002]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:17 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 351 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:17 compute-0 ceph-mon[75677]: pgmap v408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:17.470+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:17.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:17 compute-0 sudo[127154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexywnhvsgzhanmihuomixnxnjzrkmpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014237.2820613-137-175198679374139/AnsiballZ_systemd.py'
Nov 24 19:57:17 compute-0 sudo[127154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:18 compute-0 python3.9[127156]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:57:18 compute-0 systemd[1]: Reloading.
Nov 24 19:57:18 compute-0 systemd-sysv-generator[127187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:57:18 compute-0 systemd-rc-local-generator[127183]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:57:18 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:18.482+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:18.546+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:18 compute-0 sudo[127154]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:19 compute-0 ceph-mon[75677]: pgmap v409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:19 compute-0 sudo[127343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeaykfldvnnlfzbzbgaihystgmemsfcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014239.023744-145-231415956658613/AnsiballZ_stat.py'
Nov 24 19:57:19 compute-0 sudo[127343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:19.506+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:19.537+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:19 compute-0 python3.9[127345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:19 compute-0 sudo[127343]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:20 compute-0 sudo[127421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifyetxhjmeoorywklowkqwkfmrxfmzvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014239.023744-145-231415956658613/AnsiballZ_file.py'
Nov 24 19:57:20 compute-0 sudo[127421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:20 compute-0 python3.9[127423]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:20 compute-0 sudo[127421]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:20 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:20.468+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:20.524+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:20 compute-0 sudo[127573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jljztmriypuscjxfwyvxwrgeowthispv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014240.4943635-157-156437921078426/AnsiballZ_stat.py'
Nov 24 19:57:20 compute-0 sudo[127573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:21 compute-0 python3.9[127575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:21 compute-0 sudo[127573]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:21 compute-0 sudo[127651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtxwhqkxkmigaiunqzbvsdfdsdgimpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014240.4943635-157-156437921078426/AnsiballZ_file.py'
Nov 24 19:57:21 compute-0 sudo[127651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:21 compute-0 ceph-mon[75677]: pgmap v410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:21.473+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:21.478+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 361 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:21 compute-0 python3.9[127653]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:21 compute-0 sudo[127651]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:22 compute-0 sudo[127803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zklhqznlnxrcedvbunovwlsulrnvsjfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014241.742756-169-267944502826289/AnsiballZ_systemd.py'
Nov 24 19:57:22 compute-0 sudo[127803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:22 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 361 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:22.435+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:22 compute-0 python3.9[127805]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 19:57:22 compute-0 systemd[1]: Reloading.
Nov 24 19:57:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:22.523+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:22 compute-0 systemd-rc-local-generator[127832]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 19:57:22 compute-0 systemd-sysv-generator[127836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 19:57:22 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 19:57:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 19:57:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 19:57:22 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 19:57:23 compute-0 sudo[127803]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:23.426+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:23 compute-0 ceph-mon[75677]: pgmap v411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:23.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:24 compute-0 python3.9[127996]: ansible-ansible.builtin.service_facts Invoked
Nov 24 19:57:24 compute-0 network[128013]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 19:57:24 compute-0 network[128014]: 'network-scripts' will be removed from distribution in near future.
Nov 24 19:57:24 compute-0 network[128015]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:57:24
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images']
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:57:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:24.408+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:24 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:24.527+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:25.397+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:25 compute-0 ceph-mon[75677]: pgmap v412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:25.529+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:26.396+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:26 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:26.524+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:27.408+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 366 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:27 compute-0 ceph-mon[75677]: pgmap v413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:27.510+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:28.387+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:28.467+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:28 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 366 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:29 compute-0 sudo[128275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtqixmumcehghknpwbzmcwapusihyhag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014248.6777353-195-109402295372336/AnsiballZ_stat.py'
Nov 24 19:57:29 compute-0 sudo[128275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:29 compute-0 python3.9[128277]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:29.364+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:29 compute-0 sudo[128275]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:29 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:29 compute-0 ceph-mon[75677]: pgmap v414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:29.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:29 compute-0 sudo[128353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekdjxjxhgvqmnmxxttfbvdarprwwpdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014248.6777353-195-109402295372336/AnsiballZ_file.py'
Nov 24 19:57:29 compute-0 sudo[128353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:29 compute-0 python3.9[128355]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:29 compute-0 sudo[128353]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:30.332+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:30.487+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:30 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:30 compute-0 sudo[128505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdbjeohpzondjcrumtjqhwrautqgkll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014250.1721318-208-48830672609459/AnsiballZ_file.py'
Nov 24 19:57:30 compute-0 sudo[128505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:30 compute-0 python3.9[128507]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:30 compute-0 sudo[128505]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:31.335+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:31 compute-0 sudo[128657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhqvrdpnrjdlscjblezdkquwlnryzsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014250.995648-216-91109092389121/AnsiballZ_stat.py'
Nov 24 19:57:31 compute-0 sudo[128657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:31 compute-0 ceph-mon[75677]: pgmap v415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:31.513+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:31 compute-0 python3.9[128659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:31 compute-0 sudo[128657]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:31 compute-0 sudo[128735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifdztpwookowfnltxehoiulakvzyngiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014250.995648-216-91109092389121/AnsiballZ_file.py'
Nov 24 19:57:31 compute-0 sudo[128735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:32 compute-0 python3.9[128737]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:32 compute-0 sudo[128735]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:32.320+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:32.510+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:32 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:33 compute-0 sudo[128887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmtbqipcychcdditethofodxulacyfcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014252.54981-231-257308779062901/AnsiballZ_timezone.py'
Nov 24 19:57:33 compute-0 sudo[128887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:33 compute-0 python3.9[128889]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 19:57:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:33.331+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:33 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 19:57:33 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 19:57:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:33.503+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:33 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:33 compute-0 ceph-mon[75677]: pgmap v416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:33 compute-0 sudo[128887]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:34 compute-0 sudo[129043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyqxfmuzguahxerhejfvvptksezvoyze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014253.8472393-240-871482350135/AnsiballZ_file.py'
Nov 24 19:57:34 compute-0 sudo[129043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:57:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:34.317+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:34 compute-0 python3.9[129045]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:34 compute-0 sudo[129043]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:34.459+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:35 compute-0 sudo[129195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgdcwqyhgrycpqupvzsddhjhgkwkqmwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014254.6817794-248-133603483178263/AnsiballZ_stat.py'
Nov 24 19:57:35 compute-0 sudo[129195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:35.290+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:35 compute-0 python3.9[129197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:35 compute-0 sudo[129195]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:35.462+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:35 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:35 compute-0 ceph-mon[75677]: pgmap v417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:35 compute-0 sudo[129273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrzuvyyfncjmdxmeipubdmdeqfleasu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014254.6817794-248-133603483178263/AnsiballZ_file.py'
Nov 24 19:57:35 compute-0 sudo[129273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:35 compute-0 python3.9[129275]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:35 compute-0 sudo[129273]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:36.258+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:36.460+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 371 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:36 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:36 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 371 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:36 compute-0 sudo[129425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynvjxducilmlkrcdzbcahzsehwvzmtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014256.1950533-260-121247939033599/AnsiballZ_stat.py'
Nov 24 19:57:36 compute-0 sudo[129425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:36 compute-0 python3.9[129427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:36 compute-0 sudo[129425]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:37 compute-0 sudo[129503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuubhexadmkndvhcgcaaxyubnfppjqmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014256.1950533-260-121247939033599/AnsiballZ_file.py'
Nov 24 19:57:37 compute-0 sudo[129503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:37.211+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:37 compute-0 python3.9[129505]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.o8yzhg54 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:37.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:37 compute-0 sudo[129503]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:37 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:37 compute-0 ceph-mon[75677]: pgmap v418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:38 compute-0 sudo[129655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udcoxrnzlfuyxhmmqapxdvexfmiqhnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014257.6696873-272-263420093961877/AnsiballZ_stat.py'
Nov 24 19:57:38 compute-0 sudo[129655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:38.225+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:38 compute-0 python3.9[129657]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:38 compute-0 sudo[129655]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:38.421+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:38 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:38 compute-0 sudo[129733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gddewxtklulenwccaabhmyeebpsutgln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014257.6696873-272-263420093961877/AnsiballZ_file.py'
Nov 24 19:57:38 compute-0 sudo[129733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:38 compute-0 python3.9[129735]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:38 compute-0 sudo[129733]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:39.263+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:39.409+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:39 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:39 compute-0 ceph-mon[75677]: pgmap v419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:39 compute-0 sudo[129885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtmkbulxleneyvmsplzjihdwjzgzoffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014259.1353333-285-202065865852696/AnsiballZ_command.py'
Nov 24 19:57:39 compute-0 sudo[129885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:39 compute-0 python3.9[129887]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:57:39 compute-0 sudo[129885]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:40.247+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:57:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:40.388+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:40 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:40 compute-0 sudo[130038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxepjbstqtzobprznnmvkbvdvrfovmm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014260.1585476-293-182629317841392/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 19:57:40 compute-0 sudo[130038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:40 compute-0 python3[130040]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 19:57:40 compute-0 sudo[130038]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:41.289+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:41.396+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 381 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:41 compute-0 sudo[130190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpsnrrgwjgercwsziajqlqoviyncplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014261.2038012-301-22566572686016/AnsiballZ_stat.py'
Nov 24 19:57:41 compute-0 sudo[130190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:41 compute-0 ceph-mon[75677]: pgmap v420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:41 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 381 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
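[annotation] The monitor keeps re-raising SLOW_OPS for osd.0 and osd.1 (19 ops, oldest blocked ~381 s at this point) while the cluster otherwise reports active+clean. A triage sketch using standard Ceph CLI calls, run on the node hosting the OSD admin sockets; daemon names are taken from the health message and this is illustrative, not part of the logged playbook:
    # Which daemons hold slow ops, and what exactly is blocked on each OSD.
    ceph health detail
    ceph daemon osd.0 dump_ops_in_flight
    ceph daemon osd.1 dump_ops_in_flight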
Nov 24 19:57:41 compute-0 python3.9[130192]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:41 compute-0 sudo[130190]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:42 compute-0 sudo[130268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ateutbtjrgnlmzsuwhrjdveurhwfnyun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014261.2038012-301-22566572686016/AnsiballZ_file.py'
Nov 24 19:57:42 compute-0 sudo[130268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:42.278+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:42 compute-0 python3.9[130270]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:42 compute-0 sudo[130268]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:42.436+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:43 compute-0 sudo[130420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwvassuwzvfqkktappoxbjiyawmfoudd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014262.6432612-313-253250449777608/AnsiballZ_stat.py'
Nov 24 19:57:43 compute-0 sudo[130420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:43 compute-0 python3.9[130422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:43.274+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:43 compute-0 sudo[130420]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:43.458+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:43 compute-0 ceph-mon[75677]: pgmap v421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:43 compute-0 sudo[130498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvmcyzamybhmioyrjtljnnwqqloqikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014262.6432612-313-253250449777608/AnsiballZ_file.py'
Nov 24 19:57:43 compute-0 sudo[130498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:43 compute-0 python3.9[130500]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:43 compute-0 sudo[130498]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:44.245+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:44 compute-0 sudo[130650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odrxcueuvumttrsbzwhtsokzkiirphrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014264.091942-325-171623653455028/AnsiballZ_stat.py'
Nov 24 19:57:44 compute-0 sudo[130650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:44.509+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:44 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:44 compute-0 python3.9[130652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:44 compute-0 sudo[130650]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:45 compute-0 sudo[130728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmfpxbqzpknbxgnfmtzwebmgcyolbzxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014264.091942-325-171623653455028/AnsiballZ_file.py'
Nov 24 19:57:45 compute-0 sudo[130728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:45.286+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:45 compute-0 python3.9[130730]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:45 compute-0 sudo[130728]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:45.542+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:45 compute-0 ceph-mon[75677]: pgmap v422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:45 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:45 compute-0 sudo[130880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxcjdozyqvjbeocjwjkzxvmqlolpwjvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014265.5515249-337-137303944032477/AnsiballZ_stat.py'
Nov 24 19:57:45 compute-0 sudo[130880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:46 compute-0 python3.9[130882]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:46 compute-0 sudo[130880]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:46.293+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:46.504+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:46 compute-0 sudo[130958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzkzefcriglzkvvyjvlkkoqzkjbktjcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014265.5515249-337-137303944032477/AnsiballZ_file.py'
Nov 24 19:57:46 compute-0 sudo[130958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 386 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:46 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:46 compute-0 ceph-mon[75677]: pgmap v423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:46 compute-0 python3.9[130960]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:46 compute-0 sudo[130958]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:47.254+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:47 compute-0 sudo[131110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxsxpgtnvduwnlfabmclmqhfqyyhuezf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014266.9518688-349-156788305105966/AnsiballZ_stat.py'
Nov 24 19:57:47 compute-0 sudo[131110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:47.478+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:47 compute-0 python3.9[131112]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:57:47 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 386 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:47 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:47 compute-0 sudo[131110]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:48 compute-0 sudo[131188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cijfnrvtbeavbjgfwjggzchabxicshzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014266.9518688-349-156788305105966/AnsiballZ_file.py'
Nov 24 19:57:48 compute-0 sudo[131188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:48 compute-0 python3.9[131190]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:48 compute-0 sudo[131188]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:48.288+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:48.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:48 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:48 compute-0 ceph-mon[75677]: pgmap v424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:48 compute-0 sudo[131340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbfjloyhvkxyfcwrclfxzplnzyyrgicp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014268.547177-362-135945847382366/AnsiballZ_command.py'
Nov 24 19:57:48 compute-0 sudo[131340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:49 compute-0 python3.9[131342]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
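[annotation] This task dry-runs the assembled EDPM ruleset before anything is loaded: the five fragments are concatenated in load order and fed to nft in check mode. The equivalent one-liner, exactly as logged:
    # -c (--check) parses and validates the ruleset without applying it;
    # a non-zero exit fails the Ansible task before the kernel is touched.
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -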
Nov 24 19:57:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:49.242+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:49 compute-0 sudo[131340]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:49.513+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:49 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 19:57:49 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:50 compute-0 sudo[131496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhagigpvixquplgtfyxclppdvjgcmkgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014269.4758978-370-504012430669/AnsiballZ_blockinfile.py'
Nov 24 19:57:50 compute-0 sudo[131496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:50.210+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:50 compute-0 python3.9[131498]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
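[annotation] Pieced together from the logged blockinfile parameters (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END, plus the four include lines in block=), the persisted block in /etc/sysconfig/nftables.conf should look like the sketch below; note blockinfile also validates the result with "nft -c -f %s" before moving it into place, and that only the chains/rules/jumps fragments are persisted, not the flush or update-jumps files. A reconstruction, not a verbatim copy of the file:
    cat >> /etc/sysconfig/nftables.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
    EOF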
Nov 24 19:57:50 compute-0 sudo[131496]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:50.512+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:50 compute-0 ceph-mon[75677]: pgmap v425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:50 compute-0 sudo[131648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvpkrfqluokewtiuoxletjyrcreatle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014270.5304685-379-248882528356030/AnsiballZ_file.py'
Nov 24 19:57:50 compute-0 sudo[131648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:51 compute-0 python3.9[131650]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:51 compute-0 sudo[131648]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:51.227+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:51.502+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:51 compute-0 sudo[131800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wccpxbwflevelhvdifwczzswfpbjtykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014271.2729404-379-81758832054382/AnsiballZ_file.py'
Nov 24 19:57:51 compute-0 sudo[131800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:51 compute-0 python3.9[131802]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:57:51 compute-0 sudo[131800]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:52.241+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:52.526+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:52 compute-0 sudo[131952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtknkkgdxmkvmgavweaqtbxshvxkhccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014272.113045-394-89237067918943/AnsiballZ_mount.py'
Nov 24 19:57:52 compute-0 sudo[131952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:52 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:52 compute-0 ceph-mon[75677]: pgmap v426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:52 compute-0 python3.9[131954]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 19:57:52 compute-0 sudo[131952]: pam_unix(sudo:session): session closed for user root
Nov 24 19:57:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:53.206+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:53 compute-0 sudo[132104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-filmoicqcyytlrvmdprqpiwjdzwziaql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014273.1044886-394-36868251679896/AnsiballZ_mount.py'
Nov 24 19:57:53 compute-0 sudo[132104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:57:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:53.551+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:53 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:53 compute-0 python3.9[132106]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 19:57:53 compute-0 sudo[132104]: pam_unix(sudo:session): session closed for user root
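[annotation] The last two mount tasks persist and mount hugetlbfs at the directories created earlier (ansible.posix.mount with state=mounted and boot=True both writes an fstab entry and mounts immediately). A hand-run equivalent, with options taken straight from the logged tasks and assuming /dev/hugepages1G and /dev/hugepages2M already exist:
    # Mount both hugepage sizes the way the tasks do.
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # Roughly the /etc/fstab lines state=mounted would persist:
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0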
Nov 24 19:57:54 compute-0 sshd-session[124656]: Connection closed by 192.168.122.30 port 42228
Nov 24 19:57:54 compute-0 sshd-session[124637]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:57:54 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 24 19:57:54 compute-0 systemd[1]: session-40.scope: Consumed 39.558s CPU time.
Nov 24 19:57:54 compute-0 systemd-logind[795]: Session 40 logged out. Waiting for processes to exit.
Nov 24 19:57:54 compute-0 systemd-logind[795]: Removed session 40.
Nov 24 19:57:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:54.257+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:57:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:54.528+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:54 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:54 compute-0 ceph-mon[75677]: pgmap v427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:55.275+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:55.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:55 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:56.254+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 391 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:57:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:56.566+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:56 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:56 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 391 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:57:56 compute-0 ceph-mon[75677]: pgmap v428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:57.265+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:57.591+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:57 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:58.306+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:58.557+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:58 compute-0 ceph-mon[75677]: pgmap v429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:57:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:57:59.340+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:57:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:57:59.578+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:57:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:59 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:57:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:57:59 compute-0 sshd-session[132131]: Accepted publickey for zuul from 192.168.122.30 port 53988 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:57:59 compute-0 systemd-logind[795]: New session 41 of user zuul.
Nov 24 19:57:59 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 24 19:57:59 compute-0 sshd-session[132131]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:58:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:00.329+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:00.600+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:00 compute-0 sudo[132284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kycavpsjuftkespvzmvphsnhfvpwufho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014280.0647721-16-87094318697372/AnsiballZ_tempfile.py'
Nov 24 19:58:00 compute-0 sudo[132284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:00 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:00 compute-0 ceph-mon[75677]: pgmap v430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:00 compute-0 python3.9[132286]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 19:58:00 compute-0 sudo[132284]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:01.358+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 401 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:01.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:01 compute-0 sudo[132436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmmzhyrmuwjfmnjtpeckgmeuyttyojfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014281.087655-28-68264198028475/AnsiballZ_stat.py'
Nov 24 19:58:01 compute-0 sudo[132436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:01 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 401 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:01 compute-0 python3.9[132438]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:58:01 compute-0 sudo[132436]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:02.399+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:02.560+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:02 compute-0 sudo[132590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddgopggtthmgtrkzyqhlypvhyoouhsyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014282.0955815-36-185010057622797/AnsiballZ_slurp.py'
Nov 24 19:58:02 compute-0 sudo[132590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:02 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:02 compute-0 ceph-mon[75677]: pgmap v431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:02 compute-0 python3.9[132592]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 24 19:58:02 compute-0 sudo[132590]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:03.447+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:03 compute-0 sudo[132742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhflnoqqncbbmdpbrozpdheghajgbgdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014283.1328297-44-144307416200006/AnsiballZ_stat.py'
Nov 24 19:58:03 compute-0 sudo[132742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:03.554+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:03 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 19:58:03 compute-0 python3.9[132744]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.amu5gl44 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:03 compute-0 sudo[132742]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:04 compute-0 sudo[132869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogeupsbagkupcupakxzltbyaoqxpofd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014283.1328297-44-144307416200006/AnsiballZ_copy.py'
Nov 24 19:58:04 compute-0 sudo[132869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:04.429+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:04.551+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:04 compute-0 python3.9[132871]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.amu5gl44 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014283.1328297-44-144307416200006/.source.amu5gl44 _original_basename=.xh_tn6ba follow=False checksum=d6b696a3dc61361640405f48eef221e19865997a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:04 compute-0 sudo[132869]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:04 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:04 compute-0 ceph-mon[75677]: pgmap v432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:05.399+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:05 compute-0 sudo[133021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grdcjabwooxgotaweuabzjpyzqglnbcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014284.8348463-59-32005603355353/AnsiballZ_setup.py'
Nov 24 19:58:05 compute-0 sudo[133021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:05.530+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:05 compute-0 python3.9[133023]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:58:05 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:05 compute-0 sudo[133021]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:06.357+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:06.523+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:06 compute-0 sudo[133173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nypktktvrwapmecnrvrsqudyxagkoopj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014286.132533-68-68360070744404/AnsiballZ_blockinfile.py'
Nov 24 19:58:06 compute-0 sudo[133173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 406 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:06 compute-0 python3.9[133175]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDz08cIhIJvEgXDwwGqJcUcccV13vZKm79Alj6fJP3mPS8+SwiNI2qAVhh0gh5ljYJD+o/0TOs+oZqmsC5hBhAO2ePN3HXhd28IAsAKLACa/ITk0kE++96j+0UiC4lw+9hb+48H8lKqPpNrF4uYg1DJ28srFtzLeR0FNjuaAz5045n1dGd+mMz75P/cAKwMKTlAklCc8V/Kug6mBm12mItgO4kd9XjLa6tSbZ5n9KuTW094j2RJFwUCXAoVEDXBI7CUAUMuKR8M3TriPeAeRsm38Do1qBf66tdb+5RzcVeOpDvLPe6oe6ys1AbYx1xOxF33s+YojUw3r94r7LUGviON0qiGkWmLBXAzWeE/KL/QI+tx7hSicZ1AnRFsCo4GAyLRAeyYhcStsMfKyEZkGLIqRoUaCvjUyOnIk8B1lLcUWnw11MeV2gBW9oSLHHN9vSQKePdKKvWvKyoHNrBECkye93MoYc8g9QPF+9a+gChshN/8DHBpQFG1PXhb3KMYM38=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRjViR+rENMWsp0rfw0jkB6UrpO4igMTnHnreNvRXh6
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGmuOeSvYdXKNZKhBs8YqKEpqCpD8Nk8aZY8F++/S1nbmdyIEMuIhp/lyVvyV1J7c6T45oEtqKedTy9KkwaDKNA=
                                              create=True mode=0644 path=/tmp/ansible.amu5gl44 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:06 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:06 compute-0 ceph-mon[75677]: pgmap v433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:06 compute-0 sudo[133173]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:07.403+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:07.550+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:07 compute-0 sudo[133325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swgnbffhqegfcdzsmrijzyptugofdzsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014287.1466343-76-97560300961756/AnsiballZ_command.py'
Nov 24 19:58:07 compute-0 sudo[133325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:07 compute-0 python3.9[133327]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.amu5gl44' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:58:07 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 406 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:07 compute-0 sudo[133325]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:08.392+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:08.517+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:08 compute-0 sudo[133415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:08 compute-0 sudo[133415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:08 compute-0 sudo[133415]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:08 compute-0 sudo[133468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:58:08 compute-0 sudo[133468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:08 compute-0 sudo[133468]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:08 compute-0 sudo[133536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yklcxvlalxsejepexoiutziqvupisblm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014288.1900086-84-69266461089304/AnsiballZ_file.py'
Nov 24 19:58:08 compute-0 sudo[133536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:08 compute-0 sudo[133523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:08 compute-0 sudo[133523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:08 compute-0 sudo[133523]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:08 compute-0 sudo[133557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 19:58:08 compute-0 sudo[133557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:08 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:08 compute-0 ceph-mon[75677]: pgmap v434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:08 compute-0 python3.9[133554]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.amu5gl44 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:08 compute-0 sudo[133536]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:09 compute-0 sudo[133557]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:58:09 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:58:09 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:09 compute-0 sudo[133627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:09 compute-0 sudo[133627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:09 compute-0 sudo[133627]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:09 compute-0 sshd-session[132134]: Connection closed by 192.168.122.30 port 53988
Nov 24 19:58:09 compute-0 sshd-session[132131]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:58:09 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 24 19:58:09 compute-0 systemd[1]: session-41.scope: Consumed 6.799s CPU time.
Nov 24 19:58:09 compute-0 systemd-logind[795]: Session 41 logged out. Waiting for processes to exit.
Nov 24 19:58:09 compute-0 systemd-logind[795]: Removed session 41.
Nov 24 19:58:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:09.398+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:09 compute-0 sudo[133652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:58:09 compute-0 sudo[133652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:09 compute-0 sudo[133652]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:09 compute-0 sudo[133677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:09 compute-0 sudo[133677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:09 compute-0 sudo[133677]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:09.542+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:09 compute-0 sudo[133702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:58:09 compute-0 sudo[133702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:10 compute-0 sudo[133702]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:10 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:58:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:58:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:58:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:58:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:58:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:10 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4c4fa06a-b7d0-4492-bc28-bed464477f9a does not exist
Nov 24 19:58:10 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f829881d-a3dc-44ba-a05e-a16fc52ded44 does not exist
Nov 24 19:58:10 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5fc55fe5-f34b-4bb8-b867-63fec75ec191 does not exist
Nov 24 19:58:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:58:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:58:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:58:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:58:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:58:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:58:10 compute-0 sudo[133758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:10 compute-0 sudo[133758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:10 compute-0 sudo[133758]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:10.398+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:10 compute-0 sudo[133783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:58:10 compute-0 sudo[133783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:10 compute-0 sudo[133783]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:10.534+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:10 compute-0 sudo[133808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:10 compute-0 sudo[133808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:10 compute-0 sudo[133808]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:10 compute-0 sudo[133833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:58:10 compute-0 sudo[133833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:11 compute-0 podman[133898]: 2025-11-24 19:58:11.174820986 +0000 UTC m=+0.075778190 container create aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bouman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 19:58:11 compute-0 systemd[1]: Started libpod-conmon-aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776.scope.
Nov 24 19:58:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:58:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:58:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:58:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:58:11 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:58:11 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:11 compute-0 ceph-mon[75677]: pgmap v435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:11 compute-0 podman[133898]: 2025-11-24 19:58:11.145390427 +0000 UTC m=+0.046347691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:58:11 compute-0 podman[133898]: 2025-11-24 19:58:11.290291223 +0000 UTC m=+0.191248477 container init aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 19:58:11 compute-0 podman[133898]: 2025-11-24 19:58:11.30231539 +0000 UTC m=+0.203272594 container start aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 19:58:11 compute-0 podman[133898]: 2025-11-24 19:58:11.306540204 +0000 UTC m=+0.207497418 container attach aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:58:11 compute-0 adoring_bouman[133913]: 167 167
Nov 24 19:58:11 compute-0 systemd[1]: libpod-aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776.scope: Deactivated successfully.
Nov 24 19:58:11 compute-0 podman[133918]: 2025-11-24 19:58:11.377446912 +0000 UTC m=+0.045685853 container died aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 19:58:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:11.394+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b536cced1003fe9ca5629f9d2ae174e1a0c30133cc518cfc670d5b5cb7ecb4-merged.mount: Deactivated successfully.
Nov 24 19:58:11 compute-0 podman[133918]: 2025-11-24 19:58:11.434364648 +0000 UTC m=+0.102603589 container remove aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:58:11 compute-0 systemd[1]: libpod-conmon-aae7af198366bc2739ebc3c2f39f9dc44b6a97cd0822b993dc49d52e60e6c776.scope: Deactivated successfully.
Nov 24 19:58:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:11.560+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:11 compute-0 podman[133941]: 2025-11-24 19:58:11.713331837 +0000 UTC m=+0.079613814 container create 18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_turing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:58:11 compute-0 systemd[1]: Started libpod-conmon-18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209.scope.
Nov 24 19:58:11 compute-0 podman[133941]: 2025-11-24 19:58:11.6825385 +0000 UTC m=+0.048820487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ed76b573604c78f62854b78665cdb1c759b6e4f90af93a448558a4d313f802/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ed76b573604c78f62854b78665cdb1c759b6e4f90af93a448558a4d313f802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ed76b573604c78f62854b78665cdb1c759b6e4f90af93a448558a4d313f802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ed76b573604c78f62854b78665cdb1c759b6e4f90af93a448558a4d313f802/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ed76b573604c78f62854b78665cdb1c759b6e4f90af93a448558a4d313f802/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:11 compute-0 podman[133941]: 2025-11-24 19:58:11.830343116 +0000 UTC m=+0.196625093 container init 18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:58:11 compute-0 podman[133941]: 2025-11-24 19:58:11.850198935 +0000 UTC m=+0.216480912 container start 18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 19:58:11 compute-0 podman[133941]: 2025-11-24 19:58:11.854500733 +0000 UTC m=+0.220782770 container attach 18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 19:58:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:12.410+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:12 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:12.573+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:13 compute-0 tender_turing[133957]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:58:13 compute-0 tender_turing[133957]: --> relative data size: 1.0
Nov 24 19:58:13 compute-0 tender_turing[133957]: --> All data devices are unavailable
Nov 24 19:58:13 compute-0 systemd[1]: libpod-18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209.scope: Deactivated successfully.
Nov 24 19:58:13 compute-0 podman[133941]: 2025-11-24 19:58:13.284319709 +0000 UTC m=+1.650601676 container died 18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 19:58:13 compute-0 systemd[1]: libpod-18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209.scope: Consumed 1.234s CPU time.
Nov 24 19:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ed76b573604c78f62854b78665cdb1c759b6e4f90af93a448558a4d313f802-merged.mount: Deactivated successfully.
Nov 24 19:58:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:13.427+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:13 compute-0 podman[133941]: 2025-11-24 19:58:13.529484741 +0000 UTC m=+1.895766708 container remove 18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 19:58:13 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:13 compute-0 ceph-mon[75677]: pgmap v436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:13 compute-0 systemd[1]: libpod-conmon-18ce3288b605e565bebefbe70186478abb88cf373caebe425362110a7c566209.scope: Deactivated successfully.
Nov 24 19:58:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:13.568+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:13 compute-0 sudo[133833]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:13 compute-0 sudo[134000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:13 compute-0 sudo[134000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:13 compute-0 sudo[134000]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:13 compute-0 sudo[134025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:58:13 compute-0 sudo[134025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:13 compute-0 sudo[134025]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:13 compute-0 sudo[134050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:13 compute-0 sudo[134050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:13 compute-0 sudo[134050]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:13 compute-0 sudo[134075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:58:13 compute-0 sudo[134075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.443411632 +0000 UTC m=+0.079259515 container create 50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:58:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:14.475+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:14 compute-0 systemd[1]: Started libpod-conmon-50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df.scope.
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.412785499 +0000 UTC m=+0.048633422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:58:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.544493168 +0000 UTC m=+0.180341101 container init 50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:58:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.55561567 +0000 UTC m=+0.191463553 container start 50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 19:58:14 compute-0 nervous_shirley[134155]: 167 167
Nov 24 19:58:14 compute-0 systemd[1]: libpod-50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df.scope: Deactivated successfully.
Nov 24 19:58:14 compute-0 conmon[134155]: conmon 50b74c36e5d6e9c4cde1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df.scope/container/memory.events
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.565717565 +0000 UTC m=+0.201565508 container attach 50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.567203675 +0000 UTC m=+0.203051558 container died 50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:58:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd25907fffaea50067d7f54dc5d3bde7c3d68eeb9a3058de2607df76b47a02c-merged.mount: Deactivated successfully.
Nov 24 19:58:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:14.619+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:14 compute-0 podman[134139]: 2025-11-24 19:58:14.63105258 +0000 UTC m=+0.266900453 container remove 50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:58:14 compute-0 systemd[1]: libpod-conmon-50b74c36e5d6e9c4cde1a95964e69766baf03448b6cc6acdd3389638da1888df.scope: Deactivated successfully.
Nov 24 19:58:14 compute-0 podman[134179]: 2025-11-24 19:58:14.899501713 +0000 UTC m=+0.081215027 container create 4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lalande, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:58:14 compute-0 podman[134179]: 2025-11-24 19:58:14.866791024 +0000 UTC m=+0.048504388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:58:14 compute-0 systemd[1]: Started libpod-conmon-4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a.scope.
Nov 24 19:58:14 compute-0 sshd-session[134192]: Accepted publickey for zuul from 192.168.122.30 port 59932 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:58:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3af9b94f544382f549cf19b53cf494ffa79dea11d0cc7fc1cb1e68ce497251b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3af9b94f544382f549cf19b53cf494ffa79dea11d0cc7fc1cb1e68ce497251b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3af9b94f544382f549cf19b53cf494ffa79dea11d0cc7fc1cb1e68ce497251b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3af9b94f544382f549cf19b53cf494ffa79dea11d0cc7fc1cb1e68ce497251b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:15 compute-0 systemd-logind[795]: New session 42 of user zuul.
Nov 24 19:58:15 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 24 19:58:15 compute-0 podman[134179]: 2025-11-24 19:58:15.02524998 +0000 UTC m=+0.206963334 container init 4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 19:58:15 compute-0 sshd-session[134192]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:58:15 compute-0 podman[134179]: 2025-11-24 19:58:15.040169385 +0000 UTC m=+0.221882689 container start 4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:58:15 compute-0 podman[134179]: 2025-11-24 19:58:15.04438588 +0000 UTC m=+0.226099184 container attach 4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:58:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:15.485+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:15 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:15 compute-0 ceph-mon[75677]: pgmap v437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:15.629+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:15 compute-0 naughty_lalande[134197]: {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:     "0": [
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:         {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "devices": [
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "/dev/loop3"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             ],
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_name": "ceph_lv0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_size": "21470642176",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "name": "ceph_lv0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "tags": {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cluster_name": "ceph",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.crush_device_class": "",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.encrypted": "0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osd_id": "0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.type": "block",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.vdo": "0"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             },
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "type": "block",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "vg_name": "ceph_vg0"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:         }
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:     ],
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:     "1": [
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:         {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "devices": [
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "/dev/loop4"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             ],
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_name": "ceph_lv1",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_size": "21470642176",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "name": "ceph_lv1",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "tags": {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cluster_name": "ceph",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.crush_device_class": "",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.encrypted": "0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osd_id": "1",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.type": "block",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.vdo": "0"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             },
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "type": "block",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "vg_name": "ceph_vg1"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:         }
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:     ],
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:     "2": [
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:         {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "devices": [
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "/dev/loop5"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             ],
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_name": "ceph_lv2",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_size": "21470642176",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "name": "ceph_lv2",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "tags": {
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.cluster_name": "ceph",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.crush_device_class": "",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.encrypted": "0",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osd_id": "2",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.type": "block",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:                 "ceph.vdo": "0"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             },
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "type": "block",
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:             "vg_name": "ceph_vg2"
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:         }
Nov 24 19:58:15 compute-0 naughty_lalande[134197]:     ]
Nov 24 19:58:15 compute-0 naughty_lalande[134197]: }
Nov 24 19:58:15 compute-0 systemd[1]: libpod-4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a.scope: Deactivated successfully.
Nov 24 19:58:15 compute-0 podman[134179]: 2025-11-24 19:58:15.972079305 +0000 UTC m=+1.153792619 container died 4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lalande, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 19:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3af9b94f544382f549cf19b53cf494ffa79dea11d0cc7fc1cb1e68ce497251b-merged.mount: Deactivated successfully.
Nov 24 19:58:16 compute-0 podman[134179]: 2025-11-24 19:58:16.056898419 +0000 UTC m=+1.238611733 container remove 4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:58:16 compute-0 systemd[1]: libpod-conmon-4f12a13d4f9fcd1a0baa1ba9cc2bdeccd70b5e58cbe54a8128920d0df9e6573a.scope: Deactivated successfully.
Nov 24 19:58:16 compute-0 sudo[134075]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:16 compute-0 sudo[134372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:16 compute-0 sudo[134372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:16 compute-0 sudo[134372]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:16 compute-0 sudo[134397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:58:16 compute-0 sudo[134397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:16 compute-0 sudo[134397]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:16 compute-0 sudo[134422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:16 compute-0 sudo[134422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:16 compute-0 sudo[134422]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:16 compute-0 python3.9[134371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:58:16 compute-0 sudo[134447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:58:16 compute-0 sudo[134447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 411 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:16.529+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.534373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014296534441, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2189, "num_deletes": 251, "total_data_size": 2667655, "memory_usage": 2713240, "flush_reason": "Manual Compaction"}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014296550389, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1686313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7946, "largest_seqno": 10134, "table_properties": {"data_size": 1678616, "index_size": 3747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25281, "raw_average_key_size": 22, "raw_value_size": 1659219, "raw_average_value_size": 1469, "num_data_blocks": 170, "num_entries": 1129, "num_filter_entries": 1129, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014146, "oldest_key_time": 1764014146, "file_creation_time": 1764014296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16056 microseconds, and 6760 cpu microseconds.
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.550429) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1686313 bytes OK
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.550453) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.551681) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.551697) EVENT_LOG_v1 {"time_micros": 1764014296551692, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.551718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2657577, prev total WAL file size 2657577, number of live WAL files 2.
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.552810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1646KB)], [20(8094KB)]
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014296552877, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9974914, "oldest_snapshot_seqno": -1}
Nov 24 19:58:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:16 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:16 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 411 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 4335 keys, 7968300 bytes, temperature: kUnknown
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014296613281, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7968300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7936126, "index_size": 20224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107154, "raw_average_key_size": 24, "raw_value_size": 7854317, "raw_average_value_size": 1811, "num_data_blocks": 880, "num_entries": 4335, "num_filter_entries": 4335, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.613713) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7968300 bytes
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.615473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.8 rd, 131.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(10.6) write-amplify(4.7) OK, records in: 4780, records dropped: 445 output_compression: NoCompression
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.615506) EVENT_LOG_v1 {"time_micros": 1764014296615489, "job": 6, "event": "compaction_finished", "compaction_time_micros": 60530, "compaction_time_cpu_micros": 25181, "output_level": 6, "num_output_files": 1, "total_output_size": 7968300, "num_input_records": 4780, "num_output_records": 4335, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014296616235, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014296618806, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.552730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.618853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.618860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.618863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.618866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:16.618869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:16.631+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:16 compute-0 podman[134552]: 2025-11-24 19:58:16.934135533 +0000 UTC m=+0.062954901 container create 0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:58:16 compute-0 systemd[1]: Started libpod-conmon-0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549.scope.
Nov 24 19:58:17 compute-0 podman[134552]: 2025-11-24 19:58:16.912599628 +0000 UTC m=+0.041419026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:58:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:58:17 compute-0 podman[134552]: 2025-11-24 19:58:17.053855056 +0000 UTC m=+0.182674464 container init 0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:58:17 compute-0 podman[134552]: 2025-11-24 19:58:17.064891046 +0000 UTC m=+0.193710444 container start 0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 19:58:17 compute-0 podman[134552]: 2025-11-24 19:58:17.069188072 +0000 UTC m=+0.198007490 container attach 0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 19:58:17 compute-0 jovial_ganguly[134609]: 167 167
Nov 24 19:58:17 compute-0 systemd[1]: libpod-0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549.scope: Deactivated successfully.
Nov 24 19:58:17 compute-0 podman[134552]: 2025-11-24 19:58:17.074960179 +0000 UTC m=+0.203779577 container died 0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab3c9270a1a6af8a79288fa34be8257e3cbadd91bdc125f83b832533da468526-merged.mount: Deactivated successfully.
Nov 24 19:58:17 compute-0 podman[134552]: 2025-11-24 19:58:17.135104524 +0000 UTC m=+0.263923882 container remove 0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:58:17 compute-0 systemd[1]: libpod-conmon-0c38a611aab4ec7f19a04bdcfcedbecdb84657500525662ecdd5d6bb4df9b549.scope: Deactivated successfully.
Nov 24 19:58:17 compute-0 podman[134645]: 2025-11-24 19:58:17.396208567 +0000 UTC m=+0.079972853 container create e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:58:17 compute-0 podman[134645]: 2025-11-24 19:58:17.362272785 +0000 UTC m=+0.046037141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:58:17 compute-0 systemd[1]: Started libpod-conmon-e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080.scope.
Nov 24 19:58:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1fa71737c2d3f6d591051ad43d3b6183e9dbe476b4bd69707f58560c13fbc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1fa71737c2d3f6d591051ad43d3b6183e9dbe476b4bd69707f58560c13fbc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1fa71737c2d3f6d591051ad43d3b6183e9dbe476b4bd69707f58560c13fbc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1fa71737c2d3f6d591051ad43d3b6183e9dbe476b4bd69707f58560c13fbc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:58:17 compute-0 podman[134645]: 2025-11-24 19:58:17.505013474 +0000 UTC m=+0.188777850 container init e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 19:58:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:17.506+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:17 compute-0 podman[134645]: 2025-11-24 19:58:17.519904978 +0000 UTC m=+0.203669264 container start e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:58:17 compute-0 podman[134645]: 2025-11-24 19:58:17.523705082 +0000 UTC m=+0.207469458 container attach e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 19:58:17 compute-0 sudo[134728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijriandwmnfhsvdnugfokexufogieeyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014296.8843992-32-167011308280981/AnsiballZ_systemd.py'
Nov 24 19:58:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:17 compute-0 sudo[134728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:17 compute-0 ceph-mon[75677]: pgmap v438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:17.613+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:17 compute-0 python3.9[134730]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 19:58:18 compute-0 sudo[134728]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:18.552+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:18 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:18.601+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]: {
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "osd_id": 2,
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "type": "bluestore"
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:     },
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "osd_id": 1,
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "type": "bluestore"
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:     },
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "osd_id": 0,
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:         "type": "bluestore"
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]:     }
Nov 24 19:58:18 compute-0 gifted_sinoussi[134697]: }
Nov 24 19:58:18 compute-0 systemd[1]: libpod-e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080.scope: Deactivated successfully.
Nov 24 19:58:18 compute-0 systemd[1]: libpod-e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080.scope: Consumed 1.195s CPU time.
Nov 24 19:58:18 compute-0 podman[134645]: 2025-11-24 19:58:18.711147903 +0000 UTC m=+1.394912229 container died e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 19:58:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d1fa71737c2d3f6d591051ad43d3b6183e9dbe476b4bd69707f58560c13fbc6-merged.mount: Deactivated successfully.
Nov 24 19:58:18 compute-0 podman[134645]: 2025-11-24 19:58:18.780248191 +0000 UTC m=+1.464012517 container remove e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 19:58:18 compute-0 systemd[1]: libpod-conmon-e38b98999915d2e7bd67399569aac9138356f05300f75f216cd3139f76d5f080.scope: Deactivated successfully.
Nov 24 19:58:18 compute-0 sudo[134922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxxpkkmcrhkrosqohoimqsissoplavgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014298.2706165-40-269872673307923/AnsiballZ_systemd.py'
Nov 24 19:58:18 compute-0 sudo[134922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:18 compute-0 sudo[134447]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:58:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:58:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 080dd85b-6d25-40bd-b08d-5e9d1ebf037a does not exist
Nov 24 19:58:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c8249749-09ec-4ea0-aea6-05364cc9d491 does not exist
Nov 24 19:58:18 compute-0 sudo[134925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:58:18 compute-0 sudo[134925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:18 compute-0 sudo[134925]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:19 compute-0 sudo[134950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:58:19 compute-0 sudo[134950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:58:19 compute-0 sudo[134950]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:19 compute-0 python3.9[134924]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 19:58:19 compute-0 sudo[134922]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:19.518+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:19.560+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:19 compute-0 ceph-mon[75677]: pgmap v439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:58:20 compute-0 sudo[135125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosxeqkxiaptfayvfkfayqfxxfhthrlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014299.4538214-49-215310375267808/AnsiballZ_command.py'
Nov 24 19:58:20 compute-0 sudo[135125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:20 compute-0 python3.9[135127]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:58:20 compute-0 sudo[135125]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:20.539+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:20.548+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:20 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:21 compute-0 sudo[135278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxoxpdsedhautodzlumxcckmnvrslzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014300.5451422-57-190631817237991/AnsiballZ_stat.py'
Nov 24 19:58:21 compute-0 sudo[135278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:21 compute-0 python3.9[135280]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:58:21 compute-0 sudo[135278]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:21.527+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 421 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:21.540+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:21 compute-0 ceph-mon[75677]: pgmap v440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:21 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 421 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:22 compute-0 sudo[135430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzyvedajlwzhkabhntfzjezuoorymmgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014301.607717-66-161333988595966/AnsiballZ_file.py'
Nov 24 19:58:22 compute-0 sudo[135430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:22 compute-0 python3.9[135432]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:22 compute-0 sudo[135430]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:22.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:22.558+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:22 compute-0 ceph-mon[75677]: pgmap v441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:22 compute-0 sshd-session[134202]: Connection closed by 192.168.122.30 port 59932
Nov 24 19:58:22 compute-0 sshd-session[134192]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:58:22 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 24 19:58:22 compute-0 systemd[1]: session-42.scope: Consumed 5.265s CPU time.
Nov 24 19:58:22 compute-0 systemd-logind[795]: Session 42 logged out. Waiting for processes to exit.
Nov 24 19:58:22 compute-0 systemd-logind[795]: Removed session 42.
Nov 24 19:58:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:58:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5550 writes, 23K keys, 5550 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5550 writes, 833 syncs, 6.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5550 writes, 23K keys, 5550 commit groups, 1.0 writes per commit group, ingest: 18.58 MB, 0.03 MB/s
                                           Interval WAL: 5550 writes, 833 syncs, 6.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 19:58:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:23.458+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:23.584+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:23 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:58:24
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:58:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:24.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:24.550+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:24 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:24 compute-0 ceph-mon[75677]: pgmap v442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:25.483+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:25.541+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:25 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:26.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:26.529+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 426 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:26 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:26 compute-0 ceph-mon[75677]: pgmap v443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:27.484+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:27.514+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:27 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 426 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:27 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:28 compute-0 sshd-session[135460]: Accepted publickey for zuul from 192.168.122.30 port 36128 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:58:28 compute-0 systemd-logind[795]: New session 43 of user zuul.
Nov 24 19:58:28 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 24 19:58:28 compute-0 sshd-session[135460]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:58:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:28.491+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:28.505+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:58:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6636 writes, 27K keys, 6636 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6636 writes, 1193 syncs, 5.56 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6636 writes, 27K keys, 6636 commit groups, 1.0 writes per commit group, ingest: 19.42 MB, 0.03 MB/s
                                           Interval WAL: 6636 writes, 1193 syncs, 5.56 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 19:58:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:28 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:28 compute-0 ceph-mon[75677]: pgmap v444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.862422) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014308862457, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 434, "num_deletes": 251, "total_data_size": 288432, "memory_usage": 298040, "flush_reason": "Manual Compaction"}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014308866762, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 285136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10135, "largest_seqno": 10568, "table_properties": {"data_size": 282582, "index_size": 590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6516, "raw_average_key_size": 18, "raw_value_size": 277311, "raw_average_value_size": 801, "num_data_blocks": 26, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014296, "oldest_key_time": 1764014296, "file_creation_time": 1764014308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 4367 microseconds, and 1436 cpu microseconds.
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.866793) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 285136 bytes OK
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.866807) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.868605) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.868616) EVENT_LOG_v1 {"time_micros": 1764014308868612, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.868630) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 285671, prev total WAL file size 285671, number of live WAL files 2.
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.869063) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(278KB)], [23(7781KB)]
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014308869120, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8253436, "oldest_snapshot_seqno": -1}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4169 keys, 6454472 bytes, temperature: kUnknown
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014308926643, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6454472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6425448, "index_size": 17522, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 104820, "raw_average_key_size": 25, "raw_value_size": 6348472, "raw_average_value_size": 1522, "num_data_blocks": 750, "num_entries": 4169, "num_filter_entries": 4169, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.927019) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6454472 bytes
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.928527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 112.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 7.6 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(51.6) write-amplify(22.6) OK, records in: 4681, records dropped: 512 output_compression: NoCompression
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.928574) EVENT_LOG_v1 {"time_micros": 1764014308928552, "job": 8, "event": "compaction_finished", "compaction_time_micros": 57637, "compaction_time_cpu_micros": 36557, "output_level": 6, "num_output_files": 1, "total_output_size": 6454472, "num_input_records": 4681, "num_output_records": 4169, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014308928950, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014308932274, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.868979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.932415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.932426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.932430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.932434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:28 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-19:58:28.932438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 19:58:29 compute-0 python3.9[135613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:58:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:29.473+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:29.543+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:29 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:30 compute-0 sudo[135767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhzinaoptdkfvqmxvqhneubnjvsrbbyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014309.8709836-34-46551516923381/AnsiballZ_setup.py'
Nov 24 19:58:30 compute-0 sudo[135767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:30.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:30.532+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:30 compute-0 python3.9[135769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:58:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:30 compute-0 sudo[135767]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:30 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:30 compute-0 ceph-mon[75677]: pgmap v445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:31 compute-0 sudo[135851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egujjhleiwioghoneicfjvdyoiwhiuay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014309.8709836-34-46551516923381/AnsiballZ_dnf.py'
Nov 24 19:58:31 compute-0 sudo[135851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:31.488+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:31.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:31 compute-0 python3.9[135853]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 19:58:31 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:32.481+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:32.524+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:32 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:32 compute-0 ceph-mon[75677]: pgmap v446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:32 compute-0 sudo[135851]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:33.529+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:33.533+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:33 compute-0 python3.9[136004]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:58:34 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 19:58:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 19:58:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5353 writes, 23K keys, 5353 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5353 writes, 746 syncs, 7.18 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5353 writes, 23K keys, 5353 commit groups, 1.0 writes per commit group, ingest: 18.31 MB, 0.03 MB/s
                                           Interval WAL: 5353 writes, 746 syncs, 7.18 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 19:58:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:34.523+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:34.564+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:35 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:35 compute-0 ceph-mon[75677]: pgmap v447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:35 compute-0 python3.9[136155]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 19:58:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:35.554+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:35.556+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:36 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:36 compute-0 python3.9[136305]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:58:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:36.510+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 431 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:36.541+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:37 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:37 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 431 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:37 compute-0 ceph-mon[75677]: pgmap v448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:37 compute-0 python3.9[136455]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 19:58:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:37.508+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:37 compute-0 sshd-session[135463]: Connection closed by 192.168.122.30 port 36128
Nov 24 19:58:37 compute-0 sshd-session[135460]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:58:37 compute-0 systemd-logind[795]: Session 43 logged out. Waiting for processes to exit.
Nov 24 19:58:37 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 24 19:58:37 compute-0 systemd[1]: session-43.scope: Consumed 6.841s CPU time.
Nov 24 19:58:37 compute-0 systemd-logind[795]: Removed session 43.
Nov 24 19:58:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:37.563+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:38 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:38.497+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:38.515+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:39 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:39 compute-0 ceph-mon[75677]: pgmap v449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:39.477+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:39.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 19:58:40 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 19:58:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:40.440+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:40.499+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:41 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:41 compute-0 ceph-mon[75677]: pgmap v450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:41.446+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:41.457+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 441 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:42 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:42 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 441 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:42.411+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:42.471+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:43 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:43 compute-0 ceph-mon[75677]: pgmap v451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:43.398+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:43 compute-0 sshd-session[136480]: Accepted publickey for zuul from 192.168.122.30 port 51622 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:58:43 compute-0 systemd-logind[795]: New session 44 of user zuul.
Nov 24 19:58:43 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 24 19:58:43 compute-0 sshd-session[136480]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:58:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:43.504+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:44 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:44.401+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:44.527+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:44 compute-0 python3.9[136633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:58:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:45.367+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:45 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:45 compute-0 ceph-mon[75677]: pgmap v452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:45.537+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:46 compute-0 sudo[136787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cizubftviibyhuardcuuvvxwoywschlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014325.6919193-50-198352974774249/AnsiballZ_file.py'
Nov 24 19:58:46 compute-0 sudo[136787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:46.345+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:46 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:46 compute-0 python3.9[136789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:58:46 compute-0 sudo[136787]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:46.519+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:47 compute-0 sudo[136939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnzkngzwzfuafiklppftpkyrzyngkhrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014326.6714447-50-46327760669751/AnsiballZ_file.py'
Nov 24 19:58:47 compute-0 sudo[136939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:47 compute-0 python3.9[136941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:58:47 compute-0 sudo[136939]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:47.391+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 446 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:47 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:47 compute-0 ceph-mon[75677]: pgmap v453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:47.554+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:48 compute-0 sudo[137091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjehqckhsnbwtwnvcejhqdlpmdzqxrhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014327.4963524-65-164908635455506/AnsiballZ_stat.py'
Nov 24 19:58:48 compute-0 sudo[137091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:48 compute-0 python3.9[137093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:48 compute-0 sudo[137091]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:48.363+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:48 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 446 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:48 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:48.598+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:48 compute-0 sudo[137214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvaefwgosvpaxrshxpnjgwseyndvingu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014327.4963524-65-164908635455506/AnsiballZ_copy.py'
Nov 24 19:58:48 compute-0 sudo[137214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:49 compute-0 python3.9[137216]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014327.4963524-65-164908635455506/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4896782f0550077d5e3c79ae8236687cb8f41972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:49 compute-0 sudo[137214]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:49.343+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:49.588+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:49 compute-0 ceph-mon[75677]: pgmap v454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:49 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:49 compute-0 sudo[137366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbbflfshfgfnxaivhrinsttkorywzynq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014329.3441436-65-117708995129791/AnsiballZ_stat.py'
Nov 24 19:58:49 compute-0 sudo[137366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:50 compute-0 python3.9[137368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:50 compute-0 sudo[137366]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:50.380+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:50 compute-0 sudo[137489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etnclehllfmuwrjzbtkmsyakcgxneijp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014329.3441436-65-117708995129791/AnsiballZ_copy.py'
Nov 24 19:58:50 compute-0 sudo[137489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:50.632+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:50 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:50 compute-0 python3.9[137491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014329.3441436-65-117708995129791/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=00a6c657d224172b6fb3646f320786a75f91736e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:50 compute-0 sudo[137489]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:50 compute-0 sshd-session[137516]: Connection closed by 159.65.46.209 port 57808
Nov 24 19:58:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:51.352+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:51 compute-0 sudo[137642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufdckfqkxhcybmqynuxviowantmoxlcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014331.009079-65-42564499254108/AnsiballZ_stat.py'
Nov 24 19:58:51 compute-0 sudo[137642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:51 compute-0 python3.9[137644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:51 compute-0 sudo[137642]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:51.613+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:51 compute-0 ceph-mon[75677]: pgmap v455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:51 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:51 compute-0 sudo[137765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csjqmltolddrvmyvcsddcucliawqagdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014331.009079-65-42564499254108/AnsiballZ_copy.py'
Nov 24 19:58:51 compute-0 sudo[137765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:52 compute-0 python3.9[137767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014331.009079-65-42564499254108/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=32cded0fa34ffb75c9fe3f92d141c284d99d8ad6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:52 compute-0 sudo[137765]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:52.354+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:52.580+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:52 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:52 compute-0 ceph-mon[75677]: pgmap v456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:52 compute-0 sudo[137917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehsefkpbsnbhllglrpfjkfgbkldndbsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014332.4595072-109-271123144688463/AnsiballZ_file.py'
Nov 24 19:58:52 compute-0 sudo[137917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:53 compute-0 python3.9[137919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:58:53 compute-0 sudo[137917]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:53.377+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:53.584+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:53 compute-0 sudo[138069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcfqaddpatymugemsfiqkgtghlpilymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014333.332441-109-87041343162375/AnsiballZ_file.py'
Nov 24 19:58:53 compute-0 sudo[138069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:53 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:53 compute-0 python3.9[138071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:58:53 compute-0 sudo[138069]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:54.342+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:58:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:54.591+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:54 compute-0 sudo[138221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwuexsvlmmhwrdmrettfyozrevfvywbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014334.218549-124-114101551415394/AnsiballZ_stat.py'
Nov 24 19:58:54 compute-0 sudo[138221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:54 compute-0 python3.9[138223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:54 compute-0 sudo[138221]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:54 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:54 compute-0 ceph-mon[75677]: pgmap v457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:55.295+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:55 compute-0 sudo[138344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajrlwvldmlczihulkdlbdungztdulklx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014334.218549-124-114101551415394/AnsiballZ_copy.py'
Nov 24 19:58:55 compute-0 sudo[138344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:55 compute-0 python3.9[138346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014334.218549-124-114101551415394/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d1ed54bd3cedc12976ba069849c1421b653df906 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:55 compute-0 sudo[138344]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:55.615+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:56 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:56 compute-0 sudo[138496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkhcfjkosfjtzqkuwmytuzfidszrbxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014335.790098-124-212787197234436/AnsiballZ_stat.py'
Nov 24 19:58:56 compute-0 sudo[138496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:56.308+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:56 compute-0 python3.9[138498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:56 compute-0 sudo[138496]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 451 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:58:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:56.636+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:56 compute-0 sudo[138619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jynpmxqsrbmcqecapysrlfctsxcpnyqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014335.790098-124-212787197234436/AnsiballZ_copy.py'
Nov 24 19:58:56 compute-0 sudo[138619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:57 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 451 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:58:57 compute-0 ceph-mon[75677]: pgmap v458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:57 compute-0 python3.9[138621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014335.790098-124-212787197234436/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=190c40a22266308c54ef409861b0d52094b929ea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:57 compute-0 sudo[138619]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:57.270+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:57.620+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:57 compute-0 sudo[138771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jypacdtigsxelxjyodgeuqdhhpaqmucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014337.2771678-124-158557999719850/AnsiballZ_stat.py'
Nov 24 19:58:57 compute-0 sudo[138771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:57 compute-0 python3.9[138773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:58:57 compute-0 sudo[138771]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:58 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:58.293+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:58 compute-0 sudo[138894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbyjvhsylohvdxaopthlmtryteskialb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014337.2771678-124-158557999719850/AnsiballZ_copy.py'
Nov 24 19:58:58 compute-0 sudo[138894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:58 compute-0 python3.9[138896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014337.2771678-124-158557999719850/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=50195378a08a6362a9cfd05666c80095820911b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:58:58 compute-0 sudo[138894]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:58.661+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:58:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:59 compute-0 ceph-mon[75677]: pgmap v459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:58:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:58:59.307+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:58:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:58:59 compute-0 sudo[139046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgaunjqduqnmrjijqzmnjlcilldgjnaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014338.8596187-168-41530153709327/AnsiballZ_file.py'
Nov 24 19:58:59 compute-0 sudo[139046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:58:59 compute-0 python3.9[139048]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:58:59 compute-0 sudo[139046]: pam_unix(sudo:session): session closed for user root
Nov 24 19:58:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:58:59.693+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:58:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:00 compute-0 sudo[139198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gklbyiknobdkbhqjzlvfrblfpyzxrypl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014339.7798586-168-272369891822551/AnsiballZ_file.py'
Nov 24 19:59:00 compute-0 sudo[139198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:00 compute-0 sshd-session[135458]: Connection closed by authenticating user operator 27.79.44.141 port 39962 [preauth]
Nov 24 19:59:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:00.309+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:00 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:00 compute-0 python3.9[139200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:00 compute-0 sudo[139198]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:00.726+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:00 compute-0 sudo[139350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjwhvwvlfkzfzdiaykbpoipjsofitipl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014340.6225846-183-117150645614640/AnsiballZ_stat.py'
Nov 24 19:59:00 compute-0 sudo[139350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:01 compute-0 python3.9[139352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:01 compute-0 sudo[139350]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:01.341+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:01 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:01 compute-0 ceph-mon[75677]: pgmap v460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 461 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:01 compute-0 sudo[139473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drhytereonaxktyvjddjizwvmoegsbbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014340.6225846-183-117150645614640/AnsiballZ_copy.py'
Nov 24 19:59:01 compute-0 sudo[139473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:01.731+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:01 compute-0 python3.9[139475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014340.6225846-183-117150645614640/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=21a12b83bf085cceb3ab7137908129ccecbc20d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:01 compute-0 sudo[139473]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:02.383+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:02 compute-0 sudo[139625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvqwpltnqkhojumpsmbxpurovytzmsqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014342.1006591-183-150261761445409/AnsiballZ_stat.py'
Nov 24 19:59:02 compute-0 sudo[139625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:02 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:02 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 461 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:02 compute-0 python3.9[139627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:02 compute-0 sudo[139625]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:02.759+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:03 compute-0 sudo[139748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czkvowfrccrjbbcjsoniqipamsnritku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014342.1006591-183-150261761445409/AnsiballZ_copy.py'
Nov 24 19:59:03 compute-0 sudo[139748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:03.383+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:03 compute-0 python3.9[139750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014342.1006591-183-150261761445409/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=190c40a22266308c54ef409861b0d52094b929ea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:03 compute-0 sudo[139748]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:03 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:03 compute-0 ceph-mon[75677]: pgmap v461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:03.718+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:03 compute-0 sudo[139900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnxegwpnnxtmpvdhjqttwweevywvmpxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014343.6287565-183-40729143918828/AnsiballZ_stat.py'
Nov 24 19:59:03 compute-0 sudo[139900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:04 compute-0 python3.9[139902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:04 compute-0 sudo[139900]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:04.342+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:04 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:04 compute-0 sudo[140023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-larmfdhhpfwvfddpndogndasdjugglxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014343.6287565-183-40729143918828/AnsiballZ_copy.py'
Nov 24 19:59:04 compute-0 sudo[140023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:04.754+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:04 compute-0 python3.9[140025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014343.6287565-183-40729143918828/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=df196eb85cf708a7b109309219fdc5ff4f0fd28d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:04 compute-0 sudo[140023]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:05.322+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:05 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:05 compute-0 ceph-mon[75677]: pgmap v462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:05.761+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:06 compute-0 sudo[140175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dchyflhnpfnikelckjpzivytowppmbxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014345.6600535-243-89508416078059/AnsiballZ_file.py'
Nov 24 19:59:06 compute-0 sudo[140175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:06 compute-0 python3.9[140177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:06.292+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:06 compute-0 sudo[140175]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 466 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:06 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:06.795+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:06 compute-0 sudo[140327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnwwyedsxcthxmngypnstjgeiriaqggj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014346.5414798-251-211557307828144/AnsiballZ_stat.py'
Nov 24 19:59:06 compute-0 sudo[140327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:07 compute-0 python3.9[140329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:07 compute-0 sudo[140327]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:07.331+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:07 compute-0 sudo[140450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-farchmdjcwfpsblrzeiynybnclydnpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014346.5414798-251-211557307828144/AnsiballZ_copy.py'
Nov 24 19:59:07 compute-0 sudo[140450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:07 compute-0 ceph-mon[75677]: pgmap v463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:07 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 466 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:07 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:07.750+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:07 compute-0 python3.9[140452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014346.5414798-251-211557307828144/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:07 compute-0 sudo[140450]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:08.338+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:08 compute-0 sudo[140602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkokzyakbfyeeafaubcnipcmmwlkljo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014348.0478563-267-123607737499586/AnsiballZ_file.py'
Nov 24 19:59:08 compute-0 sudo[140602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:08 compute-0 python3.9[140604]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:08 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:08 compute-0 sudo[140602]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:08.726+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:09 compute-0 sudo[140756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxzvfewdggiuujscpuxynkguxxhtkso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014348.857622-275-232529198531592/AnsiballZ_stat.py'
Nov 24 19:59:09 compute-0 sudo[140756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:09.339+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:09 compute-0 python3.9[140758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:09 compute-0 sudo[140756]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:09 compute-0 ceph-mon[75677]: pgmap v464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:09 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:09.726+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:10 compute-0 sudo[140879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyefepqpmysnpunxogdjtwygruhizntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014348.857622-275-232529198531592/AnsiballZ_copy.py'
Nov 24 19:59:10 compute-0 sudo[140879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:10 compute-0 python3.9[140881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014348.857622-275-232529198531592/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:10 compute-0 sudo[140879]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:10.333+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:10 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:10.707+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:10 compute-0 sudo[141031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adczmzuitbgkafxrztlspuvwmwmjdqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014350.5346677-291-238646061133350/AnsiballZ_file.py'
Nov 24 19:59:10 compute-0 sudo[141031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:11 compute-0 python3.9[141033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:11 compute-0 sudo[141031]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:11.328+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:11 compute-0 sudo[141183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmgcsffsjqeaihcssifjjlwgaazlqih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014351.318481-299-197108954210785/AnsiballZ_stat.py'
Nov 24 19:59:11 compute-0 sudo[141183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:11 compute-0 ceph-mon[75677]: pgmap v465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:11 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:11.721+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:11 compute-0 python3.9[141185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:11 compute-0 sudo[141183]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:12.302+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:12 compute-0 sudo[141306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhdwcmodfhyztrucxsrnqvzifrasabmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014351.318481-299-197108954210785/AnsiballZ_copy.py'
Nov 24 19:59:12 compute-0 sudo[141306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:12 compute-0 sshd-session[140682]: Connection closed by authenticating user root 27.79.44.141 port 55674 [preauth]
Nov 24 19:59:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:12 compute-0 python3.9[141308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014351.318481-299-197108954210785/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:12 compute-0 sudo[141306]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:12.747+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:12 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:12 compute-0 ceph-mon[75677]: pgmap v466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:13.266+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:13 compute-0 sudo[141460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdvzfhlgyfdycbldyatzsdxunpnhriag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014352.924803-315-232061240544303/AnsiballZ_file.py'
Nov 24 19:59:13 compute-0 sudo[141460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:13 compute-0 python3.9[141462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:13 compute-0 sudo[141460]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:13.708+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:13 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:14 compute-0 sudo[141612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxfsbpyqmnwmcjiqbknjpsdvmtuoduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014353.7428038-323-121561799130685/AnsiballZ_stat.py'
Nov 24 19:59:14 compute-0 sudo[141612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:14.290+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:14 compute-0 python3.9[141614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:14 compute-0 sudo[141612]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:14.682+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:14 compute-0 sudo[141735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyzopyftlqoholsekuifehmxzaqqzqsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014353.7428038-323-121561799130685/AnsiballZ_copy.py'
Nov 24 19:59:14 compute-0 sudo[141735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:14 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:14 compute-0 ceph-mon[75677]: pgmap v467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:15 compute-0 python3.9[141737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014353.7428038-323-121561799130685/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:15 compute-0 sudo[141735]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:15.278+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:15.649+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:15 compute-0 sudo[141887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmcmeovskmfwtzehvonysukxfwtvwind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014355.444062-339-222921214494658/AnsiballZ_file.py'
Nov 24 19:59:15 compute-0 sudo[141887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:16 compute-0 sshd-session[141309]: Invalid user support from 27.79.44.141 port 50038
Nov 24 19:59:16 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:16 compute-0 python3.9[141889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:16 compute-0 sudo[141887]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:16.275+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 471 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:16.603+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:16 compute-0 sudo[142039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhrtipfiehliqaapaemjvqpaaehaktbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014356.312673-347-217407460775079/AnsiballZ_stat.py'
Nov 24 19:59:16 compute-0 sudo[142039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:16 compute-0 python3.9[142041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:16 compute-0 sudo[142039]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:17 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 471 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:17 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:17 compute-0 ceph-mon[75677]: pgmap v468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:17.299+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:17 compute-0 sudo[142162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcnnebclbhxumccsnejyqnzkmmoentie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014356.312673-347-217407460775079/AnsiballZ_copy.py'
Nov 24 19:59:17 compute-0 sudo[142162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:17 compute-0 python3.9[142164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014356.312673-347-217407460775079/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:17 compute-0 sudo[142162]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:17.582+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:18 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:18 compute-0 sudo[142314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzzvtdiwddfbichawxqjjijzgzzgghxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014357.824564-363-243893436643174/AnsiballZ_file.py'
Nov 24 19:59:18 compute-0 sudo[142314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:18.276+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:18 compute-0 python3.9[142316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:18 compute-0 sudo[142314]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:18.587+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:19 compute-0 sudo[142466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apnxkjdbpdegynnsmailoefpinvxdwyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014358.6585572-371-8406698139574/AnsiballZ_stat.py'
Nov 24 19:59:19 compute-0 sudo[142466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:19 compute-0 sudo[142468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:19 compute-0 sudo[142468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:19 compute-0 sudo[142468]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:19 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:19 compute-0 ceph-mon[75677]: pgmap v469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:19 compute-0 sudo[142494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:59:19 compute-0 sudo[142494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:19 compute-0 sudo[142494]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:19 compute-0 sudo[142519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:19 compute-0 sudo[142519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:19 compute-0 sudo[142519]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:19.283+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:19 compute-0 python3.9[142469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:19 compute-0 sudo[142466]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:19 compute-0 sudo[142544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 19:59:19 compute-0 sudo[142544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:19.552+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:19 compute-0 sudo[142710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rukdokjsnuvyllomfkmvxjenfukmfbvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014358.6585572-371-8406698139574/AnsiballZ_copy.py'
Nov 24 19:59:19 compute-0 sudo[142710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:19 compute-0 sudo[142544]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 19:59:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 19:59:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:59:19 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:59:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 19:59:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:59:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 19:59:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:59:20 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 198260d2-8c67-48aa-8469-4a168f365e65 does not exist
Nov 24 19:59:20 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d6da7f58-2252-4ffc-adb5-3bb35684d7e3 does not exist
Nov 24 19:59:20 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3edbd6a3-5091-4b10-b7ef-f2c67b09d4a0 does not exist
Nov 24 19:59:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 19:59:20 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 19:59:20 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 19:59:20 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:59:20 compute-0 sudo[142723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:20 compute-0 sudo[142723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:20 compute-0 sudo[142723]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:20 compute-0 python3.9[142720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014358.6585572-371-8406698139574/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=de5200111fe33e8245893b12bd9b83df41ebfe0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:20 compute-0 sudo[142710]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:20 compute-0 sudo[142748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:59:20 compute-0 sudo[142748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:20 compute-0 sudo[142748]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:20.284+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:20 compute-0 sudo[142791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:20 compute-0 sudo[142791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:20 compute-0 sudo[142791]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:20 compute-0 sudo[142822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 19:59:20 compute-0 sudo[142822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:20 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 19:59:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:20.559+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:20 compute-0 sshd-session[136483]: Connection closed by 192.168.122.30 port 51622
Nov 24 19:59:20 compute-0 sshd-session[136480]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:59:20 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 24 19:59:20 compute-0 systemd[1]: session-44.scope: Consumed 29.924s CPU time.
Nov 24 19:59:20 compute-0 systemd-logind[795]: Session 44 logged out. Waiting for processes to exit.
Nov 24 19:59:20 compute-0 systemd-logind[795]: Removed session 44.
Nov 24 19:59:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:20 compute-0 podman[142886]: 2025-11-24 19:59:20.817278158 +0000 UTC m=+0.030514539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:59:20 compute-0 podman[142886]: 2025-11-24 19:59:20.937868064 +0000 UTC m=+0.151104435 container create 5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:59:20 compute-0 systemd[1]: Started libpod-conmon-5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce.scope.
Nov 24 19:59:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:59:21 compute-0 podman[142886]: 2025-11-24 19:59:21.053613609 +0000 UTC m=+0.266850020 container init 5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:59:21 compute-0 podman[142886]: 2025-11-24 19:59:21.06522983 +0000 UTC m=+0.278466231 container start 5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:59:21 compute-0 frosty_goldstine[142902]: 167 167
Nov 24 19:59:21 compute-0 systemd[1]: libpod-5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce.scope: Deactivated successfully.
Nov 24 19:59:21 compute-0 podman[142886]: 2025-11-24 19:59:21.07417761 +0000 UTC m=+0.287413981 container attach 5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 19:59:21 compute-0 conmon[142902]: conmon 5e93695873b3142027a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce.scope/container/memory.events
Nov 24 19:59:21 compute-0 podman[142886]: 2025-11-24 19:59:21.075177467 +0000 UTC m=+0.288413868 container died 5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldstine, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-089f722a29be87d92de9d0c58a29083080562037bdb963863730e7c8bc752720-merged.mount: Deactivated successfully.
Nov 24 19:59:21 compute-0 podman[142886]: 2025-11-24 19:59:21.187959703 +0000 UTC m=+0.401196064 container remove 5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:59:21 compute-0 systemd[1]: libpod-conmon-5e93695873b3142027a05d1a6f9724978bc98879b6bee47cbbb8c6205965e7ce.scope: Deactivated successfully.
Nov 24 19:59:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:21.313+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:21 compute-0 podman[142927]: 2025-11-24 19:59:21.414476979 +0000 UTC m=+0.053269930 container create ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hawking, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:59:21 compute-0 systemd[1]: Started libpod-conmon-ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231.scope.
Nov 24 19:59:21 compute-0 podman[142927]: 2025-11-24 19:59:21.393778844 +0000 UTC m=+0.032571815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:59:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:21 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:21 compute-0 ceph-mon[75677]: pgmap v470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aeb0f68f1d36364507860e0fe3c8e514e31e29c1f9ad72a2e746e3db11cba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aeb0f68f1d36364507860e0fe3c8e514e31e29c1f9ad72a2e746e3db11cba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aeb0f68f1d36364507860e0fe3c8e514e31e29c1f9ad72a2e746e3db11cba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aeb0f68f1d36364507860e0fe3c8e514e31e29c1f9ad72a2e746e3db11cba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aeb0f68f1d36364507860e0fe3c8e514e31e29c1f9ad72a2e746e3db11cba1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 481 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:21.596+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:21 compute-0 sshd-session[141309]: Connection closed by invalid user support 27.79.44.141 port 50038 [preauth]
Nov 24 19:59:21 compute-0 podman[142927]: 2025-11-24 19:59:21.903326794 +0000 UTC m=+0.542119765 container init ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 19:59:21 compute-0 podman[142927]: 2025-11-24 19:59:21.917086143 +0000 UTC m=+0.555879094 container start ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 24 19:59:21 compute-0 podman[142927]: 2025-11-24 19:59:21.921386918 +0000 UTC m=+0.560179969 container attach ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 19:59:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:22.311+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:22 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 481 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:22 compute-0 ceph-mon[75677]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'default.rgw.log' : 18 ])
Nov 24 19:59:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:22.643+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:23 compute-0 objective_hawking[142944]: --> passed data devices: 0 physical, 3 LVM
Nov 24 19:59:23 compute-0 objective_hawking[142944]: --> relative data size: 1.0
Nov 24 19:59:23 compute-0 objective_hawking[142944]: --> All data devices are unavailable
Nov 24 19:59:23 compute-0 systemd[1]: libpod-ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231.scope: Deactivated successfully.
Nov 24 19:59:23 compute-0 systemd[1]: libpod-ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231.scope: Consumed 1.159s CPU time.
Nov 24 19:59:23 compute-0 podman[142927]: 2025-11-24 19:59:23.130169915 +0000 UTC m=+1.768962906 container died ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:59:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:23.306+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-18aeb0f68f1d36364507860e0fe3c8e514e31e29c1f9ad72a2e746e3db11cba1-merged.mount: Deactivated successfully.
Nov 24 19:59:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:23.608+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:23 compute-0 ceph-mon[75677]: pgmap v471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:23 compute-0 podman[142927]: 2025-11-24 19:59:23.65784915 +0000 UTC m=+2.296642101 container remove ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hawking, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:59:23 compute-0 systemd[1]: libpod-conmon-ead37e458a2a1c71c1119ee60c2e94a18591f2b39c1c31242892f2cdc31ba231.scope: Deactivated successfully.
Nov 24 19:59:23 compute-0 sudo[142822]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:23 compute-0 sudo[142986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:23 compute-0 sudo[142986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:23 compute-0 sudo[142986]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:23 compute-0 sudo[143011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:59:23 compute-0 sudo[143011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:23 compute-0 sudo[143011]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:24 compute-0 sudo[143036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:24 compute-0 sudo[143036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:24 compute-0 sudo[143036]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:24 compute-0 sudo[143061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 19:59:24 compute-0 sudo[143061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:24.264+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_19:59:24
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr']
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:59:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 19:59:24 compute-0 podman[143123]: 2025-11-24 19:59:24.619272962 +0000 UTC m=+0.109202061 container create 7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:59:24 compute-0 podman[143123]: 2025-11-24 19:59:24.534136708 +0000 UTC m=+0.024065827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:59:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:24.629+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:24 compute-0 systemd[1]: Started libpod-conmon-7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107.scope.
Nov 24 19:59:24 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:24 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:24 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:59:24 compute-0 podman[143123]: 2025-11-24 19:59:24.903448145 +0000 UTC m=+0.393377254 container init 7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 19:59:24 compute-0 podman[143123]: 2025-11-24 19:59:24.91519137 +0000 UTC m=+0.405120469 container start 7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 19:59:24 compute-0 trusting_mccarthy[143140]: 167 167
Nov 24 19:59:24 compute-0 systemd[1]: libpod-7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107.scope: Deactivated successfully.
Nov 24 19:59:25 compute-0 podman[143123]: 2025-11-24 19:59:25.198765038 +0000 UTC m=+0.688694207 container attach 7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 19:59:25 compute-0 podman[143123]: 2025-11-24 19:59:25.199304793 +0000 UTC m=+0.689233912 container died 7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mccarthy, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 19:59:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:25.304+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9afef8a1bfd95311a40af28cf4d82b36189e9cc3ad01620755139f00bc028d-merged.mount: Deactivated successfully.
Nov 24 19:59:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:25 compute-0 podman[143123]: 2025-11-24 19:59:25.433256319 +0000 UTC m=+0.923185398 container remove 7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:59:25 compute-0 systemd[1]: libpod-conmon-7643fc8baaefe9749d45b3b26d545d33960d64319ee4ba50f233845fdaa9a107.scope: Deactivated successfully.
Nov 24 19:59:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:25.605+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:25 compute-0 podman[143164]: 2025-11-24 19:59:25.728378865 +0000 UTC m=+0.112349895 container create 27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 19:59:25 compute-0 podman[143164]: 2025-11-24 19:59:25.658343986 +0000 UTC m=+0.042315076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:59:25 compute-0 systemd[1]: Started libpod-conmon-27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211.scope.
Nov 24 19:59:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98a834ba874ee8c121d0875f639c09ee9725921429f8922917fd94cf7673e44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98a834ba874ee8c121d0875f639c09ee9725921429f8922917fd94cf7673e44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98a834ba874ee8c121d0875f639c09ee9725921429f8922917fd94cf7673e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98a834ba874ee8c121d0875f639c09ee9725921429f8922917fd94cf7673e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:25 compute-0 ceph-mon[75677]: pgmap v472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 19:59:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:25 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:25 compute-0 podman[143164]: 2025-11-24 19:59:25.975273928 +0000 UTC m=+0.359245038 container init 27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 19:59:25 compute-0 podman[143164]: 2025-11-24 19:59:25.986228682 +0000 UTC m=+0.370199742 container start 27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:59:26 compute-0 podman[143164]: 2025-11-24 19:59:26.015531858 +0000 UTC m=+0.399502928 container attach 27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:59:26 compute-0 sshd-session[143185]: Accepted publickey for zuul from 192.168.122.30 port 54394 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:59:26 compute-0 systemd-logind[795]: New session 45 of user zuul.
Nov 24 19:59:26 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 24 19:59:26 compute-0 sshd-session[143185]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:59:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:26.310+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:26.633+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 486 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
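[annotation] The SLOW_OPS line above is the monitor-side aggregation of the per-OSD slow-op reports from osd.0 and osd.1. A minimal sketch of pulling the same summary programmatically; the "checks" -> "SLOW_OPS" -> "summary" -> "message" field path follows the usual `ceph health detail -f json` layout and is an assumption, not something shown in this log:

    import json
    import subprocess

    def slow_ops_message():
        # Ask the monitors for machine-readable health detail.
        out = subprocess.run(
            ["ceph", "health", "detail", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        # Assumed layout: a top-level "checks" map keyed by check name.
        check = json.loads(out).get("checks", {}).get("SLOW_OPS")
        if check is None:
            return None  # health check not currently raised
        # Expected to read like the monitor line above, e.g.
        # "3 slow ops, oldest one blocked for 486 sec, daemons [osd.0,osd.1] have slow ops."
        return check["summary"]["message"]

    if __name__ == "__main__":
        print(slow_ops_message() or "no SLOW_OPS check active")
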
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]: {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:     "0": [
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:         {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "devices": [
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "/dev/loop3"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             ],
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_name": "ceph_lv0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_size": "21470642176",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "name": "ceph_lv0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "tags": {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cluster_name": "ceph",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.crush_device_class": "",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.encrypted": "0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osd_id": "0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.type": "block",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.vdo": "0"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             },
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "type": "block",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "vg_name": "ceph_vg0"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:         }
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:     ],
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:     "1": [
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:         {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "devices": [
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "/dev/loop4"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             ],
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_name": "ceph_lv1",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_size": "21470642176",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "name": "ceph_lv1",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "tags": {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cluster_name": "ceph",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.crush_device_class": "",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.encrypted": "0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osd_id": "1",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.type": "block",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.vdo": "0"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             },
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "type": "block",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "vg_name": "ceph_vg1"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:         }
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:     ],
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:     "2": [
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:         {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "devices": [
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "/dev/loop5"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             ],
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_name": "ceph_lv2",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_size": "21470642176",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "name": "ceph_lv2",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "tags": {
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.cluster_name": "ceph",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.crush_device_class": "",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.encrypted": "0",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osd_id": "2",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.type": "block",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:                 "ceph.vdo": "0"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             },
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "type": "block",
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:             "vg_name": "ceph_vg2"
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:         }
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]:     ]
Nov 24 19:59:26 compute-0 suspicious_murdock[143180]: }
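[annotation] By its shape, the JSON payload the suspicious_murdock container just printed is `ceph-volume lvm list --format json` output: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags inlined. A minimal parsing sketch under that assumption (function name and return shape are illustrative):

    import json

    def osd_block_devices(lvm_list_output):
        # Maps OSD id (a string key in the JSON) to its block LV path.
        data = json.loads(lvm_list_output)
        mapping = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    mapping[int(osd_id)] = lv["lv_path"]
        return mapping

    # Fed the payload above, this yields:
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1', 2: '/dev/ceph_vg2/ceph_lv2'}
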
Nov 24 19:59:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:26 compute-0 ceph-mon[75677]: pgmap v473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:26 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:26 compute-0 systemd[1]: libpod-27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211.scope: Deactivated successfully.
Nov 24 19:59:26 compute-0 podman[143164]: 2025-11-24 19:59:26.914274338 +0000 UTC m=+1.298245378 container died 27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:59:26 compute-0 sudo[143352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpgkxleyaaltusaqimbqgjfozrsqgssr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014366.3544967-22-222682400132364/AnsiballZ_file.py'
Nov 24 19:59:26 compute-0 sudo[143352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e98a834ba874ee8c121d0875f639c09ee9725921429f8922917fd94cf7673e44-merged.mount: Deactivated successfully.
Nov 24 19:59:27 compute-0 python3.9[143354]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:27.343+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:27 compute-0 sudo[143352]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:27 compute-0 podman[143164]: 2025-11-24 19:59:27.472712059 +0000 UTC m=+1.856683119 container remove 27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:59:27 compute-0 systemd[1]: libpod-conmon-27404e8a34d3a30cb9400e4cd0c541d47a2185a6b6aa32e949598d51d47b0211.scope: Deactivated successfully.
Nov 24 19:59:27 compute-0 sudo[143061]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:27 compute-0 sudo[143384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:27 compute-0 sudo[143384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:27 compute-0 sudo[143384]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:27.666+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:27 compute-0 sudo[143433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 19:59:27 compute-0 sudo[143433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:27 compute-0 sudo[143433]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:27 compute-0 sudo[143482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:27 compute-0 sudo[143482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:27 compute-0 sudo[143482]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:27 compute-0 ceph-mon[75677]: Health check update: 3 slow ops, oldest one blocked for 486 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:27 compute-0 sudo[143507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 19:59:27 compute-0 sudo[143507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:28 compute-0 sudo[143634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzgkheiuqgbdjswjralmdmudljcnharb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014367.6071918-34-263020326593263/AnsiballZ_stat.py'
Nov 24 19:59:28 compute-0 sudo[143634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:28.316+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.35771996 +0000 UTC m=+0.062515507 container create 2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:59:28 compute-0 systemd[1]: Started libpod-conmon-2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26.scope.
Nov 24 19:59:28 compute-0 python3.9[143641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.325712372 +0000 UTC m=+0.030508009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:59:28 compute-0 sudo[143634]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.462754997 +0000 UTC m=+0.167550584 container init 2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_varahamihira, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.474002549 +0000 UTC m=+0.178798076 container start 2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_varahamihira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.478636104 +0000 UTC m=+0.183431661 container attach 2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 19:59:28 compute-0 condescending_varahamihira[143665]: 167 167
Nov 24 19:59:28 compute-0 systemd[1]: libpod-2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26.scope: Deactivated successfully.
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.483059663 +0000 UTC m=+0.187855210 container died 2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 19:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7275aa861c225205b71d5b76ff35f9c841bb47a48ca4debbb7335c3f11e618c-merged.mount: Deactivated successfully.
Nov 24 19:59:28 compute-0 podman[143649]: 2025-11-24 19:59:28.539131137 +0000 UTC m=+0.243926694 container remove 2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 19:59:28 compute-0 systemd[1]: libpod-conmon-2ac007d231ab41a59a41eb29e91d8e71c839b6277793f7289b1c214f2e033e26.scope: Deactivated successfully.
Nov 24 19:59:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:28.700+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:28 compute-0 podman[143736]: 2025-11-24 19:59:28.752792929 +0000 UTC m=+0.062678512 container create 4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 19:59:28 compute-0 systemd[1]: Started libpod-conmon-4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b.scope.
Nov 24 19:59:28 compute-0 podman[143736]: 2025-11-24 19:59:28.722832575 +0000 UTC m=+0.032718178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 19:59:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 19:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc260798514e88360d3311f9ac7a26a4ab048f7d9b9e2d15b74a376698e878d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc260798514e88360d3311f9ac7a26a4ab048f7d9b9e2d15b74a376698e878d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc260798514e88360d3311f9ac7a26a4ab048f7d9b9e2d15b74a376698e878d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc260798514e88360d3311f9ac7a26a4ab048f7d9b9e2d15b74a376698e878d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 19:59:28 compute-0 podman[143736]: 2025-11-24 19:59:28.870052464 +0000 UTC m=+0.179938077 container init 4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_babbage, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 19:59:28 compute-0 podman[143736]: 2025-11-24 19:59:28.883412033 +0000 UTC m=+0.193297606 container start 4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 19:59:28 compute-0 podman[143736]: 2025-11-24 19:59:28.888336964 +0000 UTC m=+0.198222597 container attach 4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 19:59:28 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:28 compute-0 ceph-mon[75677]: pgmap v474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:29 compute-0 sudo[143830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veoguojrnyaubwrytmccrerxkzcoewad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014367.6071918-34-263020326593263/AnsiballZ_copy.py'
Nov 24 19:59:29 compute-0 sudo[143830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:29 compute-0 python3.9[143832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014367.6071918-34-263020326593263/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=fa21d6f168c8a77ce51e23081d832e1507915a8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:29 compute-0 sudo[143830]: pam_unix(sudo:session): session closed for user root
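[annotation] The copy task above reports checksum=fa21d6f168c8a77ce51e23081d832e1507915a8f for the deployed keyring; Ansible's copy module checksums are SHA-1 digests of the file content. A quick recomputation sketch (the path comes from the task itself; everything else is illustrative):

    import hashlib

    def sha1_of(path):
        # Stream the file so large files stay cheap to hash.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Should match the checksum logged by the task if the copy landed intact:
    # fa21d6f168c8a77ce51e23081d832e1507915a8f
    print(sha1_of("/var/lib/openstack/config/ceph/ceph.client.openstack.keyring"))
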
Nov 24 19:59:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:29.293+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:29.736+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:29 compute-0 sudo[144005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqgbrjyivzntnfoxoabtpiqhytuvann ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014369.4762096-34-153314271429934/AnsiballZ_stat.py'
Nov 24 19:59:29 compute-0 sudo[144005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]: {
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "osd_id": 2,
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "type": "bluestore"
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:     },
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "osd_id": 1,
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "type": "bluestore"
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:     },
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "osd_id": 0,
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:         "type": "bluestore"
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]:     }
Nov 24 19:59:29 compute-0 heuristic_babbage[143770]: }
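[annotation] This second JSON payload lines up with the `ceph-volume ... raw list --format json` run that cephadm launched at 19:59:27: a map of OSD fsid to its bluestore device and OSD id. A minimal cross-check sketch assuming that layout; the cluster-fsid filter mirrors the `ceph_fsid` field present in each record:

    import json

    # fsid taken from the cephadm invocation and the ceph_fsid fields above
    CLUSTER_FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"

    def raw_osd_devices(raw_list_output):
        # Keys of the JSON are OSD fsids; each record names the device and OSD id.
        devices = {}
        for rec in json.loads(raw_list_output).values():
            if rec.get("ceph_fsid") != CLUSTER_FSID:
                continue  # prepared for some other cluster; ignore
            devices[rec["osd_id"]] = rec["device"]
        return devices

    # Fed the payload above, this yields:
    # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  0: '/dev/mapper/ceph_vg0-ceph_lv0'}
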
Nov 24 19:59:29 compute-0 systemd[1]: libpod-4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b.scope: Deactivated successfully.
Nov 24 19:59:29 compute-0 systemd[1]: libpod-4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b.scope: Consumed 1.023s CPU time.
Nov 24 19:59:29 compute-0 podman[143736]: 2025-11-24 19:59:29.900402355 +0000 UTC m=+1.210287898 container died 4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 19:59:29 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc260798514e88360d3311f9ac7a26a4ab048f7d9b9e2d15b74a376698e878d-merged.mount: Deactivated successfully.
Nov 24 19:59:29 compute-0 podman[143736]: 2025-11-24 19:59:29.95578784 +0000 UTC m=+1.265673383 container remove 4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_babbage, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 19:59:29 compute-0 systemd[1]: libpod-conmon-4420952cf831df290def5bd4127e1d352fa56bdbf0059bb3e93b860b3983997b.scope: Deactivated successfully.
Nov 24 19:59:29 compute-0 sudo[143507]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 19:59:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:59:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 19:59:30 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:59:30 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2946d406-d5fa-4020-92b7-ba86c839abcd does not exist
Nov 24 19:59:30 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4937641c-347d-4276-89f8-937aa5d7919f does not exist
Nov 24 19:59:30 compute-0 python3.9[144009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:30 compute-0 sudo[144005]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:30 compute-0 sudo[144026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 19:59:30 compute-0 sudo[144026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:30 compute-0 sudo[144026]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:30 compute-0 sudo[144056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 19:59:30 compute-0 sudo[144056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 19:59:30 compute-0 sudo[144056]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:30.317+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:30 compute-0 sudo[144196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irbixvvblbarivngruxqonanmodplhfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014369.4762096-34-153314271429934/AnsiballZ_copy.py'
Nov 24 19:59:30 compute-0 sudo[144196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:30.705+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:30 compute-0 python3.9[144198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014369.4762096-34-153314271429934/.source.conf _original_basename=ceph.conf follow=False checksum=07c1ff2feab2636408faf41d182feb87d277c56d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:30 compute-0 sudo[144196]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:31 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:59:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 19:59:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:31 compute-0 ceph-mon[75677]: pgmap v475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:31 compute-0 sshd-session[143188]: Connection closed by 192.168.122.30 port 54394
Nov 24 19:59:31 compute-0 sshd-session[143185]: pam_unix(sshd:session): session closed for user zuul
Nov 24 19:59:31 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 24 19:59:31 compute-0 systemd[1]: session-45.scope: Consumed 3.608s CPU time.
Nov 24 19:59:31 compute-0 systemd-logind[795]: Session 45 logged out. Waiting for processes to exit.
Nov 24 19:59:31 compute-0 systemd-logind[795]: Removed session 45.
Nov 24 19:59:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:31.301+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:31.745+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:32 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:32.279+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:32.786+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:33 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:33 compute-0 ceph-mon[75677]: pgmap v476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:33.235+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:33.815+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:34 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:34.224+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
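
Annotation: the pg_autoscaler lines above are reproducible by hand. Each pool's "pg target" is its fraction of used space times its bias times a cluster-wide PG budget, then quantized to a power of two. With the 60 GiB cluster shown here (64411926528 bytes) and roughly 100 PGs per OSD across 3 OSDs, a 300-PG budget matches the logged numbers exactly. A minimal sketch under those assumptions (the real module also applies per-pool minimums and hysteresis before it actually changes pg_num):

    # pg_autoscaler_sketch.py - reproduce the "pg target ... quantized to ..."
    # lines above. Assumption: budget = mon_target_pg_per_osd (100) * 3 OSDs.

    def pg_target(usage_fraction: float, bias: float, budget: int = 300) -> float:
        return usage_fraction * bias * budget

    def quantize(target: float, current: int) -> int:
        # Round up to the nearest power of two; the module keeps the current
        # pg_num when the target is far smaller, which is why the tiny pools
        # above still show "quantized to 32 (current 32)".
        n = 1
        while n < target:
            n *= 2
        return n if target > current else current

    # '.mgr' pool from the log: 7.185749983720779e-06 of space, bias 1.0
    t = pg_target(7.185749983720779e-06, 1.0)
    print(t, quantize(t, 1))              # 0.0021557249951162337 -> 1
    # 'cephfs.cephfs.meta': 5.087256625643029e-07 of space, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635
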
Nov 24 19:59:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:34.832+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:35 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:35 compute-0 ceph-mon[75677]: pgmap v477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 19:59:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:35.243+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:35.794+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:36 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:36.256+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 491 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
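
Annotation: the mon aggregates the per-OSD get_health_metrics reports into the SLOW_OPS health check above; the "blocked for 491 sec" counter keeps climbing while the oldest op stays queued. To see which client and object are stuck, you can query the OSD's admin socket (ceph daemon osd.0 dump_ops_in_flight) or simply parse these journal lines. A sketch of the latter, fitted to this log's exact message format (not a stable Ceph interface):

    # slow_ops_parse.py - pull (osd, count, client, object) out of the
    # get_health_metrics journal lines shown above.
    import re

    PAT = re.compile(
        r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops, "
        r"oldest is osd_op\((client\.\d+\.\d+:\d+) \S+ \S+:::([^:]+):head"
    )

    line = ("osd.0 121 get_health_metrics reporting 1 slow ops, oldest is "
            "osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule"
            ":head [omap-get-vals in=16b] snapc 0=[] ondisk+read"
            "+known_if_redirected+supports_pool_eio e78)")
    m = PAT.search(line)
    if m:
        osd, count, client, obj = m.groups()
        print(osd, count, client, obj)
        # -> osd.0 1 client.14138.0:17 rbd_trash_purge_schedule
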
Nov 24 19:59:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 19:59:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:36.754+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:36 compute-0 sshd-session[144223]: Accepted publickey for zuul from 192.168.122.30 port 55736 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 19:59:36 compute-0 systemd-logind[795]: New session 46 of user zuul.
Nov 24 19:59:36 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 24 19:59:36 compute-0 sshd-session[144223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 19:59:37 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:37 compute-0 ceph-mon[75677]: Health check update: 3 slow ops, oldest one blocked for 491 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:37 compute-0 ceph-mon[75677]: pgmap v478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 19:59:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:37.285+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:37.765+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:38 compute-0 python3.9[144376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:59:38 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:38.320+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:38.767+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:39 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:39 compute-0 ceph-mon[75677]: pgmap v479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:39 compute-0 sudo[144530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysfugocgzmqzwyhthlzekorrufepydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014378.6697967-34-141072296495741/AnsiballZ_file.py'
Nov 24 19:59:39 compute-0 sudo[144530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:39.300+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:39 compute-0 python3.9[144532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:39 compute-0 sudo[144530]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:39.810+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:40 compute-0 sudo[144682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqubfdeozkiswhlnogtbzbenvovuvzyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014379.6640782-34-184369113571511/AnsiballZ_file.py'
Nov 24 19:59:40 compute-0 sudo[144682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:40 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:40.265+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:40 compute-0 python3.9[144684]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 19:59:40 compute-0 sudo[144682]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 19:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 19:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 19:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 19:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
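
Annotation: these load_schedules lines are the mgr rbd_support module periodically re-reading its schedule state from per-pool header objects. The same module keeps a trash-purge schedule in a per-pool rbd_trash_purge_schedule object, and the slow op on osd.0 above is an omap-get-vals against exactly that object in pool 'vms', so the stalled client.14138 is in all likelihood this mgr module's own periodic reload. A minimal sketch of issuing the same omap read with the librados Python binding, assuming a reachable ceph.conf and admin keyring (paths are assumptions for the cephadm layout):

    # read_schedule_omap.py - read the omap keys of rbd_trash_purge_schedule
    # in pool 'vms', i.e. the read the slow op above is stuck on.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        with cluster.open_ioctx("vms") as ioctx:
            with rados.ReadOpCtx() as op:
                it, ret = ioctx.get_omap_vals(op, "", "", 64)
                ioctx.operate_read_op(op, "rbd_trash_purge_schedule")
                for key, val in it:
                    print(key, val)
    finally:
        cluster.shutdown()
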
Nov 24 19:59:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:40.783+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:41 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:41 compute-0 ceph-mon[75677]: pgmap v480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:41 compute-0 python3.9[144834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 19:59:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:41.224+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 502 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
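
Annotation: the _set_new_cache_sizes line above is the mon's cache autotuner splitting its ~973 MiB budget between incremental osdmaps, full osdmaps, and the RocksDB cache. The logged allocations come out as whole MiB that sum back to just under cache_size, which a quick check confirms:

    # mon_cache_check.py - sanity-check the _set_new_cache_sizes numbers above.
    cache_size, inc_alloc, full_alloc, kv_alloc = (
        1020054731, 348127232, 348127232, 322961408)
    MiB = 1 << 20
    print(inc_alloc / MiB, full_alloc / MiB, kv_alloc / MiB)  # 332.0 332.0 308.0
    print(inc_alloc + full_alloc + kv_alloc <= cache_size)    # True
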
Nov 24 19:59:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:41.818+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:42 compute-0 sudo[144984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwfnnomkpaqltsclaipahwfwsfvrmfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014381.475724-57-169051414878805/AnsiballZ_seboolean.py'
Nov 24 19:59:42 compute-0 sudo[144984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:42 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:42 compute-0 ceph-mon[75677]: Health check update: 3 slow ops, oldest one blocked for 502 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:42.251+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:42 compute-0 python3.9[144986]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 19:59:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:42.835+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:43 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:43 compute-0 ceph-mon[75677]: pgmap v481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:43.250+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:43 compute-0 sudo[144984]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:43.833+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:44 compute-0 sudo[145140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbnztzizykjkedbptdmoigckbjomkheh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014383.7999363-67-3060767506236/AnsiballZ_setup.py'
Nov 24 19:59:44 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 19:59:44 compute-0 sudo[145140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:44 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:44.284+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:44 compute-0 python3.9[145142]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 19:59:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:44 compute-0 sudo[145140]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:44.878+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:45.258+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:45 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:45 compute-0 ceph-mon[75677]: pgmap v482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:45 compute-0 sudo[145224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmtklybrqcrjdlolxtnoyenjzlmahemv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014383.7999363-67-3060767506236/AnsiballZ_dnf.py'
Nov 24 19:59:45 compute-0 sudo[145224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:45 compute-0 python3.9[145226]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 19:59:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:45.829+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:46 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:46.292+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:46 compute-0 sudo[145224]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:46.825+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:47.298+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:47 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:47 compute-0 ceph-mon[75677]: pgmap v483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:47 compute-0 sudo[145377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wguoipkuzdmpaxeomlaqdrapqbkkdpoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014387.0508597-79-106595087853613/AnsiballZ_systemd.py'
Nov 24 19:59:47 compute-0 sudo[145377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:47.829+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:48 compute-0 python3.9[145379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 19:59:48 compute-0 sudo[145377]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:48.295+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:48 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:48 compute-0 ceph-mon[75677]: Health check update: 3 slow ops, oldest one blocked for 507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:48.782+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:48 compute-0 sudo[145532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nukfdksupysjjvbjavwuovqnwxlvrlnb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014388.404337-87-32167223936498/AnsiballZ_edpm_nftables_snippet.py'
Nov 24 19:59:48 compute-0 sudo[145532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:49 compute-0 python3[145534]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 24 19:59:49 compute-0 sudo[145532]: pam_unix(sudo:session): session closed for user root
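
Annotation: the edpm_nftables_snippet task above drops the listed VXLAN/Geneve rules into /var/lib/edpm-config/firewall/ovn.yaml, from which the later edpm_nftables steps render actual nft rules. As a rough stand-in for what that write amounts to, assuming the module stores the content essentially verbatim (the real osp.edpm module may merge with an existing snippet rather than overwrite):

    # write_ovn_snippet.py - approximate the snippet task above: persist the
    # VXLAN/Geneve rule list to the firewall snippet directory.
    import yaml  # PyYAML

    rules = [
        {"rule_name": "118 neutron vxlan networks",
         "rule": {"proto": "udp", "dport": 4789}},
        {"rule_name": "119 neutron geneve networks",
         "rule": {"proto": "udp", "dport": 6081, "state": ["UNTRACKED"]}},
        {"rule_name": "120 neutron geneve networks no conntrack",
         "rule": {"proto": "udp", "dport": 6081, "table": "raw",
                  "chain": "OUTPUT", "jump": "NOTRACK",
                  "action": "append", "state": []}},
        {"rule_name": "121 neutron geneve networks no conntrack",
         "rule": {"proto": "udp", "dport": 6081, "table": "raw",
                  "chain": "PREROUTING", "jump": "NOTRACK",
                  "action": "append", "state": []}},
    ]

    with open("/var/lib/edpm-config/firewall/ovn.yaml", "w") as f:
        yaml.safe_dump(rules, f, default_flow_style=False)
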
Nov 24 19:59:49 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:49 compute-0 ceph-mon[75677]: pgmap v484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:49.338+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:49.778+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:49 compute-0 sudo[145684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofywtedjqdsjrbsnpcimtdigeebxtbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014389.566593-96-162134821609547/AnsiballZ_file.py'
Nov 24 19:59:49 compute-0 sudo[145684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:50 compute-0 python3.9[145686]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:50 compute-0 sudo[145684]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:50.289+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:50 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:50.813+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:50 compute-0 sudo[145836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaagbssjdvzqiundgnhvooovdpmzgcbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014390.402173-104-80736440940967/AnsiballZ_stat.py'
Nov 24 19:59:50 compute-0 sudo[145836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:51 compute-0 python3.9[145838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:51 compute-0 sudo[145836]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:51 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:51 compute-0 ceph-mon[75677]: pgmap v485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:51.338+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:51 compute-0 sudo[145914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieyefruwssqpqzbluoqviezuqppfqcsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014390.402173-104-80736440940967/AnsiballZ_file.py'
Nov 24 19:59:51 compute-0 sudo[145914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 19:59:51 compute-0 python3.9[145916]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:51 compute-0 sudo[145914]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:51.819+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:52 compute-0 sudo[146066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhohxyebhixekyhaohifhnzbsxtbktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014391.9234052-116-180263105197787/AnsiballZ_stat.py'
Nov 24 19:59:52 compute-0 sudo[146066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:52.309+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:52 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:52 compute-0 python3.9[146068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:52 compute-0 sudo[146066]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:52.774+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:52 compute-0 sudo[146144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hquhmutzpaworehbvwvqlcdhubavwedd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014391.9234052-116-180263105197787/AnsiballZ_file.py'
Nov 24 19:59:52 compute-0 sudo[146144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:53 compute-0 python3.9[146146]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jyqo3l5x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:53 compute-0 sudo[146144]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:53 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 24 19:59:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:53 compute-0 ceph-mon[75677]: pgmap v486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:53.356+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:53 compute-0 sudo[146296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwoqwhdyzskysmtwbxzbqvrkqubftysp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014393.2227912-128-122882080262298/AnsiballZ_stat.py'
Nov 24 19:59:53 compute-0 sudo[146296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:53.751+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:53 compute-0 python3.9[146298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:53 compute-0 sudo[146296]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:54 compute-0 sudo[146374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeezhyncbqsouatcqelodmmnbixeutma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014393.2227912-128-122882080262298/AnsiballZ_file.py'
Nov 24 19:59:54 compute-0 sudo[146374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:54 compute-0 python3.9[146376]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:54 compute-0 sudo[146374]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:54.375+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 19:59:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:54.754+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:55 compute-0 sudo[146526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twejdltxrxlwfbzuoncgreloikxjbvxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014394.5333433-141-143723037236847/AnsiballZ_command.py'
Nov 24 19:59:55 compute-0 sudo[146526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:55 compute-0 python3.9[146528]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 19:59:55 compute-0 sudo[146526]: pam_unix(sudo:session): session closed for user root
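
The firewall role begins by snapshotting the live ruleset in JSON (-j) so later tasks can compare against it. The command emits a single JSON object; roughly the following shape (shape only, not this node's actual ruleset):

    nft -j list ruleset
    {"nftables": [{"metainfo": {"version": "...", "json_schema_version": 1}},
                  {"table": {"family": "inet", "name": "...", ...}}, ...]}
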
Nov 24 19:59:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:55.332+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:55 compute-0 ceph-mon[75677]: pgmap v487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:55.743+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:56 compute-0 sudo[146679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiqnrfvxghjryreftuptylkrnbbcanva ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014395.5340617-149-202350250217146/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 19:59:56 compute-0 sudo[146679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:56 compute-0 python3[146681]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 19:59:56 compute-0 sudo[146679]: pam_unix(sudo:session): session closed for user root
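
edpm_nftables_from_files is pointed at the snippet directory staged by earlier roles; judging from the tasks that follow, its aggregated rule set feeds the jump/flush/chain/rule templates rendered into /etc/nftables. To inspect what it consumed on this node (file names vary per deployment):

    ls -l /var/lib/edpm-config/firewall
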
Nov 24 19:59:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:56.353+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 511 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
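
SLOW_OPS is the monitor's cluster-wide rollup of the per-OSD complaints: an op counts as slow once it has been in flight longer than osd_op_complaint_time (30 s by default), and the health message tracks the oldest one, here blocked for 511 s across osd.0 and osd.1, matching the 2 active+clean+laggy PGs in the pgmap lines. To confirm the state from an admin node:

    ceph health detail   # expands SLOW_OPS to the affected daemons
    ceph status          # same pg summary as the pgmap lines above
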
Nov 24 19:59:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:56.756+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:56 compute-0 sudo[146831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykdxjbabctagvykhgmkksrnbjruscnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014396.5281506-157-278000304183775/AnsiballZ_stat.py'
Nov 24 19:59:56 compute-0 sudo[146831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:57 compute-0 python3.9[146833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:57 compute-0 sudo[146831]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:57 compute-0 ceph-mon[75677]: Health check update: 3 slow ops, oldest one blocked for 511 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 19:59:57 compute-0 ceph-mon[75677]: pgmap v488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:57.393+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:57.720+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:57 compute-0 sudo[146956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljafsdezemcsjhsbezuhvrnccwyyzlro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014396.5281506-157-278000304183775/AnsiballZ_copy.py'
Nov 24 19:59:57 compute-0 sudo[146956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:58 compute-0 python3.9[146958]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014396.5281506-157-278000304183775/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:58 compute-0 sudo[146956]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:58.422+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:58.719+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:58 compute-0 sudo[147108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswovmxnpxettckjtacpoknbdldlxkgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014398.3274183-172-128455801346013/AnsiballZ_stat.py'
Nov 24 19:59:58 compute-0 sudo[147108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:59 compute-0 python3.9[147110]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 19:59:59 compute-0 sudo[147108]: pam_unix(sudo:session): session closed for user root
Nov 24 19:59:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:59 compute-0 ceph-mon[75677]: pgmap v489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 19:59:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T19:59:59.462+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 19:59:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 19:59:59 compute-0 sudo[147233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhyliedkyklgzgktiqmotjfjuohvlqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014398.3274183-172-128455801346013/AnsiballZ_copy.py'
Nov 24 19:59:59 compute-0 sudo[147233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 19:59:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T19:59:59.682+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 19:59:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 19:59:59 compute-0 python3.9[147235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014398.3274183-172-128455801346013/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 19:59:59 compute-0 sudo[147233]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:00 compute-0 sudo[147385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnryenljnbuokpwfaowjeezuitfinakp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014399.9377115-187-237211193404478/AnsiballZ_stat.py'
Nov 24 20:00:00 compute-0 sudo[147385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:00.415+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:00 compute-0 python3.9[147387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:00 compute-0 sudo[147385]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:00.678+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:00 compute-0 sudo[147510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epjolvjwagxpfnttcudhoukovomfdltb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014399.9377115-187-237211193404478/AnsiballZ_copy.py'
Nov 24 20:00:00 compute-0 sudo[147510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:01 compute-0 python3.9[147512]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014399.9377115-187-237211193404478/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:01 compute-0 sudo[147510]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:01.441+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:01 compute-0 ceph-mon[75677]: pgmap v490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 521 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:01.653+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:01 compute-0 sudo[147664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snmtahriblbeqcytlnlpbtizhlmirjjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014401.349048-202-16002272439858/AnsiballZ_stat.py'
Nov 24 20:00:01 compute-0 sudo[147664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:01 compute-0 python3.9[147666]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:01 compute-0 sudo[147664]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:02 compute-0 sudo[147789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwxoglswkewowvowoxlcrkwtqxlaiccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014401.349048-202-16002272439858/AnsiballZ_copy.py'
Nov 24 20:00:02 compute-0 sudo[147789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:02.450+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:02 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 521 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:02.676+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:02 compute-0 python3.9[147791]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014401.349048-202-16002272439858/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:02 compute-0 sudo[147789]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:02 compute-0 sshd-session[147536]: Connection closed by authenticating user root 27.79.44.141 port 34616 [preauth]
Nov 24 20:00:03 compute-0 sudo[147941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjavujhslbxzzvqrfuracsrsfwtwqbtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014402.8887904-217-192065906466875/AnsiballZ_stat.py'
Nov 24 20:00:03 compute-0 sudo[147941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:03 compute-0 ceph-mon[75677]: pgmap v491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:03.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:03 compute-0 python3.9[147943]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:03 compute-0 sudo[147941]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:03.662+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:04 compute-0 sudo[148066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awqguobgnalqnhtplfkzdloyqmrgafmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014402.8887904-217-192065906466875/AnsiballZ_copy.py'
Nov 24 20:00:04 compute-0 sudo[148066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:04 compute-0 python3.9[148068]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014402.8887904-217-192065906466875/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:04 compute-0 sudo[148066]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:04.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:04.680+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:04 compute-0 sudo[148218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apzrfdvuvoqvwtzbcmoauwtrwhejauba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014404.529695-232-204980454969089/AnsiballZ_file.py'
Nov 24 20:00:04 compute-0 sudo[148218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:05 compute-0 python3.9[148220]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:05 compute-0 sudo[148218]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:05.477+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:05 compute-0 ceph-mon[75677]: pgmap v492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:05.653+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:05 compute-0 sudo[148370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvflhyqrupsjceagudlsdedsgipodtru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014405.4155202-240-149169664622603/AnsiballZ_command.py'
Nov 24 20:00:05 compute-0 sudo[148370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:05 compute-0 python3.9[148372]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:06 compute-0 sudo[148370]: pam_unix(sudo:session): session closed for user root
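
Before anything is applied, the five fragments are concatenated in load order and run through nft in check-only mode: -c parses and validates the combined ruleset without committing any of it. The logged task is equivalent to:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, nothing applied
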
Nov 24 20:00:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:06.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:06.644+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:06 compute-0 sudo[148526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnzxnxygzferggdnmmkjtqcleecwfhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014406.239054-248-75378773824794/AnsiballZ_blockinfile.py'
Nov 24 20:00:06 compute-0 sudo[148526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:06 compute-0 python3.9[148528]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:06 compute-0 sudo[148526]: pam_unix(sudo:session): session closed for user root
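
The blockinfile task pins the boot-time include order into /etc/sysconfig/nftables.conf (the file nftables.service loads on startup), guarded by the managed-block markers and validated with nft -c -f %s before the write is committed. Per the logged parameters, the managed block ends up as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
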
Nov 24 20:00:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:07.456+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 526 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:07 compute-0 ceph-mon[75677]: pgmap v493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:07 compute-0 sudo[148678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubdltljivtmnsgaayclcecakxckxijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014407.234688-257-173855565223081/AnsiballZ_command.py'
Nov 24 20:00:07 compute-0 sudo[148678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:07.660+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:07 compute-0 python3.9[148680]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:07 compute-0 sudo[148678]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:08.411+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:08 compute-0 sudo[148831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dspktzcubsgaykitbxutuueigbhqcbyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014408.0632262-265-81902571826687/AnsiballZ_stat.py'
Nov 24 20:00:08 compute-0 sudo[148831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:08 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 526 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:08.639+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:08 compute-0 python3.9[148833]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:00:08 compute-0 sudo[148831]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:09 compute-0 sudo[148985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exozbwmomizcnjkatwubmoqokunpmpsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014408.9659815-273-154544116347350/AnsiballZ_command.py'
Nov 24 20:00:09 compute-0 sudo[148985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:09.441+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:09 compute-0 ceph-mon[75677]: pgmap v494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:09 compute-0 python3.9[148987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:09 compute-0 sudo[148985]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:09.635+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:10 compute-0 sudo[149140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqfwgajctdfdsoexrtljhqtmetkpkdsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014409.8088715-281-53803262171678/AnsiballZ_file.py'
Nov 24 20:00:10 compute-0 sudo[149140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:10 compute-0 python3.9[149142]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:10 compute-0 sudo[149140]: pam_unix(sudo:session): session closed for user root
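
This deletion closes the change-detection loop visible across the run: the copy task that rewrote edpm-rules.nft touched edpm-rules.nft.changed (20:00:05), the stat task found it (20:00:08), the runtime reload fired (20:00:09), and the marker is now removed so an unchanged ruleset on the next run skips the reload. Note the reload pipes only the flush/rules/update-jumps fragments (the chains were created separately at 20:00:07); since nft -f reads the whole stream as one transaction, the flush and the re-add of rules commit atomically, with no window where the chains exist but are empty. A hypothetical condensed form of the logged tasks:

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -   # one atomic transaction
        rm -f /etc/nftables/edpm-rules.nft.changed           # re-arm for the next run
    fi
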
Nov 24 20:00:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:10.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:10.620+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:11.462+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:11 compute-0 ceph-mon[75677]: pgmap v495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:11.655+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:11 compute-0 python3.9[149292]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 20:00:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:12.464+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:12.626+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:12 compute-0 sudo[149443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doyswetwgmdtojiroziivrsgcrqfsris ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014412.451421-321-274176190482440/AnsiballZ_command.py'
Nov 24 20:00:12 compute-0 sudo[149443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:13 compute-0 python3.9[149445]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:13 compute-0 ovs-vsctl[149446]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 20:00:13 compute-0 sudo[149443]: pam_unix(sudo:session): session closed for user root
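[editor's note] The ovs-vsctl call above registers this host as an OVN chassis: everything ovn-controller needs (integration bridge, bridge and chassis-MAC mappings, Geneve encapsulation endpoint, and the southbound DB address) is written as external_ids keys on the Open_vSwitch record. A minimal Python sketch of the same write, using a subset of the logged keys with their logged values (these are per-node and would differ elsewhere):

    import subprocess

    # external_ids keys copied from the logged ovs-vsctl invocation.
    external_ids = {
        "hostname": "compute-0.ctlplane.example.com",
        "ovn-bridge": "br-int",
        "ovn-bridge-mappings": "datacentre:br-ex",
        "ovn-encap-ip": "172.19.0.100",
        "ovn-encap-type": "geneve",
        "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
        "ovn-monitor-all": "True",
    }

    args = ["ovs-vsctl", "set", "open", "."]
    args += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    subprocess.run(args, check=True)  # raises CalledProcessError if ovs-vsctl fails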
Nov 24 20:00:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:13.432+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:13 compute-0 ceph-mon[75677]: pgmap v496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:13.624+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:13 compute-0 sudo[149596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdtgvucsgnusiphzbfwflxhpwxqhzzre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014413.2801964-330-49525550241492/AnsiballZ_command.py'
Nov 24 20:00:13 compute-0 sudo[149596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:13 compute-0 python3.9[149598]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:13 compute-0 sudo[149596]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:14.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:14 compute-0 sudo[149751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsyjcifmllfhlsvreevpctxigspjbera ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014414.1457658-338-251345223462738/AnsiballZ_command.py'
Nov 24 20:00:14 compute-0 sudo[149751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:14.624+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:14 compute-0 python3.9[149753]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:14 compute-0 ovs-vsctl[149754]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 20:00:14 compute-0 sudo[149751]: pam_unix(sudo:session): session closed for user root
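[editor's note] Taken together, the `ovs-vsctl show | grep -q "Manager"` probe and the `create Manager` call are a check-then-create pattern: the ptcp:6640 listener is only created when no Manager record exists yet, which keeps the play idempotent. The same logic sketched in Python, assuming (as the playbook does) that a missing Manager record is the only reason the probe fails:

    import subprocess

    # Probe: is there already a Manager record in ovsdb?
    show = subprocess.run(["ovs-vsctl", "show"],
                          capture_output=True, text=True, check=True)
    if "Manager" not in show.stdout:
        # Mirror the logged command: create a passive TCP listener on loopback
        # and attach it to the Open_vSwitch table's manager_options.
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager",
             "--", "create", "Manager", 'target="ptcp:6640:127.0.0.1"',
             "--", "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True,
        )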
Nov 24 20:00:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:15.483+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:15 compute-0 python3.9[149904]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:00:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:15.657+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:15 compute-0 ceph-mon[75677]: pgmap v497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:16 compute-0 sudo[150056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohqxzgxmatirtloqyxqsbnogcttfbeym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014415.9694371-355-66282685482552/AnsiballZ_file.py'
Nov 24 20:00:16 compute-0 sudo[150056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:16.446+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:16 compute-0 python3.9[150058]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:00:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 531 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:16 compute-0 sudo[150056]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:16.643+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:16 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 531 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:16 compute-0 ceph-mon[75677]: pgmap v498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
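[editor's note] The SLOW_OPS update says the oldest of 20 blocked ops has been stuck for 531 seconds across osd.0 and osd.1; per the OSD lines these are an omap-get-vals read against the vms pool and 19 delayed ops (including a watch ping) against default.rgw.log. On a 60 GiB cluster holding 456 KiB of data this looks more like stalled client/OSD sessions than capacity pressure, though the log alone cannot prove that. The same health detail can be pulled programmatically; a sketch, assuming a reachable cluster and an admin keyring:

    import json
    import subprocess

    # Machine-readable health report from the monitors.
    out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                         capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)

    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "20 slow ops, oldest one blocked for 531 sec, ..."
        print(slow["summary"]["message"])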
Nov 24 20:00:17 compute-0 sudo[150208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfemmhyjlwwcvbjzacqcsscxcyesxpsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014416.8042214-363-99320590325763/AnsiballZ_stat.py'
Nov 24 20:00:17 compute-0 sudo[150208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:17 compute-0 python3.9[150210]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:17 compute-0 sudo[150208]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:17.488+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:17.622+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:17 compute-0 sudo[150286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcnwjvzgzexlannitbqlytnberuusnrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014416.8042214-363-99320590325763/AnsiballZ_file.py'
Nov 24 20:00:17 compute-0 sudo[150286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:17 compute-0 python3.9[150288]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:00:17 compute-0 sudo[150286]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:18 compute-0 sudo[150438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isbqcohzoetzpekbfoeuauzgwjjdqlwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014418.1304858-363-49496483882888/AnsiballZ_stat.py'
Nov 24 20:00:18 compute-0 sudo[150438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:18.495+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:18.655+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:18 compute-0 python3.9[150440]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:18 compute-0 sudo[150438]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:18 compute-0 ceph-mon[75677]: pgmap v499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:19 compute-0 sudo[150516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcmvbrlqytbuephupspwhsacwvluyboe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014418.1304858-363-49496483882888/AnsiballZ_file.py'
Nov 24 20:00:19 compute-0 sudo[150516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:19 compute-0 python3.9[150518]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:00:19 compute-0 sudo[150516]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:19.528+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:19.635+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:20 compute-0 sudo[150668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uahbgmdvjpfzkkqsfwqhtpevbeclfnhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014419.4318323-386-182323212612742/AnsiballZ_file.py'
Nov 24 20:00:20 compute-0 sudo[150668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:20 compute-0 python3.9[150670]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:20 compute-0 sudo[150668]: pam_unix(sudo:session): session closed for user root
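[editor's note] In the file task above, mode=420 is the integer form of 0644: YAML parses an unquoted leading-zero literal as octal, so the module receives decimal 420 (== 0o644) and /etc/systemd/system-preset still gets rw-r--r--. The classic failure is writing mode: 644 without the leading zero, which becomes 0o1204; quoting the mode ("0644") sidesteps the whole issue. The arithmetic:

    # 0o644 (rw-r--r--) is 6*64 + 4*8 + 4 = 420 decimal, which is why the
    # logged mode=420 still yields the intended permissions.
    assert 0o644 == 420
    print(oct(420))  # -> 0o644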
Nov 24 20:00:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:20.562+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:20.593+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:20 compute-0 sudo[150820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxuxwxgekshjgppbcjrfxxblqukqiwpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014420.609538-394-221957821089711/AnsiballZ_stat.py'
Nov 24 20:00:20 compute-0 sudo[150820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:21 compute-0 ceph-mon[75677]: pgmap v500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:21 compute-0 python3.9[150822]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:21 compute-0 sudo[150820]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:21 compute-0 sudo[150898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umfhwcjxncxbpdsjguqkjoacuexjbvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014420.609538-394-221957821089711/AnsiballZ_file.py'
Nov 24 20:00:21 compute-0 sudo[150898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 541 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:21.573+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:21.580+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:21 compute-0 python3.9[150900]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:21 compute-0 sudo[150898]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 541 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:22 compute-0 sudo[151050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qweirqudbjpxuuvwghhuxdichaqyyzui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014421.8640432-406-21298708315831/AnsiballZ_stat.py'
Nov 24 20:00:22 compute-0 sudo[151050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:22 compute-0 python3.9[151052]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:22 compute-0 sudo[151050]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:22.547+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:22.557+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:22 compute-0 sudo[151128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgurzemdeltkskjtoxotxawyuxtsyoxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014421.8640432-406-21298708315831/AnsiballZ_file.py'
Nov 24 20:00:22 compute-0 sudo[151128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:22 compute-0 python3.9[151130]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:22 compute-0 sudo[151128]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:23 compute-0 ceph-mon[75677]: pgmap v501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:23 compute-0 sudo[151280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uziaungfknojhunaxmadftsmlxnbipsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014423.1277454-418-186430116658777/AnsiballZ_systemd.py'
Nov 24 20:00:23 compute-0 sudo[151280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:23.517+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:23.543+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:23 compute-0 python3.9[151282]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:00:23 compute-0 systemd[1]: Reloading.
Nov 24 20:00:23 compute-0 systemd-rc-local-generator[151311]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:00:23 compute-0 systemd-sysv-generator[151314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
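[editor's note] The systemd module call above (daemon_reload=True, enabled=True, state=started) is what triggers the "Reloading." line, and the rc-local and sysv-generator messages are routine generator output during that reload, not errors. An equivalent imperative sequence, sketched with subprocess (unit name taken from the log):

    import subprocess

    # daemon-reload first so the freshly installed unit and preset are seen,
    # then enable and start the service in one step.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now",
                    "edpm-container-shutdown.service"], check=True)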
Nov 24 20:00:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:24 compute-0 sudo[151280]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:00:24
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', '.rgw.root']
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
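[editor's note] This mgr burst is the balancer's periodic pass: mode upmap, a misplaced-PG budget of 5%, and "prepared 0/10 changes" meaning the evaluated pools are already balanced, so no upmap items were queued. A quick programmatic check, assuming the balancer module is enabled:

    import json
    import subprocess

    # Reports the balancer mode, whether it is active, and the last run.
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True)
    print(json.loads(out.stdout))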
Nov 24 20:00:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:24.498+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:24.521+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:24 compute-0 sudo[151469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkeojxnunuusgobczsnmxabzgnbjxalr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014424.31207-426-16145255823415/AnsiballZ_stat.py'
Nov 24 20:00:24 compute-0 sudo[151469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:24 compute-0 python3.9[151471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:24 compute-0 sudo[151469]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:25 compute-0 ceph-mon[75677]: pgmap v502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:25 compute-0 sudo[151547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhpqfpfmetomplcqnjjgotsotbvkmulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014424.31207-426-16145255823415/AnsiballZ_file.py'
Nov 24 20:00:25 compute-0 sudo[151547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:25 compute-0 python3.9[151549]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:25 compute-0 sudo[151547]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:25.487+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:25.538+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:25 compute-0 sudo[151699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnnhncarikcatcxzrmsbgstyafasthp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014425.5686843-438-156771402107604/AnsiballZ_stat.py'
Nov 24 20:00:25 compute-0 sudo[151699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:26 compute-0 python3.9[151701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:26 compute-0 sudo[151699]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:26.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:26 compute-0 sudo[151777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zagazisehzoraibhvlgnizsqreldtrlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014425.5686843-438-156771402107604/AnsiballZ_file.py'
Nov 24 20:00:26 compute-0 sudo[151777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:26.578+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:26 compute-0 python3.9[151779]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:26 compute-0 sudo[151777]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 546 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:27 compute-0 ceph-mon[75677]: pgmap v503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:27.431+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:27 compute-0 sudo[151929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcjvfvmrmbkuyqtpbuxsrfelqjwvopbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014427.0790272-450-179410725732441/AnsiballZ_systemd.py'
Nov 24 20:00:27 compute-0 sudo[151929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:27.621+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:27 compute-0 python3.9[151931]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:00:27 compute-0 systemd[1]: Reloading.
Nov 24 20:00:27 compute-0 systemd-sysv-generator[151965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:00:27 compute-0 systemd-rc-local-generator[151960]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:00:28 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 546 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:28 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 20:00:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 20:00:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 20:00:28 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 20:00:28 compute-0 sudo[151929]: pam_unix(sudo:session): session closed for user root
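[editor's note] netns-placeholder is a oneshot unit (description "Create netns directory"): it populates /run/netns, plausibly by adding a placeholder network namespace (the transient run-netns-placeholder.mount that appears and is immediately deactivated is consistent with an `ip netns add placeholder`-style bind mount, though the unit body is not in this log), then exits, hence "Deactivated successfully" followed by "Finished". A oneshot's outcome can still be checked after it has exited; a sketch:

    import subprocess

    # Result=success means the last run completed cleanly; a plain oneshot
    # returns to ActiveState=inactive once it has finished.
    out = subprocess.run(
        ["systemctl", "show", "-p", "Result,ActiveState",
         "netns-placeholder.service"],
        capture_output=True, text=True, check=True)
    print(out.stdout)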
Nov 24 20:00:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:28.444+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:28.663+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:28 compute-0 sudo[152123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mklsqoefexcbjfbbfskukbhohyvzjtka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014428.4660685-460-122053444660975/AnsiballZ_file.py'
Nov 24 20:00:28 compute-0 sudo[152123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:29 compute-0 ceph-mon[75677]: pgmap v504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:29 compute-0 python3.9[152125]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:00:29 compute-0 sudo[152123]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:29.470+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:29.713+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:29 compute-0 sudo[152275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-senziljkajqksvaocjaycbxqjhleohix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014429.3830707-468-244378929758418/AnsiballZ_stat.py'
Nov 24 20:00:29 compute-0 sudo[152275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:29 compute-0 python3.9[152277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:29 compute-0 sudo[152275]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:30 compute-0 sudo[152371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:30 compute-0 sudo[152371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:30 compute-0 sudo[152371]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:30 compute-0 sudo[152423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbpcundtgcmoeawniflnyaadbvdbzdln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014429.3830707-468-244378929758418/AnsiballZ_copy.py'
Nov 24 20:00:30 compute-0 sudo[152423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:30 compute-0 sudo[152424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:00:30 compute-0 sudo[152424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:30 compute-0 sudo[152424]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:30 compute-0 sudo[152451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:30 compute-0 sudo[152451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:30 compute-0 sudo[152451]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:30.515+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:30 compute-0 python3.9[152430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014429.3830707-468-244378929758418/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:00:30 compute-0 sudo[152476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:00:30 compute-0 sudo[152476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:30 compute-0 sudo[152423]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:30.669+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:31 compute-0 sudo[152476]: pam_unix(sudo:session): session closed for user root
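The sudo session that just closed is cephadm's host-inventory poll: the mgr ships a content-hashed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it as ceph-admin with a 895 s timeout. The same facts can be pulled by hand with the copy named in the log; a sketch (gather-facts emits one JSON document of host facts; the json.tool pretty-print is just an assumption about how you want to read it):

    $ sudo python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
          gather-facts | python3 -m json.tool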
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:00:31 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:00:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:00:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:00:31 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 45aac38a-2b7e-48fa-82c8-07fd22546d36 does not exist
Nov 24 20:00:31 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f162b443-6aac-42f6-9b7c-41300140dcd9 does not exist
Nov 24 20:00:31 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ea365f14-4059-48ad-bf18-0413d9fd603a does not exist
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:00:31 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:00:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:00:31 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:31 compute-0 ceph-mon[75677]: pgmap v505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:00:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:00:31 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:00:31 compute-0 sudo[152609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:31 compute-0 sudo[152609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:31 compute-0 sudo[152609]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:31 compute-0 sudo[152657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:00:31 compute-0 sudo[152657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:31 compute-0 sudo[152657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:31 compute-0 sudo[152706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:31 compute-0 sudo[152706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:31 compute-0 sudo[152706]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:31 compute-0 sudo[152755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltzjcxfziyhgdibmzxliqklcnmnjphr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014431.0091517-485-271936581862810/AnsiballZ_file.py'
Nov 24 20:00:31 compute-0 sudo[152755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:31 compute-0 sudo[152760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:00:31 compute-0 sudo[152760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:31.478+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:31 compute-0 python3.9[152759]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:00:31 compute-0 sudo[152755]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:31.705+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.810589997 +0000 UTC m=+0.059963502 container create a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:00:31 compute-0 systemd[1]: Started libpod-conmon-a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419.scope.
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.779489097 +0000 UTC m=+0.028862682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:00:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.903163039 +0000 UTC m=+0.152536584 container init a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.91556238 +0000 UTC m=+0.164935885 container start a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.921523119 +0000 UTC m=+0.170896604 container attach a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:00:31 compute-0 optimistic_easley[152906]: 167 167
Nov 24 20:00:31 compute-0 systemd[1]: libpod-a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419.scope: Deactivated successfully.
Nov 24 20:00:31 compute-0 conmon[152906]: conmon a29bde19bd7a68c84546 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419.scope/container/memory.events
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.927760095 +0000 UTC m=+0.177133640 container died a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-903d3302ba2f1d9539fc92a4747e4852ce5664235289523fa80a64bc2322afc2-merged.mount: Deactivated successfully.
Nov 24 20:00:31 compute-0 podman[152849]: 2025-11-24 20:00:31.994769905 +0000 UTC m=+0.244143430 container remove a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:00:32 compute-0 systemd[1]: libpod-conmon-a29bde19bd7a68c8454646a98f7ed457f10df6d013d5d50c95d22c398152c419.scope: Deactivated successfully.
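The create, init, start, attach, died, remove sequence for optimistic_easley (repeated below for youthful_liskov, trusting_thompson and sleepy_franklin) is how cephadm executes each helper step: a throwaway container from the pinned quay.io/ceph/ceph image runs one command and exits, so the "died" event and conmon's memory.events warning at teardown are expected noise rather than crashes. The "167 167" it printed looks like a UID/GID probe (167 is the ceph user and group in the image; an inference, the probed command itself is not logged). Nothing should be left running between steps; a sketch (the filter uses the ceph=True image label visible in the podman lines):

    $ sudo podman ps -a --filter label=ceph=True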
Nov 24 20:00:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:32 compute-0 sudo[153014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfbljasovllewsoooplbegpmgjqkacki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014431.8033507-493-219554734948497/AnsiballZ_stat.py'
Nov 24 20:00:32 compute-0 sudo[153014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:32 compute-0 podman[153015]: 2025-11-24 20:00:32.273918208 +0000 UTC m=+0.077587883 container create 20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:00:32 compute-0 systemd[1]: Started libpod-conmon-20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e.scope.
Nov 24 20:00:32 compute-0 podman[153015]: 2025-11-24 20:00:32.247728769 +0000 UTC m=+0.051398524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:00:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6c0b88135e4352ea658e55a5a1d256d58e29cdf2583e9fff9673e0e2bf4523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6c0b88135e4352ea658e55a5a1d256d58e29cdf2583e9fff9673e0e2bf4523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6c0b88135e4352ea658e55a5a1d256d58e29cdf2583e9fff9673e0e2bf4523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6c0b88135e4352ea658e55a5a1d256d58e29cdf2583e9fff9673e0e2bf4523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6c0b88135e4352ea658e55a5a1d256d58e29cdf2583e9fff9673e0e2bf4523/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:32 compute-0 podman[153015]: 2025-11-24 20:00:32.388910528 +0000 UTC m=+0.192580283 container init 20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_liskov, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:00:32 compute-0 python3.9[153018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:00:32 compute-0 podman[153015]: 2025-11-24 20:00:32.409850317 +0000 UTC m=+0.213519992 container start 20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_liskov, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:00:32 compute-0 podman[153015]: 2025-11-24 20:00:32.413742522 +0000 UTC m=+0.217412207 container attach 20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_liskov, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:00:32 compute-0 sudo[153014]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:32.494+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:32.712+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:33 compute-0 sudo[153159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utuzignugnlqgujpiqubiqkkqsdrakqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014431.8033507-493-219554734948497/AnsiballZ_copy.py'
Nov 24 20:00:33 compute-0 sudo[153159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:33 compute-0 ceph-mon[75677]: pgmap v506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:33 compute-0 python3.9[153161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014431.8033507-493-219554734948497/.source.json _original_basename=.x9la_7my follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:33 compute-0 sudo[153159]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:33.503+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:33 compute-0 youthful_liskov[153034]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:00:33 compute-0 youthful_liskov[153034]: --> relative data size: 1.0
Nov 24 20:00:33 compute-0 youthful_liskov[153034]: --> All data devices are unavailable
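The verdict from youthful_liskov is expected rather than an error: the batch at 20:00:31 was handed the same three LVs that already back this host's OSDs, so ceph-volume reports every data device unavailable and creates nothing. cephadm then re-reads the LVM state, which is the lvm list call at 20:00:34 below; its JSON (beginning at the end of this excerpt, with osd_id=0 tagged on /dev/ceph_vg0/ceph_lv0) confirms the LVs already belong to existing OSDs. The same check can be run directly; a sketch (fsid copied from the log):

    $ sudo cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json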
Nov 24 20:00:33 compute-0 systemd[1]: libpod-20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e.scope: Deactivated successfully.
Nov 24 20:00:33 compute-0 systemd[1]: libpod-20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e.scope: Consumed 1.192s CPU time.
Nov 24 20:00:33 compute-0 podman[153015]: 2025-11-24 20:00:33.666627753 +0000 UTC m=+1.470297458 container died 20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_liskov, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:00:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:33.703+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec6c0b88135e4352ea658e55a5a1d256d58e29cdf2583e9fff9673e0e2bf4523-merged.mount: Deactivated successfully.
Nov 24 20:00:33 compute-0 podman[153015]: 2025-11-24 20:00:33.757817008 +0000 UTC m=+1.561486703 container remove 20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 20:00:33 compute-0 systemd[1]: libpod-conmon-20a719b7d120c9b902a63f1b4823ece5b835bf794c819973eab72cccb574389e.scope: Deactivated successfully.
Nov 24 20:00:33 compute-0 sudo[152760]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:33 compute-0 sudo[153320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:33 compute-0 sudo[153320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:33 compute-0 sudo[153320]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:33 compute-0 sudo[153372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadxblqnvxhgjnudunjzlaunhkzpklzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014433.4658368-508-57096239231681/AnsiballZ_file.py'
Nov 24 20:00:33 compute-0 sudo[153372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:33 compute-0 sudo[153373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:00:34 compute-0 sudo[153373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:34 compute-0 sudo[153373]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:34 compute-0 sudo[153400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:34 compute-0 sudo[153400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:34 compute-0 sudo[153400]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:34 compute-0 python3.9[153379]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:34 compute-0 sudo[153372]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:34 compute-0 sudo[153425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:00:34 compute-0 sudo[153425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
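The pg_autoscaler block above follows one relationship that can be verified from the logged numbers: pg target = (fraction of space the pool uses) x bias x (mon_target_pg_per_osd x number of OSDs). Every effective_target_ratio line shows the same 64411926528-byte capacity, i.e. three ~20 GiB OSDs (the lv_size in the listing below is 21470642176 bytes, and 3 x 21470642176 = 64411926528), so with the default mon_target_pg_per_osd of 100 the multiplier is 300. A sketch of the arithmetic for cephfs.cephfs.meta (the constant 100 and the OSD count are inferences from the cluster size, not logged directly):

    $ awk 'BEGIN { printf "%.10g\n", 5.087256625643029e-07 * 4.0 * 100 * 3 }'
    0.0006104707951

which matches the logged pg target of 0.0006104707950771635. The "quantized to" figure then snaps that target to a power of two, and since no pool's target diverges from its current pg_num by the autoscaler's default 3x threshold, every pool stays at its current value.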
Nov 24 20:00:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:34.476+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.585818355 +0000 UTC m=+0.069969859 container create fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 20:00:34 compute-0 systemd[1]: Started libpod-conmon-fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582.scope.
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.559957855 +0000 UTC m=+0.044109429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:00:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:34.681+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.690900571 +0000 UTC m=+0.175052075 container init fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_thompson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.699054069 +0000 UTC m=+0.183205563 container start fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.703637891 +0000 UTC m=+0.187789395 container attach fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_thompson, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:00:34 compute-0 trusting_thompson[153628]: 167 167
Nov 24 20:00:34 compute-0 systemd[1]: libpod-fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582.scope: Deactivated successfully.
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.707809513 +0000 UTC m=+0.191961047 container died fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-28f90d6fc0a925642c5f22163537669cf2bece5a26e5d7dafe24b1404f9ddaa5-merged.mount: Deactivated successfully.
Nov 24 20:00:34 compute-0 podman[153566]: 2025-11-24 20:00:34.765046611 +0000 UTC m=+0.249198115 container remove fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:00:34 compute-0 sudo[153671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunaoebacbabxwvfdkrwgjzmzvrichxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014434.3916469-516-201596017662528/AnsiballZ_stat.py'
Nov 24 20:00:34 compute-0 sudo[153671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:34 compute-0 systemd[1]: libpod-conmon-fa01582a06ef50bf7f10d10779f7b72f84b0ee9d5ef921b13912cd0bdab56582.scope: Deactivated successfully.
Nov 24 20:00:34 compute-0 sudo[153671]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:34 compute-0 podman[153682]: 2025-11-24 20:00:34.971145324 +0000 UTC m=+0.063577539 container create 98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:00:35 compute-0 podman[153682]: 2025-11-24 20:00:34.937397643 +0000 UTC m=+0.029829918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:00:35 compute-0 systemd[1]: Started libpod-conmon-98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d.scope.
Nov 24 20:00:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4d345580a275c6f5d04ee92b06308f4f99c71d5fad5364b04f0e6656b929ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4d345580a275c6f5d04ee92b06308f4f99c71d5fad5364b04f0e6656b929ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4d345580a275c6f5d04ee92b06308f4f99c71d5fad5364b04f0e6656b929ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4d345580a275c6f5d04ee92b06308f4f99c71d5fad5364b04f0e6656b929ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:35 compute-0 podman[153682]: 2025-11-24 20:00:35.098171565 +0000 UTC m=+0.190603750 container init 98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:00:35 compute-0 podman[153682]: 2025-11-24 20:00:35.108710177 +0000 UTC m=+0.201142362 container start 98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:00:35 compute-0 podman[153682]: 2025-11-24 20:00:35.113437972 +0000 UTC m=+0.205870147 container attach 98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:00:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:35 compute-0 ceph-mon[75677]: pgmap v507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:35 compute-0 sudo[153824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roatfuqvjwnmhpbrgohkwfoneaoqcukx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014434.3916469-516-201596017662528/AnsiballZ_copy.py'
Nov 24 20:00:35 compute-0 sudo[153824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:35.501+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:35.658+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:35 compute-0 sudo[153824]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]: {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:     "0": [
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:         {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "devices": [
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "/dev/loop3"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             ],
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_name": "ceph_lv0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_size": "21470642176",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "name": "ceph_lv0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "tags": {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cluster_name": "ceph",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.crush_device_class": "",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.encrypted": "0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osd_id": "0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.type": "block",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.vdo": "0"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             },
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "type": "block",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "vg_name": "ceph_vg0"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:         }
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:     ],
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:     "1": [
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:         {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "devices": [
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "/dev/loop4"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             ],
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_name": "ceph_lv1",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_size": "21470642176",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "name": "ceph_lv1",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "tags": {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cluster_name": "ceph",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.crush_device_class": "",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.encrypted": "0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osd_id": "1",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.type": "block",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.vdo": "0"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             },
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "type": "block",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "vg_name": "ceph_vg1"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:         }
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:     ],
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:     "2": [
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:         {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "devices": [
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "/dev/loop5"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             ],
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_name": "ceph_lv2",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_size": "21470642176",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "name": "ceph_lv2",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "tags": {
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.cluster_name": "ceph",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.crush_device_class": "",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.encrypted": "0",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osd_id": "2",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.type": "block",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:                 "ceph.vdo": "0"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             },
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "type": "block",
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:             "vg_name": "ceph_vg2"
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:         }
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]:     ]
Nov 24 20:00:35 compute-0 sleepy_franklin[153717]: }
Nov 24 20:00:35 compute-0 systemd[1]: libpod-98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d.scope: Deactivated successfully.
Nov 24 20:00:35 compute-0 podman[153682]: 2025-11-24 20:00:35.930469827 +0000 UTC m=+1.022902032 container died 98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 24 20:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e4d345580a275c6f5d04ee92b06308f4f99c71d5fad5364b04f0e6656b929ad-merged.mount: Deactivated successfully.
Nov 24 20:00:35 compute-0 podman[153682]: 2025-11-24 20:00:35.985075346 +0000 UTC m=+1.077507531 container remove 98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_franklin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:00:35 compute-0 systemd[1]: libpod-conmon-98d79f99dd1138cf138c018e34787178bc305aa3914be791b14be942f6338a4d.scope: Deactivated successfully.
Nov 24 20:00:36 compute-0 sudo[153425]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:36 compute-0 sudo[153910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:36 compute-0 sudo[153910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:36 compute-0 sudo[153910]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:36 compute-0 sudo[153944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:00:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:36 compute-0 sudo[153944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:36 compute-0 sudo[153944]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:36 compute-0 sudo[153969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:36 compute-0 sudo[153969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:36 compute-0 sudo[153969]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:36 compute-0 sudo[154015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:00:36 compute-0 sudo[154015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:36.478+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:36 compute-0 sudo[154092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mokoouoocjjorwhcntzcssfbgivwrbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014435.9977846-533-210715633460006/AnsiballZ_container_config_data.py'
Nov 24 20:00:36 compute-0 sudo[154092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 551 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:36.612+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:36 compute-0 python3.9[154094]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 20:00:36 compute-0 sudo[154092]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.820495721 +0000 UTC m=+0.044865509 container create 39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:00:36 compute-0 systemd[1]: Started libpod-conmon-39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e.scope.
Nov 24 20:00:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.802246343 +0000 UTC m=+0.026616161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.911343646 +0000 UTC m=+0.135713474 container init 39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.919293879 +0000 UTC m=+0.143663677 container start 39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hypatia, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.922476734 +0000 UTC m=+0.146846542 container attach 39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:00:36 compute-0 relaxed_hypatia[154173]: 167 167
Nov 24 20:00:36 compute-0 systemd[1]: libpod-39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e.scope: Deactivated successfully.
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.928748942 +0000 UTC m=+0.153118750 container died 39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:00:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4999d7115784efeb4a3d3576e039aa2afbf4c2427a98057e750e2303257156d5-merged.mount: Deactivated successfully.
Nov 24 20:00:36 compute-0 podman[154138]: 2025-11-24 20:00:36.964918087 +0000 UTC m=+0.189287885 container remove 39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hypatia, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:00:36 compute-0 systemd[1]: libpod-conmon-39b8434b8cd12a684b3396f74d90d8f08444ea03e3386210fa9fda09112c2a8e.scope: Deactivated successfully.
Nov 24 20:00:37 compute-0 podman[154240]: 2025-11-24 20:00:37.165323088 +0000 UTC m=+0.065211612 container create 132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:00:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 551 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:37 compute-0 ceph-mon[75677]: pgmap v508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:37 compute-0 systemd[1]: Started libpod-conmon-132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7.scope.
Nov 24 20:00:37 compute-0 podman[154240]: 2025-11-24 20:00:37.131639178 +0000 UTC m=+0.031527772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:00:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f4c41a2c273b5b0f012b09c74e39fff03b8f9e8ec737adb02aa5a6c2170e48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f4c41a2c273b5b0f012b09c74e39fff03b8f9e8ec737adb02aa5a6c2170e48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f4c41a2c273b5b0f012b09c74e39fff03b8f9e8ec737adb02aa5a6c2170e48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f4c41a2c273b5b0f012b09c74e39fff03b8f9e8ec737adb02aa5a6c2170e48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:37 compute-0 podman[154240]: 2025-11-24 20:00:37.250293146 +0000 UTC m=+0.150181680 container init 132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:00:37 compute-0 podman[154240]: 2025-11-24 20:00:37.2612954 +0000 UTC m=+0.161183944 container start 132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 20:00:37 compute-0 podman[154240]: 2025-11-24 20:00:37.265304258 +0000 UTC m=+0.165192812 container attach 132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:00:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:37.460+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:37 compute-0 sudo[154342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krtnviegejgwpyockznswosnmyhykxyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014437.0421152-542-270165164608394/AnsiballZ_container_config_hash.py'
Nov 24 20:00:37 compute-0 sudo[154342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:37.653+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:37 compute-0 python3.9[154344]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 20:00:37 compute-0 sudo[154342]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]: {
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "osd_id": 2,
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "type": "bluestore"
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:     },
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "osd_id": 1,
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "type": "bluestore"
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:     },
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "osd_id": 0,
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:         "type": "bluestore"
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]:     }
Nov 24 20:00:38 compute-0 wonderful_chandrasekhar[154264]: }
Nov 24 20:00:38 compute-0 systemd[1]: libpod-132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7.scope: Deactivated successfully.
Nov 24 20:00:38 compute-0 podman[154240]: 2025-11-24 20:00:38.338491621 +0000 UTC m=+1.238380195 container died 132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:00:38 compute-0 systemd[1]: libpod-132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7.scope: Consumed 1.084s CPU time.
Nov 24 20:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f4c41a2c273b5b0f012b09c74e39fff03b8f9e8ec737adb02aa5a6c2170e48-merged.mount: Deactivated successfully.
Nov 24 20:00:38 compute-0 podman[154240]: 2025-11-24 20:00:38.432323186 +0000 UTC m=+1.332211700 container remove 132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:00:38 compute-0 systemd[1]: libpod-conmon-132836f3633de8f237f0ed379ad09daaec6f9fee2537efc160f9e040a00f8fd7.scope: Deactivated successfully.
Nov 24 20:00:38 compute-0 sudo[154015]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:00:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:38.479+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:00:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:00:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:00:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2b26ea4d-3c07-4177-9c09-3d641005da56 does not exist
Nov 24 20:00:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b89b1ee8-0724-41fe-b97a-bd132422df9e does not exist
Nov 24 20:00:38 compute-0 sudo[154537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duszfpxbwslyshtricbuywvhrbaxgsss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014438.0082598-551-103112084321794/AnsiballZ_podman_container_info.py'
Nov 24 20:00:38 compute-0 sudo[154537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:38 compute-0 sudo[154538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:00:38 compute-0 sudo[154538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:38 compute-0 sudo[154538]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:38 compute-0 sudo[154565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:00:38 compute-0 sudo[154565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:00:38 compute-0 sudo[154565]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:38.688+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:38 compute-0 python3.9[154545]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 20:00:38 compute-0 sudo[154537]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:39.444+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:00:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:00:39 compute-0 ceph-mon[75677]: pgmap v509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:39.653+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:00:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:40.468+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:40 compute-0 sudo[154767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlmcuacavgofjystzbevakdyvxetgpwy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014439.972505-564-139710574624922/AnsiballZ_edpm_container_manage.py'
Nov 24 20:00:40 compute-0 sudo[154767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:40.608+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:40 compute-0 python3[154769]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 20:00:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:41 compute-0 ceph-mon[75677]: pgmap v510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:41.518+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 561 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:41.582+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 561 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:42.544+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:42.551+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:43.523+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:43 compute-0 ceph-mon[75677]: pgmap v511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:43.556+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:44.519+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:44.536+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:44 compute-0 sshd-session[154640]: Invalid user 1234 from 27.79.44.141 port 47960
Nov 24 20:00:45 compute-0 sshd-session[154640]: Connection closed by invalid user 1234 27.79.44.141 port 47960 [preauth]
Nov 24 20:00:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:45.545+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:45.563+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:45 compute-0 ceph-mon[75677]: pgmap v512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:46 compute-0 podman[154782]: 2025-11-24 20:00:46.396337144 +0000 UTC m=+5.431028268 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 24 20:00:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:46.563+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:46.602+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:46 compute-0 podman[154901]: 2025-11-24 20:00:46.631618326 +0000 UTC m=+0.075304261 container create 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:00:46 compute-0 podman[154901]: 2025-11-24 20:00:46.594964678 +0000 UTC m=+0.038650623 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 24 20:00:46 compute-0 python3[154769]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 24 20:00:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:46 compute-0 sudo[154767]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:47 compute-0 sudo[155088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhneckwosoazxtfgtvjflwyziesiwin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014447.0880797-572-179781931960312/AnsiballZ_stat.py'
Nov 24 20:00:47 compute-0 sudo[155088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:47.569+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:47.598+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 566 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:47 compute-0 ceph-mon[75677]: pgmap v513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:47 compute-0 python3.9[155090]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:00:47 compute-0 sudo[155088]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:48 compute-0 sudo[155242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixocdcvigzzwungjvjwmcwhntvnwyvbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014448.0327394-581-224185267985049/AnsiballZ_file.py'
Nov 24 20:00:48 compute-0 sudo[155242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:48.571+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 566 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:48 compute-0 python3.9[155244]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:48.639+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:48 compute-0 sudo[155242]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:48 compute-0 sudo[155318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvmcnzkfvajjibsgprcqxpfxshcsrgdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014448.0327394-581-224185267985049/AnsiballZ_stat.py'
Nov 24 20:00:48 compute-0 sudo[155318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:49 compute-0 python3.9[155320]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:00:49 compute-0 sudo[155318]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:49.525+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:49.618+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:49 compute-0 ceph-mon[75677]: pgmap v514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:49 compute-0 sudo[155469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beljhiltuhkdtyiravhialxmgebgkmdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014449.237505-581-272603506141157/AnsiballZ_copy.py'
Nov 24 20:00:49 compute-0 sudo[155469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:50 compute-0 python3.9[155471]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764014449.237505-581-272603506141157/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:00:50 compute-0 sudo[155469]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:50 compute-0 sudo[155545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yggjygiztxwocvrbtwxjdewqepplddqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014449.237505-581-272603506141157/AnsiballZ_systemd.py'
Nov 24 20:00:50 compute-0 sudo[155545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:50.535+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:50.592+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:50 compute-0 python3.9[155547]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:00:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:50 compute-0 systemd[1]: Reloading.
Nov 24 20:00:50 compute-0 systemd-rc-local-generator[155575]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:00:50 compute-0 systemd-sysv-generator[155580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:00:51 compute-0 sudo[155545]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:51 compute-0 sudo[155657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btgxswdsdgqezevoxuupumndcssxihwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014449.237505-581-272603506141157/AnsiballZ_systemd.py'
Nov 24 20:00:51 compute-0 sudo[155657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:51.542+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:51.545+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:51 compute-0 ceph-mon[75677]: pgmap v515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:51 compute-0 python3.9[155659]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:00:51 compute-0 systemd[1]: Reloading.
Nov 24 20:00:51 compute-0 systemd-rc-local-generator[155689]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:00:51 compute-0 systemd-sysv-generator[155693]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:00:52 compute-0 systemd[1]: Starting ovn_controller container...
Nov 24 20:00:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306f8525b33bfaf7e8503fcf8f7ed679875f6eb6b585a14a538ffc27b9e5b332/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 20:00:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2.
Nov 24 20:00:52 compute-0 podman[155700]: 2025-11-24 20:00:52.427222998 +0000 UTC m=+0.193664821 container init 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:00:52 compute-0 ovn_controller[155716]: + sudo -E kolla_set_configs
Nov 24 20:00:52 compute-0 podman[155700]: 2025-11-24 20:00:52.46398735 +0000 UTC m=+0.230429173 container start 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 20:00:52 compute-0 edpm-start-podman-container[155700]: ovn_controller
Nov 24 20:00:52 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 24 20:00:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 20:00:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 20:00:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:52.542+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:52 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 24 20:00:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:52.558+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:52 compute-0 systemd[155754]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 24 20:00:52 compute-0 edpm-start-podman-container[155699]: Creating additional drop-in dependency for "ovn_controller" (8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2)
Nov 24 20:00:52 compute-0 podman[155723]: 2025-11-24 20:00:52.593885838 +0000 UTC m=+0.108757874 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:00:52 compute-0 systemd[1]: Reloading.
Nov 24 20:00:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:52 compute-0 systemd-sysv-generator[155810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:00:52 compute-0 systemd-rc-local-generator[155806]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:00:52 compute-0 systemd[155754]: Queued start job for default target Main User Target.
Nov 24 20:00:52 compute-0 systemd[155754]: Created slice User Application Slice.
Nov 24 20:00:52 compute-0 systemd[155754]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 20:00:52 compute-0 systemd[155754]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 20:00:52 compute-0 systemd[155754]: Reached target Paths.
Nov 24 20:00:52 compute-0 systemd[155754]: Reached target Timers.
Nov 24 20:00:52 compute-0 systemd[155754]: Starting D-Bus User Message Bus Socket...
Nov 24 20:00:52 compute-0 systemd[155754]: Starting Create User's Volatile Files and Directories...
Nov 24 20:00:52 compute-0 systemd[155754]: Finished Create User's Volatile Files and Directories.
Nov 24 20:00:52 compute-0 systemd[155754]: Listening on D-Bus User Message Bus Socket.
Nov 24 20:00:52 compute-0 systemd[155754]: Reached target Sockets.
Nov 24 20:00:52 compute-0 systemd[155754]: Reached target Basic System.
Nov 24 20:00:52 compute-0 systemd[155754]: Reached target Main User Target.
Nov 24 20:00:52 compute-0 systemd[155754]: Startup finished in 215ms.
Nov 24 20:00:52 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 24 20:00:52 compute-0 systemd[1]: Started ovn_controller container.
Nov 24 20:00:52 compute-0 systemd[1]: 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2-5a9b345f0210f47e.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 20:00:52 compute-0 systemd[1]: 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2-5a9b345f0210f47e.service: Failed with result 'exit-code'.
Nov 24 20:00:52 compute-0 systemd[1]: Started Session c1 of User root.
Nov 24 20:00:52 compute-0 sudo[155657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:53 compute-0 ovn_controller[155716]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 20:00:53 compute-0 ovn_controller[155716]: INFO:__main__:Validating config file
Nov 24 20:00:53 compute-0 ovn_controller[155716]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 20:00:53 compute-0 ovn_controller[155716]: INFO:__main__:Writing out command to execute
Nov 24 20:00:53 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: ++ cat /run_command
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + ARGS=
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + sudo kolla_copy_cacerts
Nov 24 20:00:53 compute-0 systemd[1]: Started Session c2 of User root.
Nov 24 20:00:53 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + [[ ! -n '' ]]
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + . kolla_extend_start
Nov 24 20:00:53 compute-0 ovn_controller[155716]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + umask 0022
Nov 24 20:00:53 compute-0 ovn_controller[155716]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.1478] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.1491] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.1512] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.1523] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.1531] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 20:00:53 compute-0 kernel: br-int: entered promiscuous mode
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 20:00:53 compute-0 ovn_controller[155716]: 2025-11-24T20:00:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.1816] manager: (ovn-995a86-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 20:00:53 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.2051] device (genev_sys_6081): carrier: link connected
Nov 24 20:00:53 compute-0 NetworkManager[49557]: <info>  [1764014453.2059] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 20:00:53 compute-0 systemd-udevd[155880]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 20:00:53 compute-0 systemd-udevd[155886]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 20:00:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:53.494+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:53 compute-0 sudo[155985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iohdbajnbosgvgfcvrvcxzgnzxdqsqfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014453.1855495-609-137747674959545/AnsiballZ_command.py'
Nov 24 20:00:53 compute-0 sudo[155985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:53.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:53 compute-0 ceph-mon[75677]: pgmap v516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:53 compute-0 python3.9[155987]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:53 compute-0 ovs-vsctl[155988]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 24 20:00:53 compute-0 sudo[155985]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:54 compute-0 sudo[156138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqmgpusylznytuwtlspgapulozepojg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014453.9891124-617-212992867626163/AnsiballZ_command.py'
Nov 24 20:00:54 compute-0 sudo[156138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:00:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:54.477+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:54.578+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:54 compute-0 python3.9[156140]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:54 compute-0 ovs-vsctl[156142]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 20:00:54 compute-0 sudo[156138]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:55 compute-0 sudo[156293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbhkeizdqngejqwuyvevvwpenkiykjja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014455.0395796-631-167864115752862/AnsiballZ_command.py'
Nov 24 20:00:55 compute-0 sudo[156293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:00:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:55.442+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:55.624+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:55 compute-0 python3.9[156295]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:00:55 compute-0 ovs-vsctl[156296]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 20:00:55 compute-0 sudo[156293]: pam_unix(sudo:session): session closed for user root
Nov 24 20:00:55 compute-0 ceph-mon[75677]: pgmap v517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:56 compute-0 sshd-session[144226]: Connection closed by 192.168.122.30 port 55736
Nov 24 20:00:56 compute-0 sshd-session[144223]: pam_unix(sshd:session): session closed for user zuul
Nov 24 20:00:56 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 24 20:00:56 compute-0 systemd[1]: session-46.scope: Consumed 1min 10.101s CPU time.
Nov 24 20:00:56 compute-0 systemd-logind[795]: Session 46 logged out. Waiting for processes to exit.
Nov 24 20:00:56 compute-0 systemd-logind[795]: Removed session 46.
Nov 24 20:00:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:56.464+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 571 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:00:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:56.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 571 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:00:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:56 compute-0 ceph-mon[75677]: pgmap v518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:57.447+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:57.534+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:58.474+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:58.539+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:58 compute-0 ceph-mon[75677]: pgmap v519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:00:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:00:59.448+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:00:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:00:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:00:59.521+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:00:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:00:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:00.480+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:00.498+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:00 compute-0 ceph-mon[75677]: pgmap v520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:01 compute-0 CROND[156323]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 20:01:01 compute-0 run-parts[156326]: (/etc/cron.hourly) starting 0anacron
Nov 24 20:01:01 compute-0 anacron[156334]: Anacron started on 2025-11-24
Nov 24 20:01:01 compute-0 anacron[156334]: Will run job `cron.daily' in 17 min.
Nov 24 20:01:01 compute-0 anacron[156334]: Will run job `cron.weekly' in 37 min.
Nov 24 20:01:01 compute-0 anacron[156334]: Will run job `cron.monthly' in 57 min.
Nov 24 20:01:01 compute-0 anacron[156334]: Jobs will be executed sequentially
Nov 24 20:01:01 compute-0 run-parts[156336]: (/etc/cron.hourly) finished 0anacron
Nov 24 20:01:01 compute-0 CROND[156322]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 20:01:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:01.458+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:01.460+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 581 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 581 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:01 compute-0 sshd-session[156337]: Accepted publickey for zuul from 192.168.122.30 port 49532 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 20:01:01 compute-0 systemd-logind[795]: New session 48 of user zuul.
Nov 24 20:01:01 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 24 20:01:01 compute-0 sshd-session[156337]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 20:01:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:02.432+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:02.498+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:02 compute-0 ceph-mon[75677]: pgmap v521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:03 compute-0 python3.9[156490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 20:01:03 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 24 20:01:03 compute-0 systemd[155754]: Activating special unit Exit the Session...
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped target Main User Target.
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped target Basic System.
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped target Paths.
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped target Sockets.
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped target Timers.
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 20:01:03 compute-0 systemd[155754]: Closed D-Bus User Message Bus Socket.
Nov 24 20:01:03 compute-0 systemd[155754]: Stopped Create User's Volatile Files and Directories.
Nov 24 20:01:03 compute-0 systemd[155754]: Removed slice User Application Slice.
Nov 24 20:01:03 compute-0 systemd[155754]: Reached target Shutdown.
Nov 24 20:01:03 compute-0 systemd[155754]: Finished Exit the Session.
Nov 24 20:01:03 compute-0 systemd[155754]: Reached target Exit the Session.
Nov 24 20:01:03 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 20:01:03 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 24 20:01:03 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 20:01:03 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 20:01:03 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 20:01:03 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 20:01:03 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 20:01:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:03.416+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:03.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:04 compute-0 sudo[156645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-injmzpwvazlshiayjeqwrzmqdqmjwsfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014463.6850283-34-147554437047512/AnsiballZ_file.py'
Nov 24 20:01:04 compute-0 sudo[156645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:04.374+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:04 compute-0 python3.9[156647]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:04.485+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:04 compute-0 sudo[156645]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:04 compute-0 ceph-mon[75677]: pgmap v522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:05 compute-0 sudo[156797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxgiscwojbvojuhistwiiodkcdqcmqak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014464.6994672-34-168271390222132/AnsiballZ_file.py'
Nov 24 20:01:05 compute-0 sudo[156797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:05 compute-0 python3.9[156799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:05 compute-0 sudo[156797]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:05.404+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:05.507+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:05 compute-0 sudo[156949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skuaooxznmykxdrmqlnosidehldcqvsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014465.5546553-34-270304479541406/AnsiballZ_file.py'
Nov 24 20:01:05 compute-0 sudo[156949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:06 compute-0 python3.9[156951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:06 compute-0 sudo[156949]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:06.414+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:06.514+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:06 compute-0 sudo[157101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saujpwvnozjpcrmbtjllccjvheqyibhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014466.3924794-34-278292952044135/AnsiballZ_file.py'
Nov 24 20:01:06 compute-0 sudo[157101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 586 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:06 compute-0 ceph-mon[75677]: pgmap v523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:06 compute-0 python3.9[157103]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:07 compute-0 sudo[157101]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:07.464+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:07.558+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:07 compute-0 sudo[157253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsinqhmfnwdvdnmbillmxgtffyqdgdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014467.1964192-34-108118903953658/AnsiballZ_file.py'
Nov 24 20:01:07 compute-0 sudo[157253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:08 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 586 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:08.427+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:08.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:09 compute-0 ceph-mon[75677]: pgmap v524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:09 compute-0 python3.9[157255]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:09 compute-0 sudo[157253]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:09.411+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:09.476+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:10 compute-0 python3.9[157410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 20:01:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:10.428+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:10.474+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:11 compute-0 sudo[157560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmjlwmwxalideoohztktszfjwtblwea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014470.5739396-78-15523963708080/AnsiballZ_seboolean.py'
Nov 24 20:01:11 compute-0 sudo[157560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:11 compute-0 python3.9[157562]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 20:01:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:11 compute-0 ceph-mon[75677]: pgmap v525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:11.413+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:11.461+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:11 compute-0 sudo[157560]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:12.378+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:12.498+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:12 compute-0 python3.9[157713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:13.355+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:13 compute-0 ceph-mon[75677]: pgmap v526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:13.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:13 compute-0 python3.9[157834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014472.1861606-86-160911047014144/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:14.369+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:14.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:14 compute-0 python3.9[157984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:15 compute-0 python3.9[158107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014474.0558186-101-65239138934001/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:15.377+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:15.427+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:15 compute-0 ceph-mon[75677]: pgmap v527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.487259) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014475487368, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2309, "num_deletes": 251, "total_data_size": 2845940, "memory_usage": 2893568, "flush_reason": "Manual Compaction"}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014475509788, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2789243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10569, "largest_seqno": 12877, "table_properties": {"data_size": 2779552, "index_size": 5545, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26687, "raw_average_key_size": 21, "raw_value_size": 2757105, "raw_average_value_size": 2252, "num_data_blocks": 245, "num_entries": 1224, "num_filter_entries": 1224, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014309, "oldest_key_time": 1764014309, "file_creation_time": 1764014475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 22593 microseconds, and 12045 cpu microseconds.
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.509871) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2789243 bytes OK
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.509905) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.511830) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.511853) EVENT_LOG_v1 {"time_micros": 1764014475511845, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.511881) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2835550, prev total WAL file size 2835550, number of live WAL files 2.
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.513365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2723KB)], [26(6303KB)]
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014475513463, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9243715, "oldest_snapshot_seqno": -1}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4879 keys, 7256873 bytes, temperature: kUnknown
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014475571409, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7256873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7223937, "index_size": 19642, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 122582, "raw_average_key_size": 25, "raw_value_size": 7134908, "raw_average_value_size": 1462, "num_data_blocks": 827, "num_entries": 4879, "num_filter_entries": 4879, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.572017) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7256873 bytes
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.573993) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.5 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 6.2 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 5393, records dropped: 514 output_compression: NoCompression
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.574024) EVENT_LOG_v1 {"time_micros": 1764014475574008, "job": 10, "event": "compaction_finished", "compaction_time_micros": 58305, "compaction_time_cpu_micros": 35957, "output_level": 6, "num_output_files": 1, "total_output_size": 7256873, "num_input_records": 5393, "num_output_records": 4879, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014475575928, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014475578784, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.513259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.579058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.579068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.579071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.579074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:01:15 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:01:15.579078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:01:16 compute-0 sudo[158257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjwtpyonnjjxjemdxgmhmcxyazlyeoog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014475.6919725-118-117237279696765/AnsiballZ_setup.py'
Nov 24 20:01:16 compute-0 sudo[158257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:16.370+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:16 compute-0 python3.9[158259]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 20:01:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:16.423+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 591 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:16 compute-0 sudo[158257]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:17 compute-0 sudo[158341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwqjyboufiymnbzeryiypxnxihcnwmzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014475.6919725-118-117237279696765/AnsiballZ_dnf.py'
Nov 24 20:01:17 compute-0 sudo[158341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:17.384+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:17.426+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:17 compute-0 python3.9[158343]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 20:01:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 591 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:17 compute-0 ceph-mon[75677]: pgmap v528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:18.410+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:18.423+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:18 compute-0 sudo[158341]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:19.387+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:19.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:19 compute-0 sudo[158495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtaapfdrkjuudeyjlecmtyqqhkizyke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014478.8414507-130-119233550584864/AnsiballZ_systemd.py'
Nov 24 20:01:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:19 compute-0 sudo[158495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:19 compute-0 ceph-mon[75677]: pgmap v529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:19 compute-0 python3.9[158497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 20:01:19 compute-0 sudo[158495]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:20.368+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:20.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:20 compute-0 python3.9[158651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:21.365+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:21.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:21 compute-0 ceph-mon[75677]: pgmap v530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 601 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:21 compute-0 python3.9[158772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014480.3573267-138-255391636035289/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:22.390+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:22 compute-0 python3.9[158922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:22.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 601 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:23 compute-0 python3.9[159043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014481.8060646-138-114571472926805/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:23 compute-0 ovn_controller[155716]: 2025-11-24T20:01:23Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Nov 24 20:01:23 compute-0 ovn_controller[155716]: 2025-11-24T20:01:23Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 24 20:01:23 compute-0 podman[159044]: 2025-11-24 20:01:23.270918675 +0000 UTC m=+0.157593895 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 20:01:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:23.419+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:23.496+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:23 compute-0 ceph-mon[75677]: pgmap v531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:01:24
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images']
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:01:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:24.435+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:24 compute-0 python3.9[159219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:24.502+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:25 compute-0 python3.9[159340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014483.9050963-182-179120210591304/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:25.400+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:25.540+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:25 compute-0 ceph-mon[75677]: pgmap v532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:25 compute-0 python3.9[159490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:26.373+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:26 compute-0 python3.9[159611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014485.3665514-182-66677356212646/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:26.555+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:27.387+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:27 compute-0 python3.9[159761]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:01:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:27.538+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 606 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:27 compute-0 ceph-mon[75677]: pgmap v533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:28 compute-0 sshd-session[158444]: Connection closed by authenticating user sync 27.79.44.141 port 42020 [preauth]
Nov 24 20:01:28 compute-0 sudo[159913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stgkgsysjpmanpfzhilbizokdvfkfrdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014487.7334464-220-13248720973680/AnsiballZ_file.py'
Nov 24 20:01:28 compute-0 sudo[159913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:28.368+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:28 compute-0 python3.9[159915]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:28 compute-0 sudo[159913]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:28.533+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:28 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 606 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:29 compute-0 sudo[160065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljaxxbijyxykrliykzujftrvrtjeknjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014488.6632686-228-79348659960861/AnsiballZ_stat.py'
Nov 24 20:01:29 compute-0 sudo[160065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:29 compute-0 python3.9[160067]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:29 compute-0 sudo[160065]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:29.344+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:29.515+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:29 compute-0 sudo[160143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrviackmngmbierdbmvqzrydziahvrhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014488.6632686-228-79348659960861/AnsiballZ_file.py'
Nov 24 20:01:29 compute-0 sudo[160143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:29 compute-0 ceph-mon[75677]: pgmap v534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:29 compute-0 sshd-session[158067]: Connection closed by authenticating user root 27.79.44.141 port 42000 [preauth]
Nov 24 20:01:29 compute-0 python3.9[160145]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:29 compute-0 sudo[160143]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:30.320+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:30 compute-0 sudo[160295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mismvundzzhbateohozalwbvkaxcmxak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014490.0246954-228-122217375595730/AnsiballZ_stat.py'
Nov 24 20:01:30 compute-0 sudo[160295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:30.499+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:30 compute-0 python3.9[160297]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:30 compute-0 sudo[160295]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:30 compute-0 sudo[160375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhfmslemwzffrvfdxxlncckilhvdvgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014490.0246954-228-122217375595730/AnsiballZ_file.py'
Nov 24 20:01:30 compute-0 sudo[160375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:31 compute-0 python3.9[160377]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:31 compute-0 sudo[160375]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:31.360+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:31.485+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:31 compute-0 ceph-mon[75677]: pgmap v535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:31 compute-0 sudo[160527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmobpuptlzolkzksvxzaskhckauwfamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014491.4006057-251-105286688172220/AnsiballZ_file.py'
Nov 24 20:01:31 compute-0 sudo[160527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:32 compute-0 python3.9[160529]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:32 compute-0 sudo[160527]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:32.367+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:32.496+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:32 compute-0 sudo[160679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnkqjjddjnrrwyndyecjfiusirenhtzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014492.2720003-259-194790185150486/AnsiballZ_stat.py'
Nov 24 20:01:32 compute-0 sudo[160679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:32 compute-0 python3.9[160681]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:32 compute-0 sudo[160679]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:33 compute-0 sudo[160757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzypatgzqilfphidpfrjyqucevhzyirr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014492.2720003-259-194790185150486/AnsiballZ_file.py'
Nov 24 20:01:33 compute-0 sudo[160757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:33 compute-0 sshd-session[160298]: Invalid user nikita from 27.79.44.141 port 48232
Nov 24 20:01:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:33.405+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:33 compute-0 python3.9[160759]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:33 compute-0 sudo[160757]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:33.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:33 compute-0 sshd-session[160298]: Connection closed by invalid user nikita 27.79.44.141 port 48232 [preauth]
Nov 24 20:01:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:33 compute-0 ceph-mon[75677]: pgmap v536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:34 compute-0 sudo[160909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkjnppkcmwpljigwvjqmgzczdilfipiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014493.6997259-271-266924213621063/AnsiballZ_stat.py'
Nov 24 20:01:34 compute-0 sudo[160909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:01:34 compute-0 python3.9[160911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:34 compute-0 sudo[160909]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:34.402+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:34.435+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:34 compute-0 sudo[160987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqwrxgqijhheavqlqchfbhiqplqetwja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014493.6997259-271-266924213621063/AnsiballZ_file.py'
Nov 24 20:01:34 compute-0 sudo[160987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:34 compute-0 python3.9[160989]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:34 compute-0 sudo[160987]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:35.422+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:35.424+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:35 compute-0 sudo[161139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htiblgaceonygjcbymygrgdusbdonppe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014495.2041314-283-49487978142098/AnsiballZ_systemd.py'
Nov 24 20:01:35 compute-0 sudo[161139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:35 compute-0 ceph-mon[75677]: pgmap v537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:35 compute-0 python3.9[161141]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:01:35 compute-0 systemd[1]: Reloading.
Nov 24 20:01:36 compute-0 systemd-rc-local-generator[161168]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:01:36 compute-0 systemd-sysv-generator[161171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:01:36 compute-0 sudo[161139]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:36.407+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:36.442+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 611 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:36 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 611 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:37 compute-0 sudo[161327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eegvldwtdlvsojhycoztfwbouwhfgpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014496.6150444-291-188356993415841/AnsiballZ_stat.py'
Nov 24 20:01:37 compute-0 sudo[161327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:37 compute-0 python3.9[161329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:37 compute-0 sudo[161327]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:37.394+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:37.446+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:37 compute-0 sudo[161405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxmydcuixsoevnkxlaheirwhjperbace ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014496.6150444-291-188356993415841/AnsiballZ_file.py'
Nov 24 20:01:37 compute-0 sudo[161405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:37 compute-0 ceph-mon[75677]: pgmap v538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:37 compute-0 python3.9[161407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:37 compute-0 sudo[161405]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:38.370+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:38 compute-0 sudo[161557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqwszvgqgxdqhainvmcmsfivnokedjch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014497.9824219-303-97475828142555/AnsiballZ_stat.py'
Nov 24 20:01:38 compute-0 sudo[161557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:38.430+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:38 compute-0 python3.9[161559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:38 compute-0 sudo[161557]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:38 compute-0 ceph-mon[75677]: pgmap v539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:38 compute-0 sudo[161563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:38 compute-0 sudo[161563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:38 compute-0 sudo[161563]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:38 compute-0 sudo[161615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:01:38 compute-0 sudo[161615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:38 compute-0 sudo[161615]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:38 compute-0 sudo[161699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgmrqxvcfqrbrdikcsiogwkzvymfmmaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014497.9824219-303-97475828142555/AnsiballZ_file.py'
Nov 24 20:01:38 compute-0 sudo[161699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:38 compute-0 sudo[161670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:38 compute-0 sudo[161670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:38 compute-0 sudo[161670]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:39 compute-0 sudo[161713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 20:01:39 compute-0 sudo[161713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:39 compute-0 python3.9[161710]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:39 compute-0 sudo[161699]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:39.407+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:39.480+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:39 compute-0 podman[161909]: 2025-11-24 20:01:39.729031905 +0000 UTC m=+0.079297335 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:01:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:39 compute-0 sudo[161978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlmojpyvkxyxmjjjmhidonenwcxriqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014499.3765647-315-121614144132860/AnsiballZ_systemd.py'
Nov 24 20:01:39 compute-0 sudo[161978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:39 compute-0 podman[161909]: 2025-11-24 20:01:39.850913828 +0000 UTC m=+0.201179238 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:01:40 compute-0 python3.9[161980]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:01:40 compute-0 systemd[1]: Reloading.
Nov 24 20:01:40 compute-0 systemd-rc-local-generator[162062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:01:40 compute-0 systemd-sysv-generator[162065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:01:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:40.454+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:40.511+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:40 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 20:01:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 20:01:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 20:01:40 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 20:01:40 compute-0 sudo[161978]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:40 compute-0 ceph-mon[75677]: pgmap v540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:41 compute-0 sudo[161713]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:41 compute-0 sudo[162257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:41 compute-0 sudo[162257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:41 compute-0 sudo[162257]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:41 compute-0 sudo[162306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:01:41 compute-0 sudo[162306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:41 compute-0 sudo[162306]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:41 compute-0 sudo[162356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgiodyjsnloqverjeihqyduqqzmlpnqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014500.869359-325-213020586202250/AnsiballZ_file.py'
Nov 24 20:01:41 compute-0 sudo[162356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:41 compute-0 sudo[162358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:41 compute-0 sudo[162358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:41 compute-0 sudo[162358]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:41 compute-0 sudo[162385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:01:41 compute-0 sudo[162385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:41 compute-0 python3.9[162365]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:41 compute-0 sudo[162356]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:41.439+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:41.525+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 621 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:41 compute-0 sudo[162385]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:41 compute-0 sudo[162591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovkdvgmcuddsjiaerjatevfateqiwlqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014501.596459-333-139092552308055/AnsiballZ_stat.py'
Nov 24 20:01:41 compute-0 sudo[162591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:41 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 15238058-8c50-4255-9b11-5a0a1ee8da30 does not exist
Nov 24 20:01:41 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 88800d82-1eba-4ca8-8408-2199e0aa8258 does not exist
Nov 24 20:01:41 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 11ed8013-a658-470e-aee0-1adbf5302808 does not exist
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:01:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:01:41 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 621 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:01:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:01:42 compute-0 sudo[162594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:42 compute-0 sudo[162594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:42 compute-0 sudo[162594]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:42 compute-0 sudo[162619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:01:42 compute-0 sudo[162619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:42 compute-0 sudo[162619]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:42 compute-0 python3.9[162593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:42 compute-0 sudo[162591]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:42 compute-0 sudo[162644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:42 compute-0 sudo[162644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:42 compute-0 sudo[162644]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:42 compute-0 sudo[162692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:01:42 compute-0 sudo[162692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:42.475+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:42.503+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:42 compute-0 sudo[162840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbbikzvpuojlplafmhfjvtsgwfyzokr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014501.596459-333-139092552308055/AnsiballZ_copy.py'
Nov 24 20:01:42 compute-0 sudo[162840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.706341697 +0000 UTC m=+0.053537532 container create 705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:01:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:42 compute-0 systemd[1]: Started libpod-conmon-705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87.scope.
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.679237537 +0000 UTC m=+0.026433452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:01:42 compute-0 python3.9[162848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014501.596459-333-139092552308055/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:01:42 compute-0 sudo[162840]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.81623178 +0000 UTC m=+0.163427695 container init 705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.824839922 +0000 UTC m=+0.172035787 container start 705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.831901554 +0000 UTC m=+0.179097489 container attach 705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:01:42 compute-0 awesome_noether[162874]: 167 167
Nov 24 20:01:42 compute-0 systemd[1]: libpod-705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87.scope: Deactivated successfully.
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.834658626 +0000 UTC m=+0.181854471 container died 705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ffc21820ff76a3a73afdcc8ffd2d1179929cf5adeb642b8c73a8447fed22a8-merged.mount: Deactivated successfully.
Nov 24 20:01:42 compute-0 podman[162857]: 2025-11-24 20:01:42.903460789 +0000 UTC m=+0.250656634 container remove 705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:01:42 compute-0 systemd[1]: libpod-conmon-705b0f13c950df47c2a7a006cf7a4d2b748a2e68ae07cb0f5b6069e1d9df0a87.scope: Deactivated successfully.
Nov 24 20:01:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:43 compute-0 ceph-mon[75677]: pgmap v541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:43 compute-0 podman[162920]: 2025-11-24 20:01:43.155749074 +0000 UTC m=+0.066136525 container create 1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:01:43 compute-0 systemd[1]: Started libpod-conmon-1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049.scope.
Nov 24 20:01:43 compute-0 podman[162920]: 2025-11-24 20:01:43.131972682 +0000 UTC m=+0.042360123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:01:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628190c73570b774ad7da57be4437e45575e9e9700a49550cd24b7e93a0a35dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628190c73570b774ad7da57be4437e45575e9e9700a49550cd24b7e93a0a35dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628190c73570b774ad7da57be4437e45575e9e9700a49550cd24b7e93a0a35dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628190c73570b774ad7da57be4437e45575e9e9700a49550cd24b7e93a0a35dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628190c73570b774ad7da57be4437e45575e9e9700a49550cd24b7e93a0a35dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:43 compute-0 podman[162920]: 2025-11-24 20:01:43.290780647 +0000 UTC m=+0.201168108 container init 1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:01:43 compute-0 podman[162920]: 2025-11-24 20:01:43.302905109 +0000 UTC m=+0.213292570 container start 1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:01:43 compute-0 podman[162920]: 2025-11-24 20:01:43.309114149 +0000 UTC m=+0.219501620 container attach 1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:01:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:43.464+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:43.470+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:43 compute-0 sudo[163066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhxvqdobnidakukjgwfiwxhwwiohywvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014503.1917253-350-7092683236598/AnsiballZ_file.py'
Nov 24 20:01:43 compute-0 sudo[163066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:43 compute-0 python3.9[163068]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:01:43 compute-0 sudo[163066]: pam_unix(sudo:session): session closed for user root
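[editor's note] The ansible-ansible.builtin.file invocation above (path=/var/lib/kolla/config_files, recurse=True, setype=container_file_t) creates the directory and recursively applies the SELinux container_file_t type so containers can read the config tree. A minimal Python sketch of the equivalent operation, assuming an SELinux-enabled host with chcon(1) available; this is an illustration, not the module's actual implementation:

    import os
    import subprocess

    def ensure_container_dir(root: str, setype: str = "container_file_t") -> None:
        # state=directory: create the tree if it does not exist yet
        os.makedirs(root, exist_ok=True)
        # recurse=True with setype=...: relabel everything underneath
        # (approximated here with chcon -R; Ansible uses SELinux bindings)
        subprocess.run(["chcon", "-R", "-t", setype, root], check=True)

    ensure_container_dir("/var/lib/kolla/config_files")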
Nov 24 20:01:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:44 compute-0 sudo[163235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxpvzbcecjgrapcrnhnhhtflymaoytwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014503.9272733-358-26215782260278/AnsiballZ_stat.py'
Nov 24 20:01:44 compute-0 sudo[163235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:44.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:44 compute-0 wonderful_easley[162960]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:01:44 compute-0 wonderful_easley[162960]: --> relative data size: 1.0
Nov 24 20:01:44 compute-0 wonderful_easley[162960]: --> All data devices are unavailable
Nov 24 20:01:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:44.505+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:44 compute-0 systemd[1]: libpod-1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049.scope: Deactivated successfully.
Nov 24 20:01:44 compute-0 systemd[1]: libpod-1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049.scope: Consumed 1.141s CPU time.
Nov 24 20:01:44 compute-0 podman[162920]: 2025-11-24 20:01:44.521399719 +0000 UTC m=+1.431787150 container died 1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:01:44 compute-0 python3.9[163238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:01:44 compute-0 sudo[163235]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-628190c73570b774ad7da57be4437e45575e9e9700a49550cd24b7e93a0a35dc-merged.mount: Deactivated successfully.
Nov 24 20:01:44 compute-0 podman[162920]: 2025-11-24 20:01:44.596464755 +0000 UTC m=+1.506852176 container remove 1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:01:44 compute-0 systemd[1]: libpod-conmon-1a43803fcd96ea9e3f1d736692d8695d500300bad261656ec50a88a55a88e049.scope: Deactivated successfully.
Nov 24 20:01:44 compute-0 sudo[162692]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:44 compute-0 sudo[163282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:44 compute-0 sudo[163282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:44 compute-0 sudo[163282]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:44 compute-0 sudo[163330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:01:44 compute-0 sudo[163330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:44 compute-0 sudo[163330]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:44 compute-0 sudo[163379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:44 compute-0 sudo[163379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:44 compute-0 sudo[163379]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:44 compute-0 sudo[163428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:01:44 compute-0 sudo[163428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
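[editor's note] The sudo command above is cephadm wrapping `ceph-volume lvm list --format json` for fsid 05e060a3-406b-57f0-89d2-ec35f5b09305, with a pinned image digest and a 895-second timeout; the JSON it produces shows up below as output of the hardcore_cartwright container. A sketch of issuing the same query programmatically, with the constants copied from the log line and the wrapper function itself hypothetical; it assumes passwordless sudo as configured for ceph-admin here:

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def lvm_list(cephadm_path: str) -> dict:
        # Mirrors the logged command: cephadm --image ... --timeout 895
        #   ceph-volume --fsid ... -- lvm list --format json
        out = subprocess.run(
            ["sudo", "python3", cephadm_path,
             "--image", IMAGE, "--timeout", "895",
             "ceph-volume", "--fsid", FSID,
             "--", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)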
Nov 24 20:01:44 compute-0 sudo[163479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvzwvlzwobgocomosfrcuwrlxcnboptm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014503.9272733-358-26215782260278/AnsiballZ_copy.py'
Nov 24 20:01:44 compute-0 sudo[163479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:45 compute-0 ceph-mon[75677]: pgmap v542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:45 compute-0 python3.9[163481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014503.9272733-358-26215782260278/.source.json _original_basename=.ld9izcft follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:45 compute-0 sudo[163479]: pam_unix(sudo:session): session closed for user root
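[editor's note] The ansible.legacy.copy task above logs checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc because the module only replaces /var/lib/kolla/config_files/ovn_metadata_agent.json when the SHA-1 of the destination differs from the staged source. A hedged sketch of that change-detection, with the 0600 mode taken from the log line; the helper is illustrative, not Ansible's code:

    import hashlib
    import os
    import shutil

    def copy_if_changed(src: str, dest: str, mode: int = 0o600) -> bool:
        def sha1(path: str) -> str:
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        # force=True semantics: rewrite only when content actually differs
        if os.path.exists(dest) and sha1(dest) == sha1(src):
            return False
        shutil.copyfile(src, dest)
        os.chmod(dest, mode)   # mode=0600 from the task parameters
        return True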
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.39828337 +0000 UTC m=+0.061912838 container create d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:01:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:45.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:45 compute-0 systemd[1]: Started libpod-conmon-d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503.scope.
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.368001459 +0000 UTC m=+0.031630967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:01:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.501264805 +0000 UTC m=+0.164894293 container init d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.516693263 +0000 UTC m=+0.180322731 container start d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.522117793 +0000 UTC m=+0.185747271 container attach d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meninsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:01:45 compute-0 mystifying_meninsky[163603]: 167 167
Nov 24 20:01:45 compute-0 systemd[1]: libpod-d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503.scope: Deactivated successfully.
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.524536715 +0000 UTC m=+0.188166193 container died d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meninsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:01:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:45.544+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8737a4f907ef19c0e42faa52d0152a1b9030fb75ef641061f12a874efcdd51-merged.mount: Deactivated successfully.
Nov 24 20:01:45 compute-0 podman[163546]: 2025-11-24 20:01:45.594326305 +0000 UTC m=+0.257955753 container remove d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meninsky, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:01:45 compute-0 systemd[1]: libpod-conmon-d97c94eac5f98f6fee0daaf1a119d7ce5a03183d95cbbebf6ff15c0405860503.scope: Deactivated successfully.
Nov 24 20:01:45 compute-0 sudo[163713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkspqrywstibpeglyexeschxvqzycxjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014505.401564-373-18317042228667/AnsiballZ_file.py'
Nov 24 20:01:45 compute-0 sudo[163713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:45 compute-0 podman[163714]: 2025-11-24 20:01:45.798038118 +0000 UTC m=+0.046988343 container create 261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cartwright, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:01:45 compute-0 systemd[1]: Started libpod-conmon-261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697.scope.
Nov 24 20:01:45 compute-0 podman[163714]: 2025-11-24 20:01:45.779786367 +0000 UTC m=+0.028736582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:01:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:01:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf54712f4a734095afc32997dedb4d0ff95cb41d9c0e9cb8eb07644358979f4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf54712f4a734095afc32997dedb4d0ff95cb41d9c0e9cb8eb07644358979f4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf54712f4a734095afc32997dedb4d0ff95cb41d9c0e9cb8eb07644358979f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf54712f4a734095afc32997dedb4d0ff95cb41d9c0e9cb8eb07644358979f4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:45 compute-0 podman[163714]: 2025-11-24 20:01:45.919545941 +0000 UTC m=+0.168496166 container init 261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:01:45 compute-0 podman[163714]: 2025-11-24 20:01:45.928978654 +0000 UTC m=+0.177928889 container start 261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:01:45 compute-0 podman[163714]: 2025-11-24 20:01:45.934854255 +0000 UTC m=+0.183804480 container attach 261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cartwright, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:01:45 compute-0 python3.9[163722]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:01:45 compute-0 sudo[163713]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:46.432+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:46.539+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:46 compute-0 sudo[163889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqumtyyrivspedpnzeyzabwpmtcscud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014506.227766-381-112996096566451/AnsiballZ_stat.py'
Nov 24 20:01:46 compute-0 sudo[163889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]: {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:     "0": [
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:         {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "devices": [
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "/dev/loop3"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             ],
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_name": "ceph_lv0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_size": "21470642176",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "name": "ceph_lv0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "tags": {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cluster_name": "ceph",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.crush_device_class": "",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.encrypted": "0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osd_id": "0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.type": "block",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.vdo": "0"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             },
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "type": "block",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "vg_name": "ceph_vg0"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:         }
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:     ],
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:     "1": [
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:         {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "devices": [
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "/dev/loop4"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             ],
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_name": "ceph_lv1",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_size": "21470642176",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "name": "ceph_lv1",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "tags": {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cluster_name": "ceph",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.crush_device_class": "",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.encrypted": "0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osd_id": "1",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.type": "block",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.vdo": "0"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             },
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "type": "block",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "vg_name": "ceph_vg1"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:         }
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:     ],
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:     "2": [
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:         {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "devices": [
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "/dev/loop5"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             ],
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_name": "ceph_lv2",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_size": "21470642176",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "name": "ceph_lv2",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "tags": {
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.cluster_name": "ceph",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.crush_device_class": "",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.encrypted": "0",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osd_id": "2",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.type": "block",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:                 "ceph.vdo": "0"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             },
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "type": "block",
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:             "vg_name": "ceph_vg2"
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:         }
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]:     ]
Nov 24 20:01:46 compute-0 hardcore_cartwright[163733]: }
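[editor's note] The container output above is the JSON document from `ceph-volume lvm list --format json`: a map of OSD id to a list of LV records. A small sketch of extracting the fields this deployment cares about (backing device, LV path, osd_fsid); the field names match the log, the helper is mine:

    import json

    def summarize_lvm_list(payload: str):
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                yield {
                    "osd_id": osd_id,
                    "devices": lv["devices"],    # e.g. ["/dev/loop3"]
                    "lv_path": lv["lv_path"],    # e.g. "/dev/ceph_vg0/ceph_lv0"
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }

    # Applied to the output above, this yields osd 0 -> /dev/loop3,
    # osd 1 -> /dev/loop4, osd 2 -> /dev/loop5, all tagged with cluster
    # fsid 05e060a3-406b-57f0-89d2-ec35f5b09305.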
Nov 24 20:01:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:46 compute-0 systemd[1]: libpod-261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697.scope: Deactivated successfully.
Nov 24 20:01:46 compute-0 podman[163714]: 2025-11-24 20:01:46.752865018 +0000 UTC m=+1.001815263 container died 261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:01:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf54712f4a734095afc32997dedb4d0ff95cb41d9c0e9cb8eb07644358979f4d-merged.mount: Deactivated successfully.
Nov 24 20:01:46 compute-0 podman[163714]: 2025-11-24 20:01:46.8409747 +0000 UTC m=+1.089924935 container remove 261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:01:46 compute-0 systemd[1]: libpod-conmon-261ad57d47f96d168c7b52dea4e30b6025a6f91fe905a28dd4d28c72b068d697.scope: Deactivated successfully.
Nov 24 20:01:46 compute-0 sudo[163428]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:46 compute-0 sudo[163889]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:46 compute-0 sudo[163907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:46 compute-0 sudo[163907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:46 compute-0 sudo[163907]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:47 compute-0 sudo[163955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:01:47 compute-0 sudo[163955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:47 compute-0 sudo[163955]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 626 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:47 compute-0 ceph-mon[75677]: pgmap v543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
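[editor's note] The recurring SLOW_OPS health update above has a stable shape, which makes it easy to triage mechanically. A hedged regex sketch for pulling the op count, blocked age, and daemon list out of such lines; the pattern is mine, not a Ceph API:

    import re

    SLOW_OPS = re.compile(
        r"Health check update: (?P<ops>\d+) slow ops, "
        r"oldest one blocked for (?P<secs>\d+) sec, "
        r"daemons \[(?P<daemons>[^\]]+)\] have slow ops\. \(SLOW_OPS\)"
    )

    m = SLOW_OPS.search(
        "Health check update: 20 slow ops, oldest one blocked for 626 sec, "
        "daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)"
    )
    assert m is not None
    assert m["ops"] == "20" and m["secs"] == "626"
    assert m["daemons"].split(",") == ["osd.0", "osd.1"]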
Nov 24 20:01:47 compute-0 sudo[164004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:47 compute-0 sudo[164004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:47 compute-0 sudo[164004]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:47 compute-0 sudo[164052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:01:47 compute-0 sudo[164052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:47 compute-0 sudo[164127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptyevvjhuyrkslkezndibxziqhfnnsdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014506.227766-381-112996096566451/AnsiballZ_copy.py'
Nov 24 20:01:47 compute-0 sudo[164127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:47.474+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:47.500+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:47 compute-0 sudo[164127]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.736919664 +0000 UTC m=+0.078291440 container create 9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.703788399 +0000 UTC m=+0.045160235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:01:47 compute-0 systemd[1]: Started libpod-conmon-9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217.scope.
Nov 24 20:01:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.859121485 +0000 UTC m=+0.200493311 container init 9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.871060553 +0000 UTC m=+0.212432329 container start 9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.87519504 +0000 UTC m=+0.216566826 container attach 9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:01:47 compute-0 intelligent_khayyam[164208]: 167 167
Nov 24 20:01:47 compute-0 systemd[1]: libpod-9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217.scope: Deactivated successfully.
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.879501171 +0000 UTC m=+0.220872997 container died 9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-77abba885117b56d81b1fa6035b8b4129fe4cf9653007fc9d19bd491b53fb5ae-merged.mount: Deactivated successfully.
Nov 24 20:01:47 compute-0 podman[164191]: 2025-11-24 20:01:47.946703204 +0000 UTC m=+0.288074990 container remove 9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:01:47 compute-0 systemd[1]: libpod-conmon-9d503ccee912a33349ebbac33411ade2786fd892ba3b0b1bdeddcfe0acf5f217.scope: Deactivated successfully.
Nov 24 20:01:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 626 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:48 compute-0 podman[164284]: 2025-11-24 20:01:48.162575601 +0000 UTC m=+0.073910358 container create d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:01:48 compute-0 systemd[1]: Started libpod-conmon-d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b.scope.
Nov 24 20:01:48 compute-0 podman[164284]: 2025-11-24 20:01:48.130425621 +0000 UTC m=+0.041760418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:01:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021b44bf6e698fbe8a3c06a34da08a5d75f2c4a2ca0b26b69851d98bdbc5014b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021b44bf6e698fbe8a3c06a34da08a5d75f2c4a2ca0b26b69851d98bdbc5014b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021b44bf6e698fbe8a3c06a34da08a5d75f2c4a2ca0b26b69851d98bdbc5014b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021b44bf6e698fbe8a3c06a34da08a5d75f2c4a2ca0b26b69851d98bdbc5014b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:01:48 compute-0 podman[164284]: 2025-11-24 20:01:48.270073152 +0000 UTC m=+0.181407929 container init d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:01:48 compute-0 podman[164284]: 2025-11-24 20:01:48.281096827 +0000 UTC m=+0.192431574 container start d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:01:48 compute-0 podman[164284]: 2025-11-24 20:01:48.289114073 +0000 UTC m=+0.200448830 container attach d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curie, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:01:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:48.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:48.460+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:48 compute-0 sudo[164379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-revlwtrgfqbtqayllcqlxoriaouezuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014507.9386442-398-106993577588726/AnsiballZ_container_config_data.py'
Nov 24 20:01:48 compute-0 sudo[164379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:48 compute-0 python3.9[164381]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 20:01:48 compute-0 sudo[164379]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:49 compute-0 ceph-mon[75677]: pgmap v544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:49 compute-0 brave_curie[164302]: {
Nov 24 20:01:49 compute-0 brave_curie[164302]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "osd_id": 2,
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "type": "bluestore"
Nov 24 20:01:49 compute-0 brave_curie[164302]:     },
Nov 24 20:01:49 compute-0 brave_curie[164302]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "osd_id": 1,
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "type": "bluestore"
Nov 24 20:01:49 compute-0 brave_curie[164302]:     },
Nov 24 20:01:49 compute-0 brave_curie[164302]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "osd_id": 0,
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:01:49 compute-0 brave_curie[164302]:         "type": "bluestore"
Nov 24 20:01:49 compute-0 brave_curie[164302]:     }
Nov 24 20:01:49 compute-0 brave_curie[164302]: }
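The JSON printed by the brave_curie container above has the shape of a ceph-volume raw list inventory: one entry per OSD, keyed by osd_uuid, mapping osd.0 through osd.2 onto the ceph_vg0/1/2 logical volumes of fsid 05e060a3-406b-57f0-89d2-ec35f5b09305. A minimal sketch of turning that payload into an osd_id-to-device map, assuming the container's stdout has been captured to a file (osd_list.json is a hypothetical name):

    import json

    # Parse the per-OSD inventory shown in the log above (keyed by osd_uuid).
    # "osd_list.json" is a hypothetical capture of the container's stdout.
    with open("osd_list.json") as f:
        osds = json.load(f)

    # Map each OSD id to its backing device,
    # e.g. 0 -> /dev/mapper/ceph_vg0-ceph_lv0.
    by_id = {entry["osd_id"]: entry["device"] for entry in osds.values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id}: {by_id[osd_id]}")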
Nov 24 20:01:49 compute-0 systemd[1]: libpod-d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b.scope: Deactivated successfully.
Nov 24 20:01:49 compute-0 systemd[1]: libpod-d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b.scope: Consumed 1.111s CPU time.
Nov 24 20:01:49 compute-0 podman[164284]: 2025-11-24 20:01:49.41170093 +0000 UTC m=+1.323035717 container died d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:01:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:49.443+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-021b44bf6e698fbe8a3c06a34da08a5d75f2c4a2ca0b26b69851d98bdbc5014b-merged.mount: Deactivated successfully.
Nov 24 20:01:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:49.478+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:49 compute-0 podman[164284]: 2025-11-24 20:01:49.503373284 +0000 UTC m=+1.414708041 container remove d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:01:49 compute-0 systemd[1]: libpod-conmon-d47936759b5f8ef7d6e0a27c8c106cd8cce087d55b8f44d0fcf0fa017d79fc4b.scope: Deactivated successfully.
Nov 24 20:01:49 compute-0 sudo[164052]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:01:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:01:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:49 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 836b747e-571d-41fe-b3a2-1e2407c838b1 does not exist
Nov 24 20:01:49 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2399691c-41c2-4bfa-8e60-af9309c536ab does not exist
Nov 24 20:01:49 compute-0 sudo[164574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbwfykbwlamreszkmzrinvpcdqzmfdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014508.9628122-407-253747053376204/AnsiballZ_container_config_hash.py'
Nov 24 20:01:49 compute-0 sudo[164574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:49 compute-0 sudo[164573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:01:49 compute-0 sudo[164573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:49 compute-0 sudo[164573]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:49 compute-0 sudo[164601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:01:49 compute-0 sudo[164601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:01:49 compute-0 sudo[164601]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:49 compute-0 python3.9[164581]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 20:01:49 compute-0 sudo[164574]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:50.445+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:50.460+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:50 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:01:50 compute-0 sudo[164775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyfdqnyyoqtpksmhkrijyrhabemjkzjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014510.0815992-416-168809876930224/AnsiballZ_podman_container_info.py'
Nov 24 20:01:50 compute-0 sudo[164775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:50 compute-0 python3.9[164777]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 20:01:51 compute-0 sudo[164775]: pam_unix(sudo:session): session closed for user root
Nov 24 20:01:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:51.445+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:51.451+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:51 compute-0 ceph-mon[75677]: pgmap v545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
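The mon's _set_new_cache_sizes entry above is easier to read in MiB; a quick conversion of the reported allocator figures (values copied from the log line):

    # Allocator figures from the _set_new_cache_sizes entry above, in bytes.
    figures = {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 322961408,
    }
    for name, val in figures.items():
        print(f"{name}: {val / 2**20:.0f} MiB")

This prints roughly 973 MiB of total cache with 332 MiB incremental/full allocations and a 308 MiB key-value share.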
Nov 24 20:01:52 compute-0 sudo[164953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtlifyocaceatbvebalrzjtfwpqdbxid ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014511.7337625-429-30774724172650/AnsiballZ_edpm_container_manage.py'
Nov 24 20:01:52 compute-0 sudo[164953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:01:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:52.421+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:52.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:52 compute-0 python3[164955]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 20:01:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:53.458+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:53.486+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:53 compute-0 ceph-mon[75677]: pgmap v546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:53 compute-0 podman[164997]: 2025-11-24 20:01:53.877179765 +0000 UTC m=+0.104538958 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
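The health_status=healthy event above is podman's periodic healthcheck for ovn_controller, driven by the 'test': '/openstack/healthcheck' entry in its config_data. The same check can be triggered on demand; a small sketch, assuming podman is on PATH on this host:

    import subprocess

    # "podman healthcheck run NAME" exits 0 when the configured healthcheck
    # passes; the container name is taken from the log line above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True,
        text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(status, result.stdout.strip() or result.stderr.strip())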
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:01:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:54.480+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:54.481+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:55.517+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:55.529+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:55 compute-0 ceph-mon[75677]: pgmap v547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:56.505+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:56.571+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 631 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:01:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 631 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:01:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:57.506+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:57.610+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:57 compute-0 ceph-mon[75677]: pgmap v548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:58.524+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:58.601+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:58 compute-0 ceph-mon[75677]: pgmap v549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:01:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:01:59.495+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:01:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:01:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:01:59.613+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:01:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:01:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:00.528+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:00.578+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:00 compute-0 ceph-mon[75677]: pgmap v550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:01 compute-0 podman[164969]: 2025-11-24 20:02:01.511742665 +0000 UTC m=+8.773662714 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 24 20:02:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:01.530+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:01.562+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 641 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:01 compute-0 podman[165125]: 2025-11-24 20:02:01.75115801 +0000 UTC m=+0.099728644 container create 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:02:01 compute-0 podman[165125]: 2025-11-24 20:02:01.680715572 +0000 UTC m=+0.029286286 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 24 20:02:01 compute-0 python3[164955]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
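The PODMAN-CONTAINER-DEBUG entry above shows how edpm_container_manage expands its JSON config_data into a podman create invocation: each entry of the 'volumes' list becomes one --volume flag, 'net'/'pid'/'user' become --network/--pid/--user, and the healthcheck 'test' becomes --healthcheck-command. A sketch of the volume expansion alone, using a subset of the list from the log (the remaining entries follow the same pattern):

    import shlex

    # A few 'volumes' entries from the config_data shown above; the SELinux
    # and sharing suffixes (:z, :ro, :shared) pass through unchanged.
    volumes = [
        "/run/openvswitch:/run/openvswitch:z",
        "/run/netns:/run/netns:shared",
        "/var/lib/neutron:/var/lib/neutron:shared,z",
    ]
    args = [arg for vol in volumes for arg in ("--volume", vol)]
    print(shlex.join(args))

Running it prints the same "--volume src:dst:opts" sequence that appears verbatim in the logged podman create command.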
Nov 24 20:02:01 compute-0 sudo[164953]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 641 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:02.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:02 compute-0 sudo[165313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpfjxhcnabwbygauuhinqlnmvcvtmrne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014522.1553931-437-27344446182440/AnsiballZ_stat.py'
Nov 24 20:02:02 compute-0 sudo[165313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:02.604+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:02 compute-0 python3.9[165315]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:02:02 compute-0 sudo[165313]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:02 compute-0 ceph-mon[75677]: pgmap v551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:03 compute-0 sudo[165467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlbeilvqlgwyariaptmgiaqhdquerwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014523.113069-446-17081332624637/AnsiballZ_file.py'
Nov 24 20:02:03 compute-0 sudo[165467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:03.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:03.588+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:03 compute-0 python3.9[165469]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:03 compute-0 sudo[165467]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:03 compute-0 sudo[165543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zesedwyzniwycbysddyhrxlofnatczqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014523.113069-446-17081332624637/AnsiballZ_stat.py'
Nov 24 20:02:03 compute-0 sudo[165543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:04 compute-0 python3.9[165545]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:02:04 compute-0 sudo[165543]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:04.494+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:04.607+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:04 compute-0 sudo[165694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqpwjsqvvvlomnknfmguyicetsugziye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014524.2982519-446-27529521867343/AnsiballZ_copy.py'
Nov 24 20:02:04 compute-0 sudo[165694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:05 compute-0 ceph-mon[75677]: pgmap v552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:05 compute-0 python3.9[165696]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764014524.2982519-446-27529521867343/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
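[annotation] Each "ansible-<module> Invoked with ..." line records the resolved arguments of one Ansible task run by zuul on this host. A rough parser for these lines (the naive key=value split is an assumption that happens to hold for the invocations above, where no value contains a space):

    import re

    INVOKED_RE = re.compile(r"ansible-(\S+) Invoked with (.*)")

    def parse_invocation(line):
        """Return (module, {arg: value}) for an 'Invoked with' journal line.

        Splitting on whitespace is enough for the lines shown here; a
        general parser would need shlex-style handling of quoted values.
        """
        m = INVOKED_RE.search(line)
        if not m:
            return None
        module, raw = m.groups()
        args = dict(pair.split("=", 1) for pair in raw.split() if "=" in pair)
        return module, args

For the ansible-copy line above this yields module "copy" with dest=/etc/systemd/system/edpm_ovn_metadata_agent.service, which is why the subsequent tasks issue a daemon_reload and then restart the unit.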
Nov 24 20:02:05 compute-0 sudo[165694]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:05.477+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:05 compute-0 sudo[165770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlqzccopzcclxirtlvxjalrtctcaivuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014524.2982519-446-27529521867343/AnsiballZ_systemd.py'
Nov 24 20:02:05 compute-0 sudo[165770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:05.574+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:05 compute-0 python3.9[165772]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:02:05 compute-0 systemd[1]: Reloading.
Nov 24 20:02:05 compute-0 systemd-rc-local-generator[165799]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:02:05 compute-0 systemd-sysv-generator[165802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:02:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:06 compute-0 sudo[165770]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:06.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:06.564+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:06 compute-0 sudo[165881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpiqbpfdwbwcdyurtttrrkuzsdejrcsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014524.2982519-446-27529521867343/AnsiballZ_systemd.py'
Nov 24 20:02:06 compute-0 sudo[165881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:06 compute-0 python3.9[165883]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:06 compute-0 systemd[1]: Reloading.
Nov 24 20:02:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 646 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:07 compute-0 ceph-mon[75677]: pgmap v553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:07 compute-0 systemd-rc-local-generator[165909]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:02:07 compute-0 systemd-sysv-generator[165914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:02:07 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 20:02:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52763a611ba6334ce9138d50495a7518ce49bea1d5a17a1a11e536a42dc39c2/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52763a611ba6334ce9138d50495a7518ce49bea1d5a17a1a11e536a42dc39c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:07.510+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c.
Nov 24 20:02:07 compute-0 podman[165924]: 2025-11-24 20:02:07.527947564 +0000 UTC m=+0.189119487 container init 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + sudo -E kolla_set_configs
Nov 24 20:02:07 compute-0 podman[165924]: 2025-11-24 20:02:07.563293675 +0000 UTC m=+0.224465578 container start 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 20:02:07 compute-0 edpm-start-podman-container[165924]: ovn_metadata_agent
Nov 24 20:02:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:07.569+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Validating config file
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Copying service configuration files
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Writing out command to execute
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: ++ cat /run_command
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + CMD=neutron-ovn-metadata-agent
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + ARGS=
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + sudo kolla_copy_cacerts
Nov 24 20:02:07 compute-0 edpm-start-podman-container[165923]: Creating additional drop-in dependency for "ovn_metadata_agent" (9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c)
Nov 24 20:02:07 compute-0 podman[165946]: 2025-11-24 20:02:07.69623315 +0000 UTC m=+0.115399920 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
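[annotation] The podman event lines carry the full container definition in their config_data label. A sketch to recover it, assuming (as in the lines above) that the value is printed as a Python dict literal whose strings contain no brace characters, so brace counting plus ast.literal_eval suffices:

    import ast

    def extract_config_data(line):
        """Recover the config_data={...} label from a podman event line."""
        start = line.find("config_data=")
        if start == -1:
            return None
        i = line.find("{", start)
        if i == -1:
            return None
        depth = 0
        for j in range(i, len(line)):
            if line[j] == "{":
                depth += 1
            elif line[j] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i : j + 1])
        return None

extract_config_data(line)["volumes"] then lists the bind mounts recorded for ovn_metadata_agent, matching the xfs remount messages for /etc/neutron.conf.d and /var/lib/neutron earlier in this window.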
Nov 24 20:02:07 compute-0 systemd[1]: Reloading.
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + [[ ! -n '' ]]
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + . kolla_extend_start
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + umask 0022
Nov 24 20:02:07 compute-0 ovn_metadata_agent[165939]: + exec neutron-ovn-metadata-agent
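[annotation] The kolla_set_configs INFO lines above trace a config.json roughly of the following shape. This is a reconstruction from the log, not the file actually mounted at /var/lib/kolla/config_files/config.json; the owner and perm values below are assumptions, since the log does not show them:

    # Illustrative only -- inferred from the INFO lines, expressed as a
    # Python dict in kolla's command/config_files/permissions layout.
    KOLLA_CONFIG = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",   # assumption; not shown in the log
                "perm": "0600",       # assumption; not shown in the log
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron",
             "recurse": True},
        ],
    }

With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS the copy and permission steps rerun on every container start, which is why the same sequence appears each time the unit restarts.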
Nov 24 20:02:07 compute-0 systemd-rc-local-generator[166015]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:02:07 compute-0 systemd-sysv-generator[166019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:02:08 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 24 20:02:08 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 646 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:08 compute-0 sudo[165881]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:08 compute-0 sshd-session[156340]: Connection closed by 192.168.122.30 port 49532
Nov 24 20:02:08 compute-0 sshd-session[156337]: pam_unix(sshd:session): session closed for user zuul
Nov 24 20:02:08 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 24 20:02:08 compute-0 systemd[1]: session-48.scope: Consumed 1min 5.798s CPU time.
Nov 24 20:02:08 compute-0 systemd-logind[795]: Session 48 logged out. Waiting for processes to exit.
Nov 24 20:02:08 compute-0 systemd-logind[795]: Removed session 48.
Nov 24 20:02:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:08.506+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:08.605+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:09 compute-0 ceph-mon[75677]: pgmap v554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
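[annotation] The pgmap lines published by ceph-mgr and echoed by ceph-mon carry the placement-group state histogram and the usage figures; a small sketch, assuming this exact message shape:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(\d+): (\d+) pgs: (?P<states>[^;]+); (?P<usage>.+)"
    )

    def parse_pgmap(line):
        """Split a 'pgmap vN: ...' line into version, totals and state counts."""
        m = PGMAP_RE.search(line)
        if not m:
            return None
        states = {}
        for part in m.group("states").split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
        return {"version": int(m.group(1)), "pgs": int(m.group(2)),
                "states": states, "usage": m.group("usage")}

Applied to the v554 line above it reports 2 PGs in active+clean+laggy out of 305, which is consistent with the SLOW_OPS health check the monitor keeps updating.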
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.316 165944 INFO neutron.common.config [-] Logging enabled!
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.317 165944 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.317 165944 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.317 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.317 165944 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.318 165944 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.319 165944 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.320 165944 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.321 165944 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.322 165944 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.323 165944 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.324 165944 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.325 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.326 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.327 165944 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.328 165944 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.329 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.330 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.331 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.332 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.333 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.334 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.335 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.336 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.337 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.338 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.339 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.340 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.341 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.342 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.343 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.344 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.345 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.346 165944 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.347 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.348 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.349 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.350 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.351 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.351 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.351 165944 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.351 165944 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
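[annotation] The block ending at the asterisk line above is oslo.config's standard startup dump: with DEBUG logging enabled, ConfigOpts.log_opt_values() walks every registered option group and prints one "group.option = value" line per option, masking secret options such as transport_url as ****. A minimal sketch of the same mechanism, using a hypothetical option registration rather than neutron's real one:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Hypothetical option; neutron registers hundreds of these across groups.
    CONF.register_opts([cfg.IntOpt('quota_port', default=500)], group='QUOTAS')
    CONF([], project='demo')

    # Emits the same "QUOTAS.quota_port = 500" style lines seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)
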
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.360 165944 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.360 165944 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.360 165944 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.360 165944 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.360 165944 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
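[annotation] The five lines above show ovsdbapp preparing its client-side view of the local Open vSwitch database: it builds in-memory indices over Bridge.name, Port.name and Interface.name so lookups by name avoid full-table scans, then the underlying OVS IDL connects to ovsdb-server at tcp:127.0.0.1:6640. A rough sketch of that client setup (an approximation under ovsdbapp's public API, not the agent's actual code):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Same endpoint as in the log; 'Open_vSwitch' is the local switch schema.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=180)
    ovs = impl_idl.OvsdbIdl(conn)

    # Thanks to the Bridge.name index this is a keyed lookup, not a scan.
    print(ovs.br_exists('br-int').execute(check_error=True))
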
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.372 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 2981bd26-4511-4552-b2b8-c2a668887f38 (UUID: 2981bd26-4511-4552-b2b8-c2a668887f38) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.399 165944 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.400 165944 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.400 165944 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.400 165944 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.403 165944 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.409 165944 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.416 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '2981bd26-4511-4552-b2b8-c2a668887f38'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fed94fa3cd0>], external_ids={}, name=2981bd26-4511-4552-b2b8-c2a668887f38, nb_cfg_timestamp=1764014461179, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
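[annotation] The "Matched CREATE" line is ovsdbapp's event machinery at work: the agent registered a row event on the Chassis_Private table, filtered on its own chassis name, and the incoming row satisfied the conditions. A hedged reconstruction of the pattern (the real ChassisPrivateCreateEvent lives in neutron; this only mirrors the repr logged above):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            events = (self.ROW_CREATE,)                 # ('create',)
            conditions = (('name', '=', chassis_name),)
            super().__init__(events, 'Chassis_Private', conditions)

        def run(self, event, row, old):
            # Fires once the agent's own Chassis_Private row appears.
            print('chassis registered:', row.name)
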
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.417 165944 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fed94fa6b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.417 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.417 165944 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.418 165944 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.418 165944 INFO oslo_service.service [-] Starting 1 workers
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.421 165944 DEBUG oslo_service.service [-] Started child 166052 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
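[annotation] "Starting 1 workers" followed by "Started child 166052" is oslo.service's ProcessLauncher forking the metadata proxy worker; the lines tagged with PID 166052 further down come from that child. A minimal sketch of the launch pattern, with a placeholder worker class:

    from oslo_config import cfg
    from oslo_service import service

    class Worker(service.Service):        # placeholder, not the agent's class
        def start(self):
            super().start()
            # the real worker serves the metadata UNIX socket here

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(Worker(), workers=1)   # logs "Starting 1 workers"
    launcher.wait()
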
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.424 165944 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpkerlua49/privsep.sock']
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.427 166052 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1949082'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.463 166052 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.464 166052 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.464 166052 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.470 166052 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.480 166052 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 20:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.491 166052 INFO eventlet.wsgi.server [-] (166052) wsgi starting up on http:/var/lib/neutron/metadata_proxy
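[annotation] The child worker (166052) opens its own southbound connection and then binds an eventlet WSGI server to the UNIX socket /var/lib/neutron/metadata_proxy; eventlet prints UNIX-socket addresses in the slightly odd "http:/<path>" form seen above. A sketch of serving WSGI on a UNIX socket the same way, with a placeholder app and path:

    import socket
    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata proxy placeholder\n']

    sock = eventlet.listen('/tmp/metadata_proxy_demo', family=socket.AF_UNIX)
    wsgi.server(sock, app)   # logs: wsgi starting up on http:/tmp/...
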
Nov 24 20:02:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:09.524+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:09.591+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:10 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 20:02:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
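[annotation] Interleaved with the agent startup, both OSDs report slow requests through get_health_metrics, and ceph-mon aggregates them per pool ('vms' and 'default.rgw.log'); the pgmap line further down shows two PGs flagged active+clean+laggy. A hedged helper for the usual next diagnostic step, querying an OSD's admin socket for in-flight ops (admin-socket reachability and output fields vary by Ceph release and containerization):

    import json
    import subprocess

    def dump_ops(osd_id):
        # "ceph daemon osd.N dump_ops_in_flight" returns JSON over the
        # OSD's admin socket; run it where that socket is reachable.
        out = subprocess.check_output(
            ['ceph', 'daemon', 'osd.%d' % osd_id, 'dump_ops_in_flight'])
        return json.loads(out)

    for op in dump_ops(0).get('ops', []):
        print(op.get('age'), op.get('description'))
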
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.119 165944 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.120 165944 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkerlua49/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:09.996 166057 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.002 166057 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.006 166057 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.006 166057 INFO oslo.privsep.daemon [-] privsep daemon running as pid 166057
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.124 166057 DEBUG oslo.privsep.daemon [-] privsep: reply[702a3f0d-4d9b-4c58-b116-910d0d2229b3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
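[annotation] The lines above trace the oslo.privsep handshake: the unprivileged agent spawned a helper through sudo + neutron-rootwrap for the neutron.privileged.namespace_cmd context, the kernel noted the helper's use of v2 capabilities, and the daemon (PID 166057) confirmed it runs as uid/gid 0/0 holding only CAP_SYS_ADMIN, matching privsep_namespace.capabilities = [21] in the dump above (capability 21 is CAP_SYS_ADMIN). A sketch of the pattern with illustrative names, not neutron's actual definitions:

    from oslo_privsep import capabilities, priv_context

    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_namespace',            # config group seen above
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],  # capability number 21
    )

    @namespace_cmd.entrypoint
    def create_interface(name):
        # Body executes inside the root privsep daemon, not the agent.
        ...
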
Nov 24 20:02:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:10.515+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.544 166057 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.544 166057 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:02:10 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:10.544 166057 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:02:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:10.564+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.018 166057 DEBUG oslo.privsep.daemon [-] privsep: reply[f5798e8c-d4a9-44ef-b23e-3bb27ce46448]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.021 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, column=external_ids, values=({'neutron:ovn-metadata-id': '3c2c7dd8-0d44-5e8a-a148-2d7c6067421f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.036 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.048 165944 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.048 165944 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.048 165944 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.048 165944 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.048 165944 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.048 165944 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.049 165944 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.049 165944 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.049 165944 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.049 165944 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.049 165944 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.050 165944 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.050 165944 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.050 165944 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.050 165944 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.050 165944 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.050 165944 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.051 165944 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.052 165944 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.052 165944 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.052 165944 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.052 165944 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.052 165944 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.052 165944 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.053 165944 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.053 165944 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.053 165944 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.053 165944 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.053 165944 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.053 165944 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.054 165944 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.054 165944 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.054 165944 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.054 165944 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.054 165944 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.054 165944 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.055 165944 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.056 165944 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.057 165944 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.058 165944 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.058 165944 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.058 165944 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.058 165944 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.058 165944 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.058 165944 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.059 165944 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.060 165944 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.061 165944 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.062 165944 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.063 165944 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.064 165944 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.064 165944 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
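
Every option line in the dump above (and in the group-scoped dump that continues below) has the same shape: "<option> = <value> log_opt_values <path>:<lineno>", with secrets such as metadata_proxy_shared_secret and transport_url masked as "****". A short Python sketch that rebuilds a flat option dict from journal lines of this shape; the regex is an assumption fitted to the visible format, not to an oslo.config API.

    import re

    # Shape observed in the oslo_config log_opt_values output above;
    # group-scoped keys like "AGENT.root_helper" match the same pattern.
    OPT = re.compile(
        r"DEBUG oslo_service\.service \[-\] "
        r"(?P<key>[A-Za-z0-9_.]+)\s+= (?P<value>.*?) log_opt_values ")

    def collect_options(lines):
        """Return {option_name: raw_string_value} from a journal slice."""
        opts = {}
        for line in lines:
            m = OPT.search(line)
            if m:
                opts[m.group("key")] = m.group("value").strip()
        return opts

    # e.g. collect_options(journal)["nova_metadata_port"] == "8775"

Header and separator lines ("Full set of CONF:", the asterisk/equals rules) contain no "key = value" pair and are skipped automatically.
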
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.064 165944 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.064 165944 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.064 165944 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.064 165944 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.065 165944 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.065 165944 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.065 165944 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.065 165944 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.065 165944 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.065 165944 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.066 165944 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.066 165944 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.066 165944 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.066 165944 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.066 165944 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.066 165944 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.067 165944 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.067 165944 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.067 165944 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.067 165944 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.067 165944 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.067 165944 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.068 165944 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.069 165944 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.070 165944 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.071 165944 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.072 165944 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.073 165944 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.074 165944 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.075 165944 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.075 165944 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.075 165944 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.075 165944 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.075 165944 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.075 165944 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.076 165944 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.076 165944 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.076 165944 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.076 165944 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.077 165944 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.077 165944 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.077 165944 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.077 165944 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.077 165944 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.078 165944 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.078 165944 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.078 165944 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.078 165944 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.078 165944 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.079 165944 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:11 compute-0 ceph-mon[75677]: pgmap v555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.080 165944 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.081 165944 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.082 165944 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.083 165944 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.084 165944 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.084 165944 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.084 165944 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.084 165944 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.084 165944 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.084 165944 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.085 165944 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.086 165944 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.087 165944 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.088 165944 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.088 165944 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.088 165944 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.088 165944 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.088 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.088 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.089 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.090 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.090 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.090 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.090 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.090 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.090 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.091 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.092 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.093 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.093 165944 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.093 165944 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.093 165944 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.093 165944 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.093 165944 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:02:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:02:11.094 165944 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 20:02:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:11.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:11.603+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:12.528+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:12.587+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:13 compute-0 ceph-mon[75677]: pgmap v556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:13.512+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:13.577+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:13 compute-0 sshd-session[166062]: Accepted publickey for zuul from 192.168.122.30 port 47916 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 20:02:13 compute-0 systemd-logind[795]: New session 49 of user zuul.
Nov 24 20:02:13 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 24 20:02:13 compute-0 sshd-session[166062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 20:02:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:14.548+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:14.573+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:15 compute-0 python3.9[166215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 20:02:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:15 compute-0 ceph-mon[75677]: pgmap v557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:15.553+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:15.554+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:16 compute-0 sudo[166369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nodzftawvdpskksacopmlqbklieisvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014535.6810832-34-38814739880097/AnsiballZ_command.py'
Nov 24 20:02:16 compute-0 sudo[166369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:16 compute-0 python3.9[166371]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:16.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:16 compute-0 sudo[166369]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:16.597+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 651 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 651 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:17 compute-0 ceph-mon[75677]: pgmap v558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:17.536+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:17.638+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:17 compute-0 sudo[166534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvmnxszboxoragrdzggipuwxhygtdlym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014536.9885268-45-195712271363251/AnsiballZ_systemd_service.py'
Nov 24 20:02:17 compute-0 sudo[166534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:17 compute-0 python3.9[166536]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:02:18 compute-0 systemd[1]: Reloading.
Nov 24 20:02:18 compute-0 systemd-sysv-generator[166566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:02:18 compute-0 systemd-rc-local-generator[166561]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:02:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:18 compute-0 sudo[166534]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:18.521+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:18.677+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:19 compute-0 ceph-mon[75677]: pgmap v559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:19 compute-0 python3.9[166720]: ansible-ansible.builtin.service_facts Invoked
Nov 24 20:02:19 compute-0 network[166737]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 20:02:19 compute-0 network[166738]: 'network-scripts' will be removed from distribution in near future.
Nov 24 20:02:19 compute-0 network[166739]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 20:02:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:19.564+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:19.701+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:20.731+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:20.731+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 661 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:21.684+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:21.698+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:21 compute-0 ceph-mon[75677]: pgmap v560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:21 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 661 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:22.691+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:22.697+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:22 compute-0 ceph-mon[75677]: pgmap v561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:23.646+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:23.649+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:23 compute-0 sudo[166999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfngbdlnhbsevjetelgswnvvcaxfswyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014543.4317417-64-62176441883628/AnsiballZ_systemd_service.py'
Nov 24 20:02:23 compute-0 sudo[166999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:24 compute-0 python3.9[167001]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:24 compute-0 sudo[166999]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:02:24
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', 'images']
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:02:24 compute-0 podman[167003]: 2025-11-24 20:02:24.357712091 +0000 UTC m=+0.138795686 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:02:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:24.672+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:24.681+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:24 compute-0 sudo[167178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwphvmwvfrrtovywjggtjlzusrixzapd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014544.4230254-64-241660292591613/AnsiballZ_systemd_service.py'
Nov 24 20:02:24 compute-0 sudo[167178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:25 compute-0 python3.9[167180]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:25 compute-0 sudo[167178]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:25 compute-0 ceph-mgr[75975]: client.0 ms_handle_reset on v2:192.168.122.100:6800/103018990
Nov 24 20:02:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:25.686+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:25.704+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:25 compute-0 ceph-mon[75677]: pgmap v562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:25 compute-0 sudo[167331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngfxinaviuwdsfdpwlbxdqjlyjmrchmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014545.3826225-64-29417031187308/AnsiballZ_systemd_service.py'
Nov 24 20:02:25 compute-0 sudo[167331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:26 compute-0 python3.9[167333]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:26 compute-0 sudo[167331]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:26.675+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:26 compute-0 sudo[167484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djonyzqalhifofvaoypdlewyabxxuzgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014546.280198-64-110306963708648/AnsiballZ_systemd_service.py'
Nov 24 20:02:26 compute-0 sudo[167484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:26.726+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 666 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:27 compute-0 python3.9[167486]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:27 compute-0 sudo[167484]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:27.639+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:27 compute-0 sudo[167637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlfewxqebkzuetiqkgcmtsfaktastxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014547.292175-64-228854971496172/AnsiballZ_systemd_service.py'
Nov 24 20:02:27 compute-0 sudo[167637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:27.722+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:27 compute-0 ceph-mon[75677]: pgmap v563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 666 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:28 compute-0 python3.9[167639]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:28 compute-0 sudo[167637]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:28.640+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:28 compute-0 sudo[167790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psjwqptscdipkybiowydlkiayogbhtml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014548.2778041-64-76061173995499/AnsiballZ_systemd_service.py'
Nov 24 20:02:28 compute-0 sudo[167790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:28.730+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:28 compute-0 python3.9[167792]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:29.605+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:29.754+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:29 compute-0 ceph-mon[75677]: pgmap v564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:30 compute-0 sudo[167790]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:30.617+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:30 compute-0 sudo[167943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwrvusqfyeewxnszdlxbbytmfzdrbdaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014550.2778225-64-196151095044047/AnsiballZ_systemd_service.py'
Nov 24 20:02:30 compute-0 sudo[167943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:30.744+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:31 compute-0 python3.9[167945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:02:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:31.657+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:31.709+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:31 compute-0 ceph-mon[75677]: pgmap v565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:32 compute-0 sudo[167943]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:32.611+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:32.677+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:32 compute-0 sudo[168096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djqxayzfaggtupihybxkanapbueyfuvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014552.4067066-116-240704041114188/AnsiballZ_file.py'
Nov 24 20:02:32 compute-0 sudo[168096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:33 compute-0 python3.9[168098]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:33 compute-0 sudo[168096]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:33.620+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:33.704+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:33 compute-0 sudo[168248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvtvoxyoqerokobvzqjptglxnqroulzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014553.374136-116-25012222289601/AnsiballZ_file.py'
Nov 24 20:02:33 compute-0 sudo[168248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:33 compute-0 ceph-mon[75677]: pgmap v566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:34 compute-0 python3.9[168250]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:34 compute-0 sudo[168248]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:02:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:34.655+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:34.664+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:34 compute-0 sudo[168401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvxcnlliiiktsfudcpshqbzdijcwotfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014554.32725-116-82984800025947/AnsiballZ_file.py'
Nov 24 20:02:34 compute-0 sudo[168401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:34 compute-0 python3.9[168403]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:34 compute-0 sudo[168401]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:35 compute-0 sudo[168553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcglkbwbmsihzwbdemwlvklougyknzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014555.2023656-116-249506988161028/AnsiballZ_file.py'
Nov 24 20:02:35 compute-0 sudo[168553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:35.662+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:35.682+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:35 compute-0 python3.9[168555]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:35 compute-0 sudo[168553]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:35 compute-0 ceph-mon[75677]: pgmap v567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:36 compute-0 sudo[168705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzxstxbemzezchyqyyudytbhkcobvtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014555.9778621-116-237006273495871/AnsiballZ_file.py'
Nov 24 20:02:36 compute-0 sudo[168705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:36 compute-0 python3.9[168707]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 671 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:36 compute-0 sudo[168705]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:36.655+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:36.664+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:36 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 671 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:37 compute-0 sudo[168857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgylcigxdkqbgmgqvzabwqocfwmllrso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014556.7641492-116-20877243446239/AnsiballZ_file.py'
Nov 24 20:02:37 compute-0 sudo[168857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:37 compute-0 python3.9[168859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:37 compute-0 sudo[168857]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:37.610+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:37.665+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:37 compute-0 sudo[169009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdmzbjlbxdduwmjpwitpcihidzhrgbef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014557.5248141-116-183873806660177/AnsiballZ_file.py'
Nov 24 20:02:37 compute-0 sudo[169009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:37 compute-0 ceph-mon[75677]: pgmap v568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:38 compute-0 python3.9[169011]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:38 compute-0 sudo[169009]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:38.618+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:38.620+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:38 compute-0 podman[169135]: 2025-11-24 20:02:38.708757265 +0000 UTC m=+0.055752241 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 24 20:02:38 compute-0 sudo[169180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnczbgmhvligvtmupdgatsqrffepxjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014558.3754191-166-53697005522246/AnsiballZ_file.py'
Nov 24 20:02:38 compute-0 sudo[169180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:38 compute-0 python3.9[169182]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:38 compute-0 sudo[169180]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:39 compute-0 sudo[169332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrubgduzliwofphwgstzqigyecuvpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014559.118214-166-231978545704779/AnsiballZ_file.py'
Nov 24 20:02:39 compute-0 sudo[169332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:39.636+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:39.665+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:39 compute-0 python3.9[169334]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:39 compute-0 sudo[169332]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:40 compute-0 ceph-mon[75677]: pgmap v569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:02:40 compute-0 sudo[169484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvofrvjsikmxsejjfgimcfroqtvsuaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014559.917571-166-99399422186574/AnsiballZ_file.py'
Nov 24 20:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:02:40 compute-0 sudo[169484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:02:40 compute-0 python3.9[169486]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:40 compute-0 sudo[169484]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:40.634+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:40.680+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:41 compute-0 sudo[169636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhrjiuqnrhfxdunaegftocvorbllzljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014560.7449658-166-197185981641143/AnsiballZ_file.py'
Nov 24 20:02:41 compute-0 sudo[169636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:41 compute-0 python3.9[169638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:41 compute-0 sudo[169636]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 681 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:41.616+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:41.639+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:41 compute-0 sudo[169788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdjjgbisyoilijzfssifqvhzbdqxgvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014561.5064225-166-119450806268198/AnsiballZ_file.py'
Nov 24 20:02:41 compute-0 sudo[169788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:42 compute-0 ceph-mon[75677]: pgmap v570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 681 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:42 compute-0 python3.9[169790]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:42 compute-0 sudo[169788]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:42.635+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:42 compute-0 sudo[169940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otjoqahqgkmviuszgetsqmpwvibdbcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014562.282172-166-189013322627603/AnsiballZ_file.py'
Nov 24 20:02:42 compute-0 sudo[169940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:42.662+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:42 compute-0 python3.9[169942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:42 compute-0 sudo[169940]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:43 compute-0 sudo[170092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbhhyhhnubwtgnnczfhqebrmhqjjlnkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014563.0624142-166-202031476093045/AnsiballZ_file.py'
Nov 24 20:02:43 compute-0 sudo[170092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:43 compute-0 python3.9[170094]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:02:43 compute-0 sudo[170092]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:43.647+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:43.670+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:44 compute-0 ceph-mon[75677]: pgmap v571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:44 compute-0 sudo[170244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxsiadbpsktghyseanyrscynkxdsxmew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014563.9374797-217-100876090290339/AnsiballZ_command.py'
Nov 24 20:02:44 compute-0 sudo[170244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:44 compute-0 python3.9[170246]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:44.641+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:44 compute-0 sudo[170244]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:44.660+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:45 compute-0 python3.9[170398]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 20:02:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:45.616+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:45.697+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:46 compute-0 ceph-mon[75677]: pgmap v572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:46 compute-0 sudo[170548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdybbfpmoayeyrnpyekvwibehyqdkfib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014565.8674123-235-240256332567151/AnsiballZ_systemd_service.py'
Nov 24 20:02:46 compute-0 sudo[170548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:46 compute-0 python3.9[170550]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:02:46 compute-0 systemd[1]: Reloading.
Nov 24 20:02:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:46.586+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.622654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014566622719, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1394, "num_deletes": 258, "total_data_size": 1588439, "memory_usage": 1617776, "flush_reason": "Manual Compaction"}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014566633251, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1553146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12878, "largest_seqno": 14271, "table_properties": {"data_size": 1546944, "index_size": 3084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16544, "raw_average_key_size": 20, "raw_value_size": 1532884, "raw_average_value_size": 1918, "num_data_blocks": 137, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014476, "oldest_key_time": 1764014476, "file_creation_time": 1764014566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10625 microseconds, and 5026 cpu microseconds.
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.633290) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1553146 bytes OK
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.633305) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.635002) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.635016) EVENT_LOG_v1 {"time_micros": 1764014566635012, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.635031) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1581759, prev total WAL file size 1581759, number of live WAL files 2.
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.635703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323535' seq:0, type:0; will stop at (end)
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1516KB)], [29(7086KB)]
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014566635733, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 8810019, "oldest_snapshot_seqno": -1}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 5150 keys, 8436287 bytes, temperature: kUnknown
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014566685087, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 8436287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8400873, "index_size": 21416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 130658, "raw_average_key_size": 25, "raw_value_size": 8306158, "raw_average_value_size": 1612, "num_data_blocks": 891, "num_entries": 5150, "num_filter_entries": 5150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:02:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:46.685+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.685300) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 8436287 bytes
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.686463) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.3 rd, 170.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 6.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(11.1) write-amplify(5.4) OK, records in: 5678, records dropped: 528 output_compression: NoCompression
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.686483) EVENT_LOG_v1 {"time_micros": 1764014566686475, "job": 12, "event": "compaction_finished", "compaction_time_micros": 49425, "compaction_time_cpu_micros": 21996, "output_level": 6, "num_output_files": 1, "total_output_size": 8436287, "num_input_records": 5678, "num_output_records": 5150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014566686867, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014566688189, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.635623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.688311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.688320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.688323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.688326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:02:46 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:02:46.688329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:02:46 compute-0 systemd-rc-local-generator[170577]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:02:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:46 compute-0 systemd-sysv-generator[170580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:02:46 compute-0 sudo[170548]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:47 compute-0 sudo[170735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmczcyerytvrascwrgootawmenxbijb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014567.1971698-243-149895370199098/AnsiballZ_command.py'
Nov 24 20:02:47 compute-0 sudo[170735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:47.564+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 686 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:47.690+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:47 compute-0 python3.9[170737]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:47 compute-0 sudo[170735]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:48 compute-0 sudo[170888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyvaauthfokiwfmstjcjkuxgxljkqhcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014567.9701-243-157137832228185/AnsiballZ_command.py'
Nov 24 20:02:48 compute-0 sudo[170888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:48.534+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:48 compute-0 python3.9[170890]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:48 compute-0 sudo[170888]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:48 compute-0 ceph-mon[75677]: pgmap v573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 686 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:48.715+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:49 compute-0 sudo[171041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upximxndvdublzhmwuslfcknsdwneuwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014568.7736058-243-3037962547408/AnsiballZ_command.py'
Nov 24 20:02:49 compute-0 sudo[171041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:49 compute-0 python3.9[171043]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:49 compute-0 sudo[171041]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:49.550+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:49.756+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:49 compute-0 sudo[171127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:49 compute-0 sudo[171127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:49 compute-0 sudo[171127]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:49 compute-0 sudo[171187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:02:49 compute-0 sudo[171187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:49 compute-0 sudo[171187]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:49 compute-0 sudo[171257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqszgyxzovvyoglkcezfkqlnsrbaoyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014569.5694258-243-242293457401998/AnsiballZ_command.py'
Nov 24 20:02:49 compute-0 sudo[171257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:49 compute-0 sudo[171229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:49 compute-0 sudo[171229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:50 compute-0 sudo[171229]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:50 compute-0 sudo[171272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:02:50 compute-0 sudo[171272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:50 compute-0 python3.9[171269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:50 compute-0 sudo[171257]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:50.512+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:50 compute-0 sudo[171272]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:50 compute-0 ceph-mon[75677]: pgmap v574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:02:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:02:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:02:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:02:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:02:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:02:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b18b6e5f-8767-461f-9c3a-a9793fc1a8d6 does not exist
Nov 24 20:02:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 713f7bbd-67a9-46bb-a9b5-88f703ca1694 does not exist
Nov 24 20:02:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 891a8899-4b36-47dd-8719-da6a0fcefa2e does not exist
Nov 24 20:02:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:02:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:02:50 compute-0 sudo[171480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqbwmajctnlraelltjnsbphymnildvxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014570.3716483-243-178967842872842/AnsiballZ_command.py'
Nov 24 20:02:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:02:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:02:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:02:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:02:50 compute-0 sudo[171480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:50 compute-0 sudo[171483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:50 compute-0 sudo[171483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:50 compute-0 sudo[171483]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:50.802+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:50 compute-0 sudo[171508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:02:50 compute-0 sudo[171508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:50 compute-0 sudo[171508]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:50 compute-0 python3.9[171482]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:50 compute-0 sudo[171533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:50 compute-0 sudo[171533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:50 compute-0 sudo[171533]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:50 compute-0 sudo[171480]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:51 compute-0 sudo[171559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:02:51 compute-0 sudo[171559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:51.471+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.475553149 +0000 UTC m=+0.073340129 container create e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:02:51 compute-0 systemd[1]: Started libpod-conmon-e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80.scope.
Nov 24 20:02:51 compute-0 sudo[171786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwoksajlpeglifkeubnavxkcljbrazpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014571.1434078-243-13795629473917/AnsiballZ_command.py'
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.444019346 +0000 UTC m=+0.041806376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:02:51 compute-0 sudo[171786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.589184924 +0000 UTC m=+0.186971964 container init e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.599337612 +0000 UTC m=+0.197124602 container start e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.603401806 +0000 UTC m=+0.201188836 container attach e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:02:51 compute-0 elated_chatterjee[171790]: 167 167
Nov 24 20:02:51 compute-0 systemd[1]: libpod-e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80.scope: Deactivated successfully.
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.610466085 +0000 UTC m=+0.208253055 container died e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:02:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7ba30be85b35a42a8ceacfaf25ea3752e3676c9489a72c73e17226da4ceec7-merged.mount: Deactivated successfully.
Nov 24 20:02:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:02:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:02:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:02:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:02:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:02:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:02:51 compute-0 podman[171743]: 2025-11-24 20:02:51.667769995 +0000 UTC m=+0.265556975 container remove e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:02:51 compute-0 systemd[1]: libpod-conmon-e70527c6999a83c37c595345d6a844cdd4585ed7e7f9c35b6bd6cfad95b95f80.scope: Deactivated successfully.
Nov 24 20:02:51 compute-0 python3.9[171792]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:51 compute-0 sudo[171786]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:51.792+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:51 compute-0 podman[171815]: 2025-11-24 20:02:51.866538888 +0000 UTC m=+0.064480313 container create 668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_beaver, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:02:51 compute-0 systemd[1]: Started libpod-conmon-668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d.scope.
Nov 24 20:02:51 compute-0 podman[171815]: 2025-11-24 20:02:51.844220859 +0000 UTC m=+0.042162284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:02:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25647d8e4f270a159317cc7ccd8eb3ff2f5a9094c8517c6007ecf939e148f15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25647d8e4f270a159317cc7ccd8eb3ff2f5a9094c8517c6007ecf939e148f15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25647d8e4f270a159317cc7ccd8eb3ff2f5a9094c8517c6007ecf939e148f15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25647d8e4f270a159317cc7ccd8eb3ff2f5a9094c8517c6007ecf939e148f15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25647d8e4f270a159317cc7ccd8eb3ff2f5a9094c8517c6007ecf939e148f15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:51 compute-0 podman[171815]: 2025-11-24 20:02:51.99070374 +0000 UTC m=+0.188645165 container init 668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_beaver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:02:52 compute-0 podman[171815]: 2025-11-24 20:02:52.006075572 +0000 UTC m=+0.204016967 container start 668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:02:52 compute-0 podman[171815]: 2025-11-24 20:02:52.010153175 +0000 UTC m=+0.208094570 container attach 668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_beaver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:02:52 compute-0 sudo[171986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wahsawexowmtqhxfuymvkarimahpgyqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014571.9415343-243-252236891793574/AnsiballZ_command.py'
Nov 24 20:02:52 compute-0 sudo[171986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:52.430+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:52 compute-0 python3.9[171988]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:02:52 compute-0 ceph-mon[75677]: pgmap v575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:52.756+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:53 compute-0 gifted_beaver[171876]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:02:53 compute-0 gifted_beaver[171876]: --> relative data size: 1.0
Nov 24 20:02:53 compute-0 gifted_beaver[171876]: --> All data devices are unavailable
Nov 24 20:02:53 compute-0 systemd[1]: libpod-668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d.scope: Deactivated successfully.
Nov 24 20:02:53 compute-0 podman[171815]: 2025-11-24 20:02:53.254466327 +0000 UTC m=+1.452407752 container died 668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:02:53 compute-0 systemd[1]: libpod-668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d.scope: Consumed 1.191s CPU time.
Nov 24 20:02:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b25647d8e4f270a159317cc7ccd8eb3ff2f5a9094c8517c6007ecf939e148f15-merged.mount: Deactivated successfully.
Nov 24 20:02:53 compute-0 podman[171815]: 2025-11-24 20:02:53.337831341 +0000 UTC m=+1.535772756 container remove 668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_beaver, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:02:53 compute-0 systemd[1]: libpod-conmon-668b9577897f7448179fef13bc8afe22418925e4a1c2ed64d67aeb3f62a3fe2d.scope: Deactivated successfully.
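
Note: the gifted_beaver container above is a short-lived cephadm helper running ceph-volume; its "-->" lines report that all three LVM data devices are already consumed ("All data devices are unavailable"), so this OSD-prepare pass is a no-op and the container exits and is removed within about a second. A minimal sketch, assuming the journal has been saved to a plain-text file laid out like the lines above (the filename is hypothetical), for pulling those "-->" progress lines back out:

    # Sketch: extract ceph-volume "-->" progress lines for one container
    # from a saved journal dump in the syslog-style layout seen above.
    import re

    PATTERN = re.compile(r"gifted_beaver\[\d+\]: (-->.*)$")

    def ceph_volume_progress(path="compute-0-journal.txt"):  # hypothetical path
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    yield m.group(1)

    for msg in ceph_volume_progress():
        print(msg)  # e.g. "--> All data devices are unavailable"
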
Nov 24 20:02:53 compute-0 sudo[171559]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:53.473+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:53 compute-0 sudo[172028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:53 compute-0 sudo[172028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:53 compute-0 sudo[172028]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:53 compute-0 sudo[171986]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:53 compute-0 sudo[172053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:02:53 compute-0 sudo[172053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:53 compute-0 sudo[172053]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:53 compute-0 sudo[172095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:53 compute-0 sudo[172095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:53 compute-0 sudo[172095]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:53.743+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:53 compute-0 sudo[172127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:02:53 compute-0 sudo[172127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
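
Note: the sudo line at 20:02:53 shows cephadm's pattern for driving ceph-volume: the per-cluster copy of cephadm under /var/lib/ceph/<fsid>/ is run with --image and --timeout, and everything after "--" is handed to ceph-volume inside the container. A sketch of issuing the same query and decoding its JSON, assuming a cephadm entry point on PATH; the fsid, image digest, and flags are copied from the logged command:

    # Sketch: mirror the logged "cephadm ... ceph-volume -- lvm list --format json"
    # call and parse the JSON it prints (the document captured at 20:02:55 below).
    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def lvm_list():
        out = subprocess.run(
            ["cephadm", "--image", IMAGE, "--timeout", "895",
             "ceph-volume", "--fsid", FSID,
             "--", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)
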
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.255391841 +0000 UTC m=+0.067292605 container create 8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:02:54 compute-0 systemd[1]: Started libpod-conmon-8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb.scope.
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.229167282 +0000 UTC m=+0.041068096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:02:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.353464089 +0000 UTC m=+0.165364853 container init 8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.364009317 +0000 UTC m=+0.175910081 container start 8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.367885116 +0000 UTC m=+0.179785880 container attach 8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:02:54 compute-0 sad_haibt[172284]: 167 167
Nov 24 20:02:54 compute-0 systemd[1]: libpod-8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb.scope: Deactivated successfully.
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.372185505 +0000 UTC m=+0.184086279 container died 8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7386cc5472a7696cbed4289f148533caca4c2111b89e598a2530e24a4adfc0cc-merged.mount: Deactivated successfully.
Nov 24 20:02:54 compute-0 podman[172245]: 2025-11-24 20:02:54.423251065 +0000 UTC m=+0.235151829 container remove 8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:02:54 compute-0 systemd[1]: libpod-conmon-8fa8fcc75d3049dc8d78a85939f784fce82b100750c5d16d9c569b7d309f7afb.scope: Deactivated successfully.
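
Note: the only output of the sad_haibt container is "167 167" (condescending_knuth repeats it below). This looks like cephadm probing which uid and gid own the Ceph data paths inside the image before deploying daemons; 167:167 is the ceph user and group in upstream Ceph images. The exact probe command is not recorded in the log, so the sketch below is an assumption about its shape, not a transcript:

    # Sketch (assumed probe): ask the Ceph image which uid/gid owns
    # /var/lib/ceph, matching the "167 167" one-shot containers above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    uid, gid = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print(uid, gid)  # expected: 167 167
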
Nov 24 20:02:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:54.455+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:54 compute-0 podman[172290]: 2025-11-24 20:02:54.583030766 +0000 UTC m=+0.160943371 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
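
Note: the podman line above is a health_status event for ovn_controller: health_status=healthy with health_failing_streak=0 means the probe mounted at /openstack/healthchecks/ovn_controller is passing, and the embedded config_data is the container definition managed by edpm_ansible. A sketch for following such events live, assuming a podman recent enough that `podman events` can emit JSON (field names vary across versions, so the code reads them defensively):

    # Sketch: stream podman health_status events and print container + status.
    # Assumes `podman events --format json` (one JSON object per line).
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:  # blocks; runs until interrupted
        ev = json.loads(line)
        print(ev.get("Name", "?"), ev.get("HealthStatus", ev.get("Status", "?")))
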
Nov 24 20:02:54 compute-0 sudo[172377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsedukdlzhfbrqquclnyjgrakjszmqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014573.915398-297-42685251174179/AnsiballZ_getent.py'
Nov 24 20:02:54 compute-0 sudo[172377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:54 compute-0 podman[172385]: 2025-11-24 20:02:54.709574129 +0000 UTC m=+0.070827326 container create b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:02:54 compute-0 ceph-mon[75677]: pgmap v576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:54 compute-0 systemd[1]: Started libpod-conmon-b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96.scope.
Nov 24 20:02:54 compute-0 podman[172385]: 2025-11-24 20:02:54.68374903 +0000 UTC m=+0.045002247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:02:54 compute-0 python3.9[172386]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 20:02:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:54.789+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0828ec074b6759166fd41d374883a797a1d993439c92d01dba91782ab0bbfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0828ec074b6759166fd41d374883a797a1d993439c92d01dba91782ab0bbfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0828ec074b6759166fd41d374883a797a1d993439c92d01dba91782ab0bbfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0828ec074b6759166fd41d374883a797a1d993439c92d01dba91782ab0bbfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:54 compute-0 sudo[172377]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:54 compute-0 podman[172385]: 2025-11-24 20:02:54.832301454 +0000 UTC m=+0.193554701 container init b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hodgkin, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:02:54 compute-0 podman[172385]: 2025-11-24 20:02:54.846105805 +0000 UTC m=+0.207359012 container start b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:02:54 compute-0 podman[172385]: 2025-11-24 20:02:54.852099769 +0000 UTC m=+0.213353016 container attach b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:02:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:55.485+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:55 compute-0 sudo[172560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdcvtkksrxeoqaewfiuvdaepcpkjsjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014575.022473-305-263361847878486/AnsiballZ_group.py'
Nov 24 20:02:55 compute-0 sudo[172560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]: {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:     "0": [
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:         {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "devices": [
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "/dev/loop3"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             ],
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_name": "ceph_lv0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_size": "21470642176",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "name": "ceph_lv0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "tags": {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cluster_name": "ceph",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.crush_device_class": "",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.encrypted": "0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osd_id": "0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.type": "block",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.vdo": "0"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             },
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "type": "block",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "vg_name": "ceph_vg0"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:         }
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:     ],
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:     "1": [
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:         {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "devices": [
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "/dev/loop4"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             ],
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_name": "ceph_lv1",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_size": "21470642176",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "name": "ceph_lv1",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "tags": {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cluster_name": "ceph",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.crush_device_class": "",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.encrypted": "0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osd_id": "1",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.type": "block",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.vdo": "0"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             },
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "type": "block",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "vg_name": "ceph_vg1"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:         }
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:     ],
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:     "2": [
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:         {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "devices": [
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "/dev/loop5"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             ],
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_name": "ceph_lv2",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_size": "21470642176",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "name": "ceph_lv2",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "tags": {
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.cluster_name": "ceph",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.crush_device_class": "",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.encrypted": "0",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osd_id": "2",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.type": "block",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:                 "ceph.vdo": "0"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             },
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "type": "block",
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:             "vg_name": "ceph_vg2"
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:         }
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]:     ]
Nov 24 20:02:55 compute-0 busy_hodgkin[172403]: }
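
Note: the busy_hodgkin output above is the complete `ceph-volume lvm list --format json` document requested at 20:02:53: a map from OSD id ("0", "1", "2") to a list of LV records, each carrying the backing device, LV path and size, and the ceph.* tags (cluster fsid, osd_fsid, osd_id, osdspec_affinity, encryption flag). A small sketch that condenses the captured JSON into one line per OSD, reading it from stdin:

    # Sketch: summarize `ceph-volume lvm list --format json` output
    # (as captured above) into "osd.N device lv_path osd_fsid" lines.
    # Usage: python3 summarize_lvm.py < lvm-list.json
    import json
    import sys

    data = json.load(sys.stdin)
    for osd_id, records in sorted(data.items(), key=lambda kv: int(kv[0])):
        for rec in records:
            print("osd.%s %s %s %s" % (
                osd_id,
                ",".join(rec.get("devices", [])),   # e.g. /dev/loop3
                rec.get("lv_path", "?"),            # e.g. /dev/ceph_vg0/ceph_lv0
                rec.get("tags", {}).get("ceph.osd_fsid", "?"),
            ))
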
Nov 24 20:02:55 compute-0 systemd[1]: libpod-b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96.scope: Deactivated successfully.
Nov 24 20:02:55 compute-0 podman[172385]: 2025-11-24 20:02:55.665375232 +0000 UTC m=+1.026628429 container died b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0828ec074b6759166fd41d374883a797a1d993439c92d01dba91782ab0bbfb-merged.mount: Deactivated successfully.
Nov 24 20:02:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:55 compute-0 podman[172385]: 2025-11-24 20:02:55.757877948 +0000 UTC m=+1.119131125 container remove b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 24 20:02:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:55.758+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:55 compute-0 python3.9[172564]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 20:02:55 compute-0 systemd[1]: libpod-conmon-b5a45a7f0f8dbc114436cbc0dacf994872407206c15993099eca274b6d06fe96.scope: Deactivated successfully.
Nov 24 20:02:55 compute-0 sudo[172127]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:55 compute-0 groupadd[172577]: group added to /etc/group: name=libvirt, GID=42473
Nov 24 20:02:55 compute-0 groupadd[172577]: group added to /etc/gshadow: name=libvirt
Nov 24 20:02:55 compute-0 groupadd[172577]: new group: name=libvirt, GID=42473
Nov 24 20:02:55 compute-0 sudo[172560]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:55 compute-0 sudo[172578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:55 compute-0 sudo[172578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:55 compute-0 sudo[172578]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:55 compute-0 sudo[172608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:02:55 compute-0 sudo[172608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:55 compute-0 sudo[172608]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:56 compute-0 sudo[172657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:56 compute-0 sudo[172657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:56 compute-0 sudo[172657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:56 compute-0 sudo[172682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:02:56 compute-0 sudo[172682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:56.524+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.551343787 +0000 UTC m=+0.064056772 container create d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:02:56 compute-0 systemd[1]: Started libpod-conmon-d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9.scope.
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.524259187 +0000 UTC m=+0.036972222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:02:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 691 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:02:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:02:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.663379201 +0000 UTC m=+0.176092226 container init d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.674296099 +0000 UTC m=+0.187009084 container start d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.67943693 +0000 UTC m=+0.192149985 container attach d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:02:56 compute-0 condescending_knuth[172861]: 167 167
Nov 24 20:02:56 compute-0 systemd[1]: libpod-d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9.scope: Deactivated successfully.
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.681760409 +0000 UTC m=+0.194473394 container died d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:02:56 compute-0 sudo[172891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oicvjnbmbinzahvfqdfgotqqdconhkxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014576.0991216-313-237252435019538/AnsiballZ_user.py'
Nov 24 20:02:56 compute-0 sudo[172891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6263bf6aab7a7896439025b4301ed678fd8edbf6ccd23e01db3a793a951b2f9a-merged.mount: Deactivated successfully.
Nov 24 20:02:56 compute-0 ceph-mon[75677]: pgmap v577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 691 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
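
Note: SLOW_OPS is the monitor-side health check raised when OSD ops stay blocked past the complaint threshold; here the oldest op has been blocked 691 seconds across osd.0 and osd.1, consistent with the per-OSD get_health_metrics lines repeating above. A sketch for testing the same condition programmatically, assuming the ceph CLI and the usual "checks" layout of `ceph health detail --format json` (verify the key names against the deployed release):

    # Sketch: report whether a SLOW_OPS health check is currently active.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    slow = json.loads(out).get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "20 slow ops, oldest one blocked for 691 sec, ..."
        print(slow["summary"]["message"])
    else:
        print("no SLOW_OPS check active")
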
Nov 24 20:02:56 compute-0 podman[172821]: 2025-11-24 20:02:56.739948711 +0000 UTC m=+0.252661686 container remove d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:02:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:56 compute-0 systemd[1]: libpod-conmon-d6be8550052d722109178560c77c27dccbf9eafe00650d924e622f4d7dfdd9f9.scope: Deactivated successfully.
Nov 24 20:02:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:56.760+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:56 compute-0 python3.9[172901]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 20:02:56 compute-0 podman[172913]: 2025-11-24 20:02:56.977075471 +0000 UTC m=+0.077127275 container create c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:02:56 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:02:57 compute-0 systemd[1]: Started libpod-conmon-c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d.scope.
Nov 24 20:02:57 compute-0 useradd[172928]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
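
Note: between the group and user module invocations above, Ansible first creates the libvirt group with a fixed GID (42473) and then the libvirt user with the matching fixed UID and a /sbin/nologin shell, keeping ids consistent across compute nodes. A quick verification sketch using only the standard library, with the ids taken from the groupadd/useradd lines above:

    # Sketch: confirm the libvirt user/group just created have the
    # fixed ids (42473) and nologin shell recorded in the log.
    import grp
    import pwd

    g = grp.getgrnam("libvirt")
    u = pwd.getpwnam("libvirt")
    assert g.gr_gid == 42473, g.gr_gid
    assert u.pw_uid == 42473 and u.pw_gid == g.gr_gid
    assert u.pw_shell == "/sbin/nologin", u.pw_shell
    print("libvirt ok:", u.pw_uid, u.pw_gid, u.pw_shell)
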
Nov 24 20:02:57 compute-0 podman[172913]: 2025-11-24 20:02:56.947283291 +0000 UTC m=+0.047335185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:02:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf10adeee0a3b9dedc049f13459f4b948f8ad2556db080ef5b48d32b404dc7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf10adeee0a3b9dedc049f13459f4b948f8ad2556db080ef5b48d32b404dc7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf10adeee0a3b9dedc049f13459f4b948f8ad2556db080ef5b48d32b404dc7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf10adeee0a3b9dedc049f13459f4b948f8ad2556db080ef5b48d32b404dc7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
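
Note: the kernel xfs lines above (and the matching set at 20:02:54) are informational: the overlay-backed bind mounts live on an XFS filesystem created without the bigtime feature, so inode timestamps are only representable until 2038 (0x7fffffff). A sketch for checking whether a given XFS mount has bigtime enabled, assuming an xfsprogs new enough for xfs_info to print the flag (roughly 5.10+):

    # Sketch: check the XFS "bigtime" feature for a mount point.
    import re
    import subprocess

    def has_bigtime(mount="/var/lib/containers"):
        out = subprocess.run(
            ["xfs_info", mount], check=True, capture_output=True, text=True,
        ).stdout
        m = re.search(r"bigtime=(\d)", out)
        return m is not None and m.group(1) == "1"

    print(has_bigtime())  # False would match the 2038 warnings above
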
Nov 24 20:02:57 compute-0 podman[172913]: 2025-11-24 20:02:57.079324425 +0000 UTC m=+0.179376279 container init c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:02:57 compute-0 podman[172913]: 2025-11-24 20:02:57.094719817 +0000 UTC m=+0.194771651 container start c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:02:57 compute-0 podman[172913]: 2025-11-24 20:02:57.098637646 +0000 UTC m=+0.198689470 container attach c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:02:57 compute-0 sudo[172891]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:57.531+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:57.792+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:57 compute-0 sudo[173104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkwhqqmilosphjqrtcadrrydzrgthfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014577.4816053-324-234014863444996/AnsiballZ_setup.py'
Nov 24 20:02:57 compute-0 sudo[173104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]: {
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "osd_id": 2,
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "type": "bluestore"
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:     },
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "osd_id": 1,
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "type": "bluestore"
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:     },
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "osd_id": 0,
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:         "type": "bluestore"
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]:     }
Nov 24 20:02:58 compute-0 intelligent_nightingale[172933]: }
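
The JSON block above is the short-lived ceph container (intelligent_nightingale) enumerating this host's OSDs: each key is an osd_uuid mapped to its ceph_fsid, backing LVM device, osd_id, and bluestore type. The shape appears to match ceph-volume raw list output, which cephadm gathers when refreshing host devices (note the config-key set for mgr/cephadm/host.compute-0.devices.0 a few entries below). A minimal parsing sketch, using a trimmed copy of the inventory above as the assumed input:

    import json

    # One entry copied from the inventory above; the real output has three.
    raw = '''{
      "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
        "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
        "type": "bluestore"
      }
    }'''

    inventory = json.loads(raw)
    # Map osd_id -> backing device, e.g. {2: '/dev/mapper/ceph_vg2-ceph_lv2'}
    devices = {osd["osd_id"]: osd["device"] for osd in inventory.values()}
    print(devices)
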
Nov 24 20:02:58 compute-0 python3.9[173106]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 20:02:58 compute-0 systemd[1]: libpod-c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d.scope: Deactivated successfully.
Nov 24 20:02:58 compute-0 systemd[1]: libpod-c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d.scope: Consumed 1.120s CPU time.
Nov 24 20:02:58 compute-0 podman[173124]: 2025-11-24 20:02:58.275237084 +0000 UTC m=+0.040721468 container died c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bf10adeee0a3b9dedc049f13459f4b948f8ad2556db080ef5b48d32b404dc7c-merged.mount: Deactivated successfully.
Nov 24 20:02:58 compute-0 podman[173124]: 2025-11-24 20:02:58.344074837 +0000 UTC m=+0.109559261 container remove c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:02:58 compute-0 systemd[1]: libpod-conmon-c859c804b5d3cd584c32bbde0569d12231ea2432ed0458a9499ff82b7168637d.scope: Deactivated successfully.
Nov 24 20:02:58 compute-0 sudo[172682]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:02:58 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:02:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:02:58 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:02:58 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 445afe12-89c5-4b04-9e35-3188d9ee7cab does not exist
Nov 24 20:02:58 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0934445f-8644-472a-b4cf-f213183afa57 does not exist
Nov 24 20:02:58 compute-0 sudo[173145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:02:58 compute-0 sudo[173145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:58 compute-0 sudo[173145]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:58 compute-0 sudo[173104]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:58.576+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:58 compute-0 sudo[173173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:02:58 compute-0 sudo[173173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:02:58 compute-0 sudo[173173]: pam_unix(sudo:session): session closed for user root
Nov 24 20:02:58 compute-0 ceph-mon[75677]: pgmap v578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:02:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:02:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:58.816+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:59 compute-0 sudo[173271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjorjwtzkvenjlmmmuguhbuzlpzybsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014577.4816053-324-234014863444996/AnsiballZ_dnf.py'
Nov 24 20:02:59 compute-0 sudo[173271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:02:59 compute-0 python3.9[173273]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
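
One detail worth noting in the dnf invocation above: several package names are logged with trailing spaces ('libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon '), presumably whitespace carried over from the playbook's list. A purely illustrative sketch of normalizing such a list before handing it to a package manager; the list literal is copied from the log:

    # Package list as logged, trailing-space entries included.
    pkgs = ['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ',
            'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm',
            'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram']

    # Strip stray whitespace and drop duplicates while preserving order.
    cleaned = list(dict.fromkeys(p.strip() for p in pkgs))
    print(cleaned)
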
Nov 24 20:02:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:02:59.529+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:02:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:59 compute-0 ceph-mon[75677]: pgmap v579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:02:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:02:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:02:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:02:59.828+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:02:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:00.528+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:00.835+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:01.486+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 701 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:01 compute-0 ceph-mon[75677]: pgmap v580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 701 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:01.873+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
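
The SLOW_OPS health check at 20:03:01 totals 20 blocked ops, consistent with the per-OSD reports interleaved above (19 on osd.1 against default.rgw.log, 1 on osd.0 against vms), and ages the oldest at 701 sec. The follow-up updates at 20:03:06 (706 sec) and 20:03:16 (711 sec) advance exactly with wall-clock time, so the oldest op is making no progress. A small sketch of recovering its onset time from the two figures on that log line:

    from datetime import datetime, timedelta

    # Values read off the 20:03:01 health check update.
    logged_at = datetime(2025, 11, 24, 20, 3, 1)
    blocked_for = timedelta(seconds=701)

    # The oldest slow op has been stuck since approximately:
    print(logged_at - blocked_for)  # 2025-11-24 19:51:20
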
Nov 24 20:03:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:02.461+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:02.827+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:03.461+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:03 compute-0 ceph-mon[75677]: pgmap v581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:03.823+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:04.472+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:04.796+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:05.491+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:05.777+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:05 compute-0 ceph-mon[75677]: pgmap v582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:06.501+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:06.803+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 706 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:07.518+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:07 compute-0 ceph-mon[75677]: pgmap v583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 706 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:07.850+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:08.567+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:08 compute-0 podman[173316]: 2025-11-24 20:03:08.874950435 +0000 UTC m=+0.095744632 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
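
The health_status event above reports ovn_metadata_agent healthy (health_failing_streak=0) and embeds the container's full config_data. That field is a Python literal rather than JSON (single quotes, bare True), so json.loads would reject it; ast.literal_eval handles it. A sketch on a trimmed excerpt of the field, keeping only the healthcheck and privileged keys:

    import ast

    # config_data as logged is a Python literal; this is a trimmed excerpt.
    config_data = ("{'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/ovn_metadata_agent', "
                   "'test': '/openstack/healthcheck'}, 'privileged': True}")

    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])  # /openstack/healthcheck
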
Nov 24 20:03:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:08.894+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:03:09.352 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:03:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:03:09.353 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:03:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:03:09.354 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:03:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:09.553+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:09 compute-0 ceph-mon[75677]: pgmap v584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:09.920+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:10.556+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:10.961+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:11.603+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:11 compute-0 ceph-mon[75677]: pgmap v585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:11.922+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:12.648+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:12.913+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:13.679+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:13 compute-0 ceph-mon[75677]: pgmap v586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:13.927+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:14.685+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:14.909+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:15.717+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:15 compute-0 ceph-mon[75677]: pgmap v587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:15.909+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 711 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:16.691+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:16 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 711 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:16.920+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:17.658+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:17.882+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:17 compute-0 ceph-mon[75677]: pgmap v588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:18.626+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:18.858+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:19.586+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:19.863+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:19 compute-0 ceph-mon[75677]: pgmap v589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:20.621+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:20.844+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:21.623+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 721 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:21.882+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:21 compute-0 ceph-mon[75677]: pgmap v590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:21 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 721 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:22.606+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:22.868+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:23.626+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:23.824+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:23 compute-0 ceph-mon[75677]: pgmap v591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:03:24
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:03:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:24.641+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:24.825+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:24 compute-0 podman[173509]: 2025-11-24 20:03:24.97095996 +0000 UTC m=+0.197127664 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 20:03:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:25.683+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:25.796+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:25 compute-0 ceph-mon[75677]: pgmap v592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:26.716+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:26.762+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 726 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:27.759+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:27.765+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:28 compute-0 ceph-mon[75677]: pgmap v593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:28 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 726 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:28.734+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:28.813+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:29.694+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:29.854+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:30 compute-0 ceph-mon[75677]: pgmap v594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:30.685+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:30.812+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:31.698+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:31.815+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:32 compute-0 ceph-mon[75677]: pgmap v595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:32.660+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:32.783+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:33.665+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:33.828+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:34 compute-0 ceph-mon[75677]: pgmap v596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:03:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:34.698+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:34.843+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:35.660+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:35.795+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:36 compute-0 ceph-mon[75677]: pgmap v597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:36.707+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:36.840+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:37.716+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:37.791+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:38 compute-0 ceph-mon[75677]: pgmap v598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:38 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 20:03:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 20:03:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:38.728+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:38.826+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:39 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 24 20:03:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:39.751+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:39.781+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:39 compute-0 podman[173548]: 2025-11-24 20:03:39.855064537 +0000 UTC m=+0.081338735 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 20:03:40 compute-0 ceph-mon[75677]: pgmap v599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:03:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:40.745+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:40.791+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 741 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:41.707+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:41.765+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:42 compute-0 ceph-mon[75677]: pgmap v600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 741 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:42.757+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:42.796+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:43.731+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:43.772+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:44 compute-0 ceph-mon[75677]: pgmap v601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:44.755+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:44.816+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:45.724+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:45.772+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:46 compute-0 ceph-mon[75677]: pgmap v602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:46.700+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:46.727+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 746 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:47.698+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:47.723+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:47 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 20:03:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 20:03:48 compute-0 ceph-mon[75677]: pgmap v603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 746 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:48.650+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:48.686+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:49.615+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:49.732+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:50 compute-0 ceph-mon[75677]: pgmap v604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:50.625+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:50.749+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:51.582+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:51.703+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:52 compute-0 ceph-mon[75677]: pgmap v605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:52.574+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:52.746+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:53 compute-0 sshd-session[173575]: Invalid user username from 27.79.44.141 port 57214
Nov 24 20:03:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:53.526+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:53.745+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:54 compute-0 sshd-session[173575]: Connection closed by invalid user username 27.79.44.141 port 57214 [preauth]
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:03:54 compute-0 ceph-mon[75677]: pgmap v606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:54.554+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:54.782+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:55.562+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:55 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 20:03:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:55.768+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:55 compute-0 podman[173579]: 2025-11-24 20:03:55.905978138 +0000 UTC m=+0.122325346 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:03:56 compute-0 ceph-mon[75677]: pgmap v607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:56.537+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 751 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:03:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:56.739+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:57 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 751 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:03:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:57.523+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:57.713+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:58 compute-0 ceph-mon[75677]: pgmap v608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:58.558+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:58.686+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:58 compute-0 sudo[173605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:03:58 compute-0 sudo[173605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:03:58 compute-0 sudo[173605]: pam_unix(sudo:session): session closed for user root
Nov 24 20:03:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:03:58 compute-0 sudo[173630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:03:58 compute-0 sudo[173630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:03:58 compute-0 sudo[173630]: pam_unix(sudo:session): session closed for user root
Nov 24 20:03:58 compute-0 sudo[173655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:03:58 compute-0 sudo[173655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:03:58 compute-0 sudo[173655]: pam_unix(sudo:session): session closed for user root
Nov 24 20:03:59 compute-0 sudo[173680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:03:59 compute-0 sudo[173680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:03:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:03:59.562+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:03:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:59 compute-0 sudo[173680]: pam_unix(sudo:session): session closed for user root
Nov 24 20:03:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:03:59.674+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:03:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:03:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:03:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:03:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:03:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:03:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:03:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:03:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:03:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev adb67b88-19ad-466c-a98a-70e733dd7289 does not exist
Nov 24 20:03:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8f6a3824-4f43-437c-9b07-d275aa914701 does not exist
Nov 24 20:03:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e56031dd-1189-4700-8867-7e658b477a8c does not exist
Nov 24 20:03:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:03:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:03:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:03:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:03:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:03:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:03:59 compute-0 sudo[173742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:03:59 compute-0 sudo[173742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:03:59 compute-0 sudo[173742]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:00 compute-0 sshd-session[173577]: Invalid user ftpuser from 27.79.44.141 port 57226
Nov 24 20:04:00 compute-0 sudo[173809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:04:00 compute-0 sudo[173809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:00 compute-0 sudo[173809]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:00 compute-0 sudo[173848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:04:00 compute-0 sudo[173848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:00 compute-0 sudo[173848]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:00 compute-0 sudo[173905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:04:00 compute-0 sudo[173905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:00.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.645886541 +0000 UTC m=+0.055545785 container create 2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:04:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:00.661+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:00 compute-0 systemd[1]: Started libpod-conmon-2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6.scope.
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.622353277 +0000 UTC m=+0.032012531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:04:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.749396914 +0000 UTC m=+0.159056168 container init 2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.756159636 +0000 UTC m=+0.165818880 container start 2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.764742776 +0000 UTC m=+0.174402030 container attach 2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:04:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:00 compute-0 busy_kare[174207]: 167 167
Nov 24 20:04:00 compute-0 systemd[1]: libpod-2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6.scope: Deactivated successfully.
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.778296154 +0000 UTC m=+0.187955358 container died 2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:04:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a415d8fcc6b194d9caca2cfe68dd33828a610ca2cc846831ce83c389a8c49733-merged.mount: Deactivated successfully.
Nov 24 20:04:00 compute-0 podman[174139]: 2025-11-24 20:04:00.830192885 +0000 UTC m=+0.239852129 container remove 2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:04:00 compute-0 systemd[1]: libpod-conmon-2d1765e016f837f5ada98e09d763875ebc200dd2562f35e8b7ca278a4807bbb6.scope: Deactivated successfully.
Nov 24 20:04:00 compute-0 ceph-mon[75677]: pgmap v609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:04:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:04:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:04:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:04:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:04:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:04:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:01 compute-0 podman[174338]: 2025-11-24 20:04:01.011437419 +0000 UTC m=+0.043967867 container create 10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:04:01 compute-0 systemd[1]: Started libpod-conmon-10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911.scope.
Nov 24 20:04:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be5f0c528c0a9a15517084114246d2c94a84f5c085bcee8407baaa027b50e02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be5f0c528c0a9a15517084114246d2c94a84f5c085bcee8407baaa027b50e02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be5f0c528c0a9a15517084114246d2c94a84f5c085bcee8407baaa027b50e02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be5f0c528c0a9a15517084114246d2c94a84f5c085bcee8407baaa027b50e02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be5f0c528c0a9a15517084114246d2c94a84f5c085bcee8407baaa027b50e02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:01 compute-0 podman[174338]: 2025-11-24 20:04:00.992841193 +0000 UTC m=+0.025371731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:04:01 compute-0 podman[174338]: 2025-11-24 20:04:01.098400548 +0000 UTC m=+0.130931086 container init 10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:04:01 compute-0 podman[174338]: 2025-11-24 20:04:01.107509822 +0000 UTC m=+0.140040280 container start 10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 20:04:01 compute-0 podman[174338]: 2025-11-24 20:04:01.111647068 +0000 UTC m=+0.144177556 container attach 10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:04:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:01.527+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 761 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:01.670+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:01 compute-0 ceph-mon[75677]: pgmap v610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 761 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
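[editor's note] The SLOW_OPS health check above (20 slow ops, oldest blocked for 761 sec, on osd.0 and osd.1) repeats roughly once per second through this stretch of the log. A minimal sketch of summarizing it from the monitor side, assuming the `ceph` CLI can reach this cluster and that `ceph health detail -f json` returns the usual {"status": ..., "checks": {...}} object (an assumption about recent releases, not something shown in this log):

    import json
    import subprocess

    # Summarize the active health checks (e.g. SLOW_OPS) reported by the mon.
    # Assumes `ceph health detail -f json` yields {"status": ..., "checks": {...}};
    # field names below are the usual layout, not taken from this log.
    out = subprocess.run(
        ["ceph", "health", "detail", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print("overall:", health.get("status"))
    for name, check in health.get("checks", {}).items():
        # e.g. "SLOW_OPS -> 20 slow ops, oldest one blocked for 761 sec, ..."
        print(name, "->", check.get("summary", {}).get("message"))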
Nov 24 20:04:02 compute-0 vibrant_feistel[174397]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:04:02 compute-0 vibrant_feistel[174397]: --> relative data size: 1.0
Nov 24 20:04:02 compute-0 vibrant_feistel[174397]: --> All data devices are unavailable
Nov 24 20:04:02 compute-0 systemd[1]: libpod-10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911.scope: Deactivated successfully.
Nov 24 20:04:02 compute-0 podman[174338]: 2025-11-24 20:04:02.31154344 +0000 UTC m=+1.344073888 container died 10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feistel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:04:02 compute-0 systemd[1]: libpod-10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911.scope: Consumed 1.086s CPU time.
Nov 24 20:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be5f0c528c0a9a15517084114246d2c94a84f5c085bcee8407baaa027b50e02-merged.mount: Deactivated successfully.
Nov 24 20:04:02 compute-0 podman[174338]: 2025-11-24 20:04:02.379202425 +0000 UTC m=+1.411732883 container remove 10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feistel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 20:04:02 compute-0 systemd[1]: libpod-conmon-10e17ffcd61c60fa00d66d532432d1a1400e66f4e4a29e86a00d66fa60c14911.scope: Deactivated successfully.
Nov 24 20:04:02 compute-0 sudo[173905]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:02 compute-0 sudo[175156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:04:02 compute-0 sudo[175156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:02 compute-0 sudo[175156]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:02.544+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:02 compute-0 sudo[175221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:04:02 compute-0 sudo[175221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:02 compute-0 sudo[175221]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:02.642+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
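[editor's note] The per-OSD get_health_metrics lines above already name the oldest blocked op on osd.0 and osd.1. A sketch of pulling the full in-flight op list for those daemons, assuming `ceph tell osd.<id> dump_ops_in_flight` is available (true on recent releases such as the Reef image in this log; older setups would use the daemon's admin socket instead):

    import json
    import subprocess

    # Dump in-flight ops on the OSDs named by the SLOW_OPS warnings.
    # "num_ops"/"ops"/"description" are the usual admin-socket field
    # names, assumed rather than taken from this log.
    for osd in ("osd.0", "osd.1"):
        out = subprocess.run(
            ["ceph", "tell", osd, "dump_ops_in_flight"],
            capture_output=True, text=True, check=True,
        ).stdout
        ops = json.loads(out)
        print(osd, "num_ops:", ops.get("num_ops"))
        for op in ops.get("ops", []):
            print("  ", op.get("description"))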
Nov 24 20:04:02 compute-0 sudo[175286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:04:02 compute-0 sudo[175286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:02 compute-0 sudo[175286]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:02 compute-0 sudo[175346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:04:02 compute-0 sudo[175346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.095552844 +0000 UTC m=+0.049201831 container create eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:04:03 compute-0 systemd[1]: Started libpod-conmon-eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3.scope.
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.074374882 +0000 UTC m=+0.028023909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:04:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.193888265 +0000 UTC m=+0.147537322 container init eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.20031881 +0000 UTC m=+0.153967837 container start eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.204366343 +0000 UTC m=+0.158015350 container attach eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:04:03 compute-0 clever_morse[175649]: 167 167
Nov 24 20:04:03 compute-0 systemd[1]: libpod-eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3.scope: Deactivated successfully.
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.209630929 +0000 UTC m=+0.163279926 container died eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-480ddb4da3829f53554e15c0f7f4c3d8a20af23b372fd73a7776bfd847d6501a-merged.mount: Deactivated successfully.
Nov 24 20:04:03 compute-0 podman[175586]: 2025-11-24 20:04:03.248608868 +0000 UTC m=+0.202257845 container remove eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:04:03 compute-0 systemd[1]: libpod-conmon-eac989c730f64302da7ef6dccd9f71b335d8a9a17999366a5d36b00e5c706ab3.scope: Deactivated successfully.
Nov 24 20:04:03 compute-0 podman[175777]: 2025-11-24 20:04:03.495538756 +0000 UTC m=+0.075411934 container create f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:04:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:03.542+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:03 compute-0 systemd[1]: Started libpod-conmon-f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b.scope.
Nov 24 20:04:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:03 compute-0 podman[175777]: 2025-11-24 20:04:03.463420953 +0000 UTC m=+0.043294171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:04:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f41c8abc0ce8ce39fef8dc0d80843b73d99d4fdd19cdaaa147c8e4cd096c97a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f41c8abc0ce8ce39fef8dc0d80843b73d99d4fdd19cdaaa147c8e4cd096c97a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f41c8abc0ce8ce39fef8dc0d80843b73d99d4fdd19cdaaa147c8e4cd096c97a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f41c8abc0ce8ce39fef8dc0d80843b73d99d4fdd19cdaaa147c8e4cd096c97a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:03 compute-0 podman[175777]: 2025-11-24 20:04:03.597265673 +0000 UTC m=+0.177138861 container init f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:04:03 compute-0 podman[175777]: 2025-11-24 20:04:03.610822641 +0000 UTC m=+0.190695799 container start f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:04:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:03.610+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:03 compute-0 podman[175777]: 2025-11-24 20:04:03.616694181 +0000 UTC m=+0.196567339 container attach f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:04:03 compute-0 ceph-mon[75677]: pgmap v611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]: {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:     "0": [
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:         {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "devices": [
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "/dev/loop3"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             ],
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_name": "ceph_lv0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_size": "21470642176",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "name": "ceph_lv0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "tags": {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cluster_name": "ceph",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.crush_device_class": "",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.encrypted": "0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osd_id": "0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.type": "block",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.vdo": "0"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             },
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "type": "block",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "vg_name": "ceph_vg0"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:         }
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:     ],
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:     "1": [
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:         {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "devices": [
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "/dev/loop4"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             ],
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_name": "ceph_lv1",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_size": "21470642176",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "name": "ceph_lv1",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "tags": {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cluster_name": "ceph",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.crush_device_class": "",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.encrypted": "0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osd_id": "1",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.type": "block",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.vdo": "0"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             },
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "type": "block",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "vg_name": "ceph_vg1"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:         }
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:     ],
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:     "2": [
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:         {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "devices": [
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "/dev/loop5"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             ],
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_name": "ceph_lv2",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_size": "21470642176",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "name": "ceph_lv2",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "tags": {
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.cluster_name": "ceph",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.crush_device_class": "",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.encrypted": "0",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osd_id": "2",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.type": "block",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:                 "ceph.vdo": "0"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             },
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "type": "block",
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:             "vg_name": "ceph_vg2"
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:         }
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]:     ]
Nov 24 20:04:04 compute-0 laughing_meninsky[175854]: }
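[editor's note] For reference, the `ceph-volume lvm list --format json` payload printed above maps OSD id strings ("0", "1", "2") to lists of logical-volume records, each carrying the backing devices and the ceph.* LV tags. A minimal parsing sketch; the field names are taken from the output itself, while 'lvm_list.json' is a hypothetical file holding that JSON:

    import json

    # Map OSD ids to their backing LVs/devices from the payload above.
    # 'lvm_list.json' is assumed to contain the JSON exactly as printed
    # by the laughing_meninsky container.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")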
Nov 24 20:04:04 compute-0 systemd[1]: libpod-f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b.scope: Deactivated successfully.
Nov 24 20:04:04 compute-0 podman[176265]: 2025-11-24 20:04:04.475531183 +0000 UTC m=+0.041824323 container died f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f41c8abc0ce8ce39fef8dc0d80843b73d99d4fdd19cdaaa147c8e4cd096c97a-merged.mount: Deactivated successfully.
Nov 24 20:04:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:04.536+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:04 compute-0 podman[176265]: 2025-11-24 20:04:04.54994356 +0000 UTC m=+0.116236650 container remove f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:04:04 compute-0 systemd[1]: libpod-conmon-f646b9875fdbc2fa083729415c6fd9f5491dbb69f01260397ff022a61acc3d0b.scope: Deactivated successfully.
Nov 24 20:04:04 compute-0 sudo[175346]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:04.605+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:04 compute-0 sudo[176359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:04:04 compute-0 sudo[176359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:04 compute-0 sudo[176359]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:04 compute-0 sudo[176419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:04:04 compute-0 sudo[176419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:04 compute-0 sudo[176419]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:04 compute-0 sudo[176479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:04:04 compute-0 sudo[176479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:04 compute-0 sudo[176479]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:04 compute-0 sudo[176541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:04:04 compute-0 sudo[176541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
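[editor's note] The sudo invocations above (`... ceph-volume --fsid ... -- lvm list --format json`, then `-- raw list --format json`) are the mgr's cephadm module inventorying this host through a versioned copy of the cephadm script. A sketch of issuing the same raw-device inventory directly, assuming a `cephadm` binary on PATH and root privileges; the fsid is copied from the log lines above, and `--image`/`--timeout` from the logged command are optional here:

    import json
    import subprocess

    # Run the same inventory call the orchestrator issues above, directly.
    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))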
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.362963818 +0000 UTC m=+0.044305257 container create d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:04:05 compute-0 systemd[1]: Started libpod-conmon-d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35.scope.
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.34119444 +0000 UTC m=+0.022535869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:04:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:04:05 compute-0 sshd-session[173577]: Connection closed by invalid user ftpuser 27.79.44.141 port 57226 [preauth]
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.467853776 +0000 UTC m=+0.149195205 container init d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.479091604 +0000 UTC m=+0.160433033 container start d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:04:05 compute-0 trusting_carson[176829]: 167 167
Nov 24 20:04:05 compute-0 systemd[1]: libpod-d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35.scope: Deactivated successfully.
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.48632539 +0000 UTC m=+0.167666829 container attach d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.486653798 +0000 UTC m=+0.167995207 container died d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dc93a4d72f86d685263aafe4f0f52e67f2cbad874fd68b2bab497494fa0f72a-merged.mount: Deactivated successfully.
Nov 24 20:04:05 compute-0 podman[176772]: 2025-11-24 20:04:05.535477189 +0000 UTC m=+0.216818608 container remove d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:04:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:05.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:05 compute-0 systemd[1]: libpod-conmon-d9f03f59dfe2207fb347445855827421c01b0c905d1c72be4f809e30777a4b35.scope: Deactivated successfully.
Nov 24 20:04:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:05.619+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:05 compute-0 podman[176996]: 2025-11-24 20:04:05.734386567 +0000 UTC m=+0.049805548 container create 21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:04:05 compute-0 systemd[1]: Started libpod-conmon-21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83.scope.
Nov 24 20:04:05 compute-0 podman[176996]: 2025-11-24 20:04:05.707603771 +0000 UTC m=+0.023022792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:04:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135b6600bf1b8120c1c706cf9351d2b9d27b201a5c409ce13ffc9292e2b1f5c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135b6600bf1b8120c1c706cf9351d2b9d27b201a5c409ce13ffc9292e2b1f5c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135b6600bf1b8120c1c706cf9351d2b9d27b201a5c409ce13ffc9292e2b1f5c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135b6600bf1b8120c1c706cf9351d2b9d27b201a5c409ce13ffc9292e2b1f5c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:04:05 compute-0 podman[176996]: 2025-11-24 20:04:05.845509796 +0000 UTC m=+0.160928797 container init 21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:04:05 compute-0 podman[176996]: 2025-11-24 20:04:05.856726683 +0000 UTC m=+0.172145664 container start 21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:04:05 compute-0 podman[176996]: 2025-11-24 20:04:05.862785858 +0000 UTC m=+0.178204909 container attach 21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:04:05 compute-0 ceph-mon[75677]: pgmap v612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:06.534+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:06.652+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]: {
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "osd_id": 2,
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "type": "bluestore"
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:     },
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "osd_id": 1,
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "type": "bluestore"
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:     },
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "osd_id": 0,
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:         "type": "bluestore"
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]:     }
Nov 24 20:04:06 compute-0 intelligent_almeida[177079]: }
Nov 24 20:04:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 766 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:06 compute-0 systemd[1]: libpod-21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83.scope: Deactivated successfully.
Nov 24 20:04:06 compute-0 podman[176996]: 2025-11-24 20:04:06.948322511 +0000 UTC m=+1.263741492 container died 21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:04:06 compute-0 systemd[1]: libpod-21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83.scope: Consumed 1.093s CPU time.
Nov 24 20:04:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-135b6600bf1b8120c1c706cf9351d2b9d27b201a5c409ce13ffc9292e2b1f5c4-merged.mount: Deactivated successfully.
Nov 24 20:04:07 compute-0 podman[176996]: 2025-11-24 20:04:07.014856205 +0000 UTC m=+1.330275186 container remove 21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:04:07 compute-0 systemd[1]: libpod-conmon-21ded5e3f2c0fa073d632b227879d0cfde1d3bc68449cbebce357430b221ad83.scope: Deactivated successfully.
Nov 24 20:04:07 compute-0 sudo[176541]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:04:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:04:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:04:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:04:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 602c58f8-35d2-4c30-b1eb-b00e97de0b5d does not exist
Nov 24 20:04:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 080a8689-9e49-46c7-baba-3d0245757edb does not exist
Nov 24 20:04:07 compute-0 sudo[177774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:04:07 compute-0 sudo[177774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:07 compute-0 sudo[177774]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:07 compute-0 sudo[177841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:04:07 compute-0 sudo[177841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:04:07 compute-0 sudo[177841]: pam_unix(sudo:session): session closed for user root
Nov 24 20:04:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:07.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:07.610+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:07 compute-0 ceph-mon[75677]: pgmap v613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 766 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:04:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:04:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:08.498+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:08.655+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:04:09.354 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:04:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:04:09.354 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:04:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:04:09.354 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:04:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:09.509+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:09.627+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:09 compute-0 ceph-mon[75677]: pgmap v614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:10.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:10.645+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:10 compute-0 podman[179534]: 2025-11-24 20:04:10.845875898 +0000 UTC m=+0.077042241 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:04:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:11.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:11.656+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:11 compute-0 ceph-mon[75677]: pgmap v615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:12.494+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:12.636+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:13.489+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:13.618+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:14 compute-0 ceph-mon[75677]: pgmap v616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:14.454+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:14.619+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:15.491+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:15.669+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:16 compute-0 ceph-mon[75677]: pgmap v617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:16.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 771 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:16.689+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 771 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:17.474+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:17.740+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:18 compute-0 ceph-mon[75677]: pgmap v618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:18.446+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:18.725+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:19.463+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:19.725+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:20.427+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:20.689+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:21 compute-0 ceph-mon[75677]: pgmap v619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:21.402+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 776 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:21.708+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:22 compute-0 ceph-mon[75677]: pgmap v620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 776 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:22.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:22.719+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:23.491+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:23.760+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:24 compute-0 ceph-mon[75677]: pgmap v621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:04:24
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images']
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:04:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:24.541+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:24.720+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:25.563+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:25.675+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:26 compute-0 ceph-mon[75677]: pgmap v622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:26.597+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 781 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:26.688+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:26 compute-0 podman[186802]: 2025-11-24 20:04:26.900146924 +0000 UTC m=+0.120371459 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 20:04:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 781 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:27.583+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:27.729+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:28 compute-0 ceph-mon[75677]: pgmap v623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:28.550+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:28.766+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:29.522+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:29.757+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:30 compute-0 ceph-mon[75677]: pgmap v624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:30.529+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:30.759+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:31.520+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 791 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:31.777+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:32 compute-0 ceph-mon[75677]: pgmap v625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 791 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:32.568+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:32.812+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:33.613+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:33.842+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:04:34 compute-0 ceph-mon[75677]: pgmap v626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:34.590+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:34.817+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:35.552+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:35.792+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:36 compute-0 ceph-mon[75677]: pgmap v627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:36.535+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:36.840+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:37.512+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:37.793+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:38 compute-0 ceph-mon[75677]: pgmap v628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:38.495+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:38.777+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:39.483+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:39 compute-0 sshd-session[188701]: Connection closed by authenticating user root 27.79.44.141 port 43784 [preauth]
Nov 24 20:04:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:39.747+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:04:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:40.474+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:40 compute-0 ceph-mon[75677]: pgmap v629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:40.786+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:41.492+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:41.830+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:41 compute-0 podman[191334]: 2025-11-24 20:04:41.850611713 +0000 UTC m=+0.068982673 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:04:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:42.499+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:42 compute-0 ceph-mon[75677]: pgmap v630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:42.802+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:43.501+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:43.809+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:44.521+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:44 compute-0 ceph-mon[75677]: pgmap v631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:44.840+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:45.507+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:45.865+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:46.547+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:46 compute-0 ceph-mon[75677]: pgmap v632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 801 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:46.900+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:47.553+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 801 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:47.929+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:48.572+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:48 compute-0 ceph-mon[75677]: pgmap v633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:48 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 20:04:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 20:04:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
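
In these pgmap summaries, the two PGs in active+clean+laggy line up with the two OSDs holding slow ops: "laggy" marks PGs whose OSDs have been slow to acknowledge recent operations even though the data is fully replicated. A small sketch that splits a pgmap line like the one above into per-state counts (the regex is keyed to the exact journald format shown here, which is an assumption if reused elsewhere):

    import re

    line = ("pgmap v634: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: (.*?);", line)
    version, total, states = m.group(1), int(m.group(2)), m.group(3)
    counts = {s: int(n) for n, s in re.findall(r"(\d+) ([a-z+_]+)", states)}
    assert sum(counts.values()) == total
    print(version, counts)   # 634 {'active+clean+laggy': 2, 'active+clean': 303}
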
Nov 24 20:04:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:48.956+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:49.541+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:49 compute-0 groupadd[191366]: group added to /etc/group: name=dnsmasq, GID=991
Nov 24 20:04:49 compute-0 groupadd[191366]: group added to /etc/gshadow: name=dnsmasq
Nov 24 20:04:49 compute-0 groupadd[191366]: new group: name=dnsmasq, GID=991
Nov 24 20:04:49 compute-0 useradd[191373]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
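
These groupadd/useradd lines are package scriptlets creating the dnsmasq system account (UID/GID 991, nologin shell); identical sequences for clevis and ceph appear shortly below. A quick verification sketch using the standard library:

    # Confirm the system account created by the scriptlet logged above.
    import grp
    import pwd

    u = pwd.getpwnam("dnsmasq")
    g = grp.getgrnam("dnsmasq")
    print(u.pw_uid, u.pw_gid, u.pw_dir, u.pw_shell)  # 991 991 /var/lib/dnsmasq /usr/sbin/nologin
    print(g.gr_gid)                                  # 991
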
Nov 24 20:04:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:49.996+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:50 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 20:04:50 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 20:04:50 compute-0 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Nov 24 20:04:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:50.519+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:50 compute-0 ceph-mon[75677]: pgmap v634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:50.961+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:50 compute-0 sshd-session[179379]: Invalid user admin from 27.79.44.141 port 48594
Nov 24 20:04:51 compute-0 groupadd[191386]: group added to /etc/group: name=clevis, GID=990
Nov 24 20:04:51 compute-0 groupadd[191386]: group added to /etc/gshadow: name=clevis
Nov 24 20:04:51 compute-0 groupadd[191386]: new group: name=clevis, GID=990
Nov 24 20:04:51 compute-0 useradd[191393]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 24 20:04:51 compute-0 usermod[191403]: add 'clevis' to group 'tss'
Nov 24 20:04:51 compute-0 usermod[191403]: add 'clevis' to shadow group 'tss'
Nov 24 20:04:51 compute-0 sshd-session[179379]: Connection closed by invalid user admin 27.79.44.141 port 48594 [preauth]
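
The two sshd-session lines record an unauthenticated login probe for user "admin" from 27.79.44.141 that disconnected before authentication completed; background noise like this is routine on any Internet-reachable sshd. A sketch for tallying such probes per source address (the log path is an assumption; feed it saved journalctl output if preferred):

    # Count "Invalid user" probes per source IP in a saved syslog file.
    import re
    from collections import Counter

    hits = Counter()
    with open("/var/log/messages") as fh:
        for entry in fh:
            m = re.search(r"Invalid user \S+ from (\S+) port \d+", entry)
            if m:
                hits[m.group(1)] += 1

    for ip, n in hits.most_common(10):
        print(n, ip)
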
Nov 24 20:04:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:51.515+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
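
The periodic _set_new_cache_sizes line is the monitor's cache autotuner reapportioning its memory target; the values are bytes. A quick conversion sketch for readability:

    # The _set_new_cache_sizes values above are bytes; convert to MiB.
    for name, b in {"cache_size": 1020054731, "inc_alloc": 348127232,
                    "full_alloc": 348127232, "kv_alloc": 322961408}.items():
        print(f"{name}: {b / 2**20:.1f} MiB")
    # cache_size: 972.8 MiB, inc_alloc/full_alloc: 332.0 MiB, kv_alloc: 308.0 MiB
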
Nov 24 20:04:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:51 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:52.008+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:52.529+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:52 compute-0 ceph-mon[75677]: pgmap v635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:53.045+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:53.559+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:53 compute-0 polkitd[44045]: Reloading rules
Nov 24 20:04:53 compute-0 polkitd[44045]: Collecting garbage unconditionally...
Nov 24 20:04:53 compute-0 polkitd[44045]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 20:04:53 compute-0 polkitd[44045]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 20:04:53 compute-0 polkitd[44045]: Finished loading, compiling and executing 3 rules
Nov 24 20:04:53 compute-0 polkitd[44045]: Reloading rules
Nov 24 20:04:53 compute-0 polkitd[44045]: Collecting garbage unconditionally...
Nov 24 20:04:53 compute-0 polkitd[44045]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 20:04:53 compute-0 polkitd[44045]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 20:04:53 compute-0 polkitd[44045]: Finished loading, compiling and executing 3 rules
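
polkitd reloads twice back-to-back because two separate filesystem change events fired (matching the paired dbus-broker-launch reload notices earlier); both passes find the same 3 rule files. A sketch listing the files behind that count:

    # Enumerate the polkit rules files that produce the "3 rules" count above.
    from pathlib import Path

    for d in ("/etc/polkit-1/rules.d", "/usr/share/polkit-1/rules.d"):
        for f in sorted(Path(d).glob("*.rules")):
            print(f)
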
Nov 24 20:04:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:54.007+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:04:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:54.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:54 compute-0 ceph-mon[75677]: pgmap v636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:55.056+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:55 compute-0 groupadd[191590]: group added to /etc/group: name=ceph, GID=167
Nov 24 20:04:55 compute-0 groupadd[191590]: group added to /etc/gshadow: name=ceph
Nov 24 20:04:55 compute-0 groupadd[191590]: new group: name=ceph, GID=167
Nov 24 20:04:55 compute-0 useradd[191596]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 24 20:04:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:55.544+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:56.080+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:56.515+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:04:56 compute-0 ceph-mon[75677]: pgmap v637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:04:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:57.045+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:57 compute-0 podman[191605]: 2025-11-24 20:04:57.227017631 +0000 UTC m=+0.123459040 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
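
The podman line is a container health_status event: the configured healthcheck for ovn_controller ran, returned healthy, and the failing streak is 0. The same check can be triggered by hand, as in this sketch (assumes podman and the ovn_controller container exist on this host):

    # Run the container healthcheck podman logged above; exit code 0 means healthy.
    import subprocess

    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")
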
Nov 24 20:04:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:57.530+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:58.055+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:58.533+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:58 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 20:04:58 compute-0 sshd[1004]: Received signal 15; terminating.
Nov 24 20:04:58 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 20:04:58 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 20:04:58 compute-0 systemd[1]: sshd.service: Consumed 5.124s CPU time, read 32.0K from disk, written 84.0K to disk.
Nov 24 20:04:58 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 20:04:58 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 24 20:04:58 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 20:04:58 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 20:04:58 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 20:04:58 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 24 20:04:58 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 24 20:04:58 compute-0 ceph-mon[75677]: pgmap v638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:58 compute-0 sshd[192247]: Server listening on 0.0.0.0 port 22.
Nov 24 20:04:58 compute-0 sshd[192247]: Server listening on :: port 22.
Nov 24 20:04:58 compute-0 systemd[1]: Started OpenSSH server daemon.
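
sshd was stopped and immediately restarted here, a pattern consistent with an openssh package update; the host key generation units were skipped because their cloud-init condition was not met, and the new daemon (PID 192247) is again listening on IPv4 and IPv6. A sketch to confirm the service answers after such a restart:

    # Read the SSH banner to confirm sshd is accepting connections again.
    import socket

    for host in ("127.0.0.1", "::1"):
        with socket.create_connection((host, 22), timeout=3) as s:
            print(host, s.recv(64).decode(errors="replace").strip())
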
Nov 24 20:04:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:04:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:04:59.085+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:04:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:04:59.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:04:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:04:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:04:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:00.065+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:00.531+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:00 compute-0 ceph-mon[75677]: pgmap v639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:01.044+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:01.546+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 20:05:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 20:05:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:01 compute-0 systemd[1]: Reloading.
Nov 24 20:05:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:01 compute-0 ceph-mon[75677]: pgmap v640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:01 compute-0 systemd-sysv-generator[192506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:01 compute-0 systemd-rc-local-generator[192501]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:02.026+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 20:05:02 compute-0 auditd[703]: Audit daemon rotating log files
Nov 24 20:05:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:02.507+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:02.992+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:03.475+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:03.985+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:04 compute-0 ceph-mon[75677]: pgmap v641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:04.500+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:04 compute-0 sudo[173271]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:05.025+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:05.501+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:05 compute-0 sudo[196013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhwbrzafqoecwfdcvxgsqqorgageixiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014705.099815-336-131552048605197/AnsiballZ_systemd.py'
Nov 24 20:05:05 compute-0 sudo[196013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:06.042+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:06 compute-0 python3.9[196040]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
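
The AnsiballZ_systemd.py payloads run through sudo above are ansible.builtin.systemd tasks stopping, disabling, and masking libvirtd and its sockets, consistent with deployment tooling replacing the host libvirtd with its own managed instance. A rough plain-systemctl equivalent of the call logged here, as a sketch:

    # Non-ansible equivalent of the systemd module call above
    # (state=stopped, enabled=False, masked=True for the libvirtd unit).
    import subprocess

    unit = "libvirtd"
    for action in ("stop", "disable", "mask"):
        subprocess.run(["systemctl", action, unit], check=True)
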
Nov 24 20:05:06 compute-0 systemd[1]: Reloading.
Nov 24 20:05:06 compute-0 ceph-mon[75677]: pgmap v642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:06 compute-0 systemd-sysv-generator[196490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:06 compute-0 systemd-rc-local-generator[196485]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:06.520+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:06 compute-0 sudo[196013]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:07 compute-0 sudo[197285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzhdzrxcqttxpfuwqwlyprcjzmzvyphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014706.6949575-336-132465494553093/AnsiballZ_systemd.py'
Nov 24 20:05:07 compute-0 sudo[197285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:07.065+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:07 compute-0 sudo[197571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:07 compute-0 sudo[197571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:07 compute-0 python3.9[197308]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 20:05:07 compute-0 sudo[197571]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:07 compute-0 systemd[1]: Reloading.
Nov 24 20:05:07 compute-0 systemd-rc-local-generator[197823]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:07 compute-0 systemd-sysv-generator[197826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:07.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:07 compute-0 sudo[197673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:05:07 compute-0 sudo[197673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:07 compute-0 sudo[197673]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:07 compute-0 sudo[197285]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:07 compute-0 sudo[198053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:07 compute-0 sudo[198053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:07 compute-0 sudo[198053]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:07 compute-0 sudo[198126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:05:07 compute-0 sudo[198126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:08.021+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:08 compute-0 sudo[198683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybaaajiplnytjshfxdiouoghqdcvrowi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014707.905992-336-240536989163552/AnsiballZ_systemd.py'
Nov 24 20:05:08 compute-0 sudo[198683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:08 compute-0 ceph-mon[75677]: pgmap v643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:08 compute-0 sudo[198126]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:05:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:05:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:05:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:05:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:05:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:05:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d9693827-ffee-4278-9438-42bffe31734f does not exist
Nov 24 20:05:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3609fc51-5a92-4247-8387-6bd7aa1a9ecf does not exist
Nov 24 20:05:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 378058c8-2759-42a6-a223-8fdad325e77b does not exist
Nov 24 20:05:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:05:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:05:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:05:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:05:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:05:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
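
This audit-channel burst is the mgr's cephadm module preparing to (re)provision OSDs: it generates a minimal client conf, fetches the client.admin and client.bootstrap-osd keys, and checks the OSD tree for destroyed entries. The same mon commands can be issued manually, as in this sketch (assumes the ceph CLI with an admin keyring):

    # Issue two of the mon commands dispatched by the mgr above.
    import subprocess

    print(subprocess.check_output(["ceph", "config", "generate-minimal-conf"]).decode())
    print(subprocess.check_output(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"]).decode())
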
Nov 24 20:05:08 compute-0 sudo[198818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:08 compute-0 sudo[198818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:08 compute-0 sudo[198818]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:08 compute-0 python3.9[198698]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 20:05:08 compute-0 sudo[198899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:05:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:08.543+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:08 compute-0 sudo[198899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:08 compute-0 sudo[198899]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:08 compute-0 systemd[1]: Reloading.
Nov 24 20:05:08 compute-0 systemd-sysv-generator[199155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:08 compute-0 systemd-rc-local-generator[199151]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:08 compute-0 sudo[198990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:08 compute-0 sudo[198990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:08 compute-0 sudo[198990]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:08 compute-0 sudo[198683]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:09.001+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:09 compute-0 sudo[199340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:05:09 compute-0 sudo[199340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:05:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:05:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:05:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:05:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:05:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:05:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:05:09.355 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:05:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:05:09.355 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:05:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:05:09.355 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.356323949 +0000 UTC m=+0.037795837 container create 498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:05:09 compute-0 systemd[1]: Started libpod-conmon-498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8.scope.
Nov 24 20:05:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.338996382 +0000 UTC m=+0.020468250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.440221526 +0000 UTC m=+0.121693394 container init 498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.447139545 +0000 UTC m=+0.128611393 container start 498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.450994694 +0000 UTC m=+0.132466562 container attach 498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:05:09 compute-0 zealous_edison[199882]: 167 167
Nov 24 20:05:09 compute-0 systemd[1]: libpod-498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8.scope: Deactivated successfully.
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.452840972 +0000 UTC m=+0.134312830 container died 498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-84e5b5d458ce79478b8733de524a5eb38d02bf844d97b670f1cea18a129d813f-merged.mount: Deactivated successfully.
Nov 24 20:05:09 compute-0 podman[199765]: 2025-11-24 20:05:09.494848627 +0000 UTC m=+0.176320475 container remove 498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:05:09 compute-0 systemd[1]: libpod-conmon-498b2552464593753aa43a4e0d1eaca30837bb30849fd3dcbf323c69a63b22a8.scope: Deactivated successfully.
Nov 24 20:05:09 compute-0 sudo[200042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajmftgpurjvkqwuzxfgdqkkuqedrtmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014709.1598787-336-166345877299723/AnsiballZ_systemd.py'
Nov 24 20:05:09 compute-0 sudo[200042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:09.549+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:09 compute-0 podman[200201]: 2025-11-24 20:05:09.685431069 +0000 UTC m=+0.050429943 container create 12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:05:09 compute-0 systemd[1]: Started libpod-conmon-12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495.scope.
Nov 24 20:05:09 compute-0 podman[200201]: 2025-11-24 20:05:09.660037413 +0000 UTC m=+0.025036377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:05:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a0b7efaae91db372c508c56a915d99b07dc784e676489a8480437813c64e23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a0b7efaae91db372c508c56a915d99b07dc784e676489a8480437813c64e23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a0b7efaae91db372c508c56a915d99b07dc784e676489a8480437813c64e23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a0b7efaae91db372c508c56a915d99b07dc784e676489a8480437813c64e23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a0b7efaae91db372c508c56a915d99b07dc784e676489a8480437813c64e23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:09 compute-0 podman[200201]: 2025-11-24 20:05:09.813762623 +0000 UTC m=+0.178761487 container init 12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:05:09 compute-0 podman[200201]: 2025-11-24 20:05:09.820050466 +0000 UTC m=+0.185049330 container start 12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:05:09 compute-0 podman[200201]: 2025-11-24 20:05:09.823831063 +0000 UTC m=+0.188829927 container attach 12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:05:09 compute-0 python3.9[200064]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 20:05:09 compute-0 systemd[1]: Reloading.
Nov 24 20:05:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:09.990+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:10 compute-0 systemd-rc-local-generator[200613]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:10 compute-0 systemd-sysv-generator[200623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:10 compute-0 sudo[200042]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:10 compute-0 ceph-mon[75677]: pgmap v644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:10.503+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:10 compute-0 happy_hugle[200342]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:05:10 compute-0 happy_hugle[200342]: --> relative data size: 1.0
Nov 24 20:05:10 compute-0 happy_hugle[200342]: --> All data devices are unavailable
Nov 24 20:05:10 compute-0 systemd[1]: libpod-12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495.scope: Deactivated successfully.
Nov 24 20:05:10 compute-0 systemd[1]: libpod-12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495.scope: Consumed 1.022s CPU time.
Nov 24 20:05:10 compute-0 podman[200201]: 2025-11-24 20:05:10.910313273 +0000 UTC m=+1.275312167 container died 12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:05:10 compute-0 sudo[201635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzvmtqhwhinpfqgutcbxhmbrrkxgwhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014710.532081-365-265779191030972/AnsiballZ_systemd.py'
Nov 24 20:05:10 compute-0 sudo[201635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-12a0b7efaae91db372c508c56a915d99b07dc784e676489a8480437813c64e23-merged.mount: Deactivated successfully.
Nov 24 20:05:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:10.965+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:11 compute-0 podman[200201]: 2025-11-24 20:05:11.003602032 +0000 UTC m=+1.368600916 container remove 12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 20:05:11 compute-0 systemd[1]: libpod-conmon-12896ca3c719cc52fa8fdad7d1fbd673e140f834a15335b5f1aae087e9c5f495.scope: Deactivated successfully.
Nov 24 20:05:11 compute-0 sudo[199340]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:11 compute-0 sudo[201756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:11 compute-0 sudo[201756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:11 compute-0 sudo[201756]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:11 compute-0 sudo[201838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:05:11 compute-0 sudo[201838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:11 compute-0 sudo[201838]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:11 compute-0 python3.9[201668]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:11 compute-0 sudo[201939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:11 compute-0 sudo[201939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:11 compute-0 sudo[201939]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:11 compute-0 sudo[202014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:05:11 compute-0 sudo[202014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:11 compute-0 systemd[1]: Reloading.
Nov 24 20:05:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:11 compute-0 systemd-rc-local-generator[202182]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:11 compute-0 systemd-sysv-generator[202186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:11.507+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.693244763 +0000 UTC m=+0.048557794 container create 792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:05:11 compute-0 systemd[1]: Started libpod-conmon-792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69.scope.
Nov 24 20:05:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 20:05:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 20:05:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.753s CPU time.
Nov 24 20:05:11 compute-0 systemd[1]: run-r77dafc6fadbf43c6bc20b42b511f8307.service: Deactivated successfully.
Nov 24 20:05:11 compute-0 sudo[201635]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.672553959 +0000 UTC m=+0.027867000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:05:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.795103675 +0000 UTC m=+0.150416796 container init 792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.804283412 +0000 UTC m=+0.159596423 container start 792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.807503004 +0000 UTC m=+0.162816025 container attach 792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hermann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:05:11 compute-0 wonderful_hermann[202356]: 167 167
Nov 24 20:05:11 compute-0 systemd[1]: libpod-792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69.scope: Deactivated successfully.
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.810943703 +0000 UTC m=+0.166256734 container died 792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e24416239f831ee05c672e1ea866f3a76b3bba2d1ac8c18055df6cba1ce186b1-merged.mount: Deactivated successfully.
Nov 24 20:05:11 compute-0 podman[202338]: 2025-11-24 20:05:11.840963369 +0000 UTC m=+0.196276390 container remove 792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hermann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:05:11 compute-0 systemd[1]: libpod-conmon-792e7a8dd93ecc623a5ce59bbd694bdab798a18c414d99b9404ca9d03cdb2c69.scope: Deactivated successfully.
Nov 24 20:05:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:11.977+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:11 compute-0 podman[202456]: 2025-11-24 20:05:11.993339305 +0000 UTC m=+0.046651206 container create 66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:05:12 compute-0 systemd[1]: Started libpod-conmon-66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7.scope.
Nov 24 20:05:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c46f6940f51ef9a1e91b23ac476d9b23aec3eddb3a06d2c40901d415c246681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c46f6940f51ef9a1e91b23ac476d9b23aec3eddb3a06d2c40901d415c246681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c46f6940f51ef9a1e91b23ac476d9b23aec3eddb3a06d2c40901d415c246681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c46f6940f51ef9a1e91b23ac476d9b23aec3eddb3a06d2c40901d415c246681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:12 compute-0 podman[202456]: 2025-11-24 20:05:11.974382815 +0000 UTC m=+0.027694706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:05:12 compute-0 podman[202456]: 2025-11-24 20:05:12.074783298 +0000 UTC m=+0.128095179 container init 66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:05:12 compute-0 podman[202456]: 2025-11-24 20:05:12.081539652 +0000 UTC m=+0.134851523 container start 66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:05:12 compute-0 podman[202456]: 2025-11-24 20:05:12.086128691 +0000 UTC m=+0.139440562 container attach 66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:05:12 compute-0 podman[202494]: 2025-11-24 20:05:12.095332368 +0000 UTC m=+0.065229235 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 20:05:12 compute-0 sudo[202569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idlenvwifwvoeygnaaxyfaadnzcnmyyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014711.8903189-365-47582390365083/AnsiballZ_systemd.py'
Nov 24 20:05:12 compute-0 sudo[202569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:12 compute-0 ceph-mon[75677]: pgmap v645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:12 compute-0 python3.9[202571]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:12.531+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:12 compute-0 systemd[1]: Reloading.
Nov 24 20:05:12 compute-0 systemd-rc-local-generator[202601]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:12 compute-0 systemd-sysv-generator[202605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:12 compute-0 vigilant_cray[202497]: {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:     "0": [
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:         {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "devices": [
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "/dev/loop3"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             ],
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_name": "ceph_lv0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_size": "21470642176",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "name": "ceph_lv0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "tags": {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cluster_name": "ceph",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.crush_device_class": "",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.encrypted": "0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osd_id": "0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.type": "block",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.vdo": "0"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             },
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "type": "block",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "vg_name": "ceph_vg0"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:         }
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:     ],
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:     "1": [
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:         {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "devices": [
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "/dev/loop4"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             ],
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_name": "ceph_lv1",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_size": "21470642176",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "name": "ceph_lv1",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "tags": {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cluster_name": "ceph",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.crush_device_class": "",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.encrypted": "0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osd_id": "1",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.type": "block",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.vdo": "0"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             },
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "type": "block",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "vg_name": "ceph_vg1"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:         }
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:     ],
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:     "2": [
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:         {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "devices": [
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "/dev/loop5"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             ],
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_name": "ceph_lv2",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_size": "21470642176",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "name": "ceph_lv2",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "tags": {
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.cluster_name": "ceph",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.crush_device_class": "",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.encrypted": "0",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osd_id": "2",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.type": "block",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:                 "ceph.vdo": "0"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             },
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "type": "block",
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:             "vg_name": "ceph_vg2"
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:         }
Nov 24 20:05:12 compute-0 vigilant_cray[202497]:     ]
Nov 24 20:05:12 compute-0 vigilant_cray[202497]: }
Nov 24 20:05:12 compute-0 podman[202456]: 2025-11-24 20:05:12.928781573 +0000 UTC m=+0.982093504 container died 66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:05:12 compute-0 systemd[1]: libpod-66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7.scope: Deactivated successfully.
Nov 24 20:05:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:12.959+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c46f6940f51ef9a1e91b23ac476d9b23aec3eddb3a06d2c40901d415c246681-merged.mount: Deactivated successfully.
Nov 24 20:05:13 compute-0 sudo[202569]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:13 compute-0 podman[202456]: 2025-11-24 20:05:13.010646768 +0000 UTC m=+1.063958649 container remove 66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:05:13 compute-0 systemd[1]: libpod-conmon-66bbaa021a466faebf7e37345bf98fd0af5ff61386865e3491f0bf9e032034b7.scope: Deactivated successfully.
Nov 24 20:05:13 compute-0 sudo[202014]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:13 compute-0 sudo[202644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:13 compute-0 sudo[202644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:13 compute-0 sudo[202644]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:13 compute-0 sudo[202691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:05:13 compute-0 sudo[202691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:13 compute-0 sudo[202691]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:13 compute-0 sudo[202734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:13 compute-0 sudo[202734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:13 compute-0 sudo[202734]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:13 compute-0 sudo[202786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:05:13 compute-0 sudo[202786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:13.574+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:13 compute-0 sudo[202890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixzjzvmcuqdeapfglttcsxjkwhskaqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014713.200408-365-24472925354420/AnsiballZ_systemd.py'
Nov 24 20:05:13 compute-0 sudo[202890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:13 compute-0 podman[202921]: 2025-11-24 20:05:13.83082137 +0000 UTC m=+0.069307821 container create 7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:05:13 compute-0 systemd[1]: Started libpod-conmon-7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17.scope.
Nov 24 20:05:13 compute-0 podman[202921]: 2025-11-24 20:05:13.8017685 +0000 UTC m=+0.040255001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:05:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:05:13 compute-0 python3.9[202894]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:13 compute-0 podman[202921]: 2025-11-24 20:05:13.928199795 +0000 UTC m=+0.166686286 container init 7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:05:13 compute-0 podman[202921]: 2025-11-24 20:05:13.940734179 +0000 UTC m=+0.179220630 container start 7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:05:13 compute-0 podman[202921]: 2025-11-24 20:05:13.944999199 +0000 UTC m=+0.183485640 container attach 7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:05:13 compute-0 elegant_brahmagupta[202938]: 167 167
Nov 24 20:05:13 compute-0 systemd[1]: libpod-7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17.scope: Deactivated successfully.
Nov 24 20:05:13 compute-0 podman[202921]: 2025-11-24 20:05:13.949775262 +0000 UTC m=+0.188261743 container died 7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-45b077e16a3c2846c2a51e9d8e5f7ea623c37f0c41bd082a5862a89f7c3d6b30-merged.mount: Deactivated successfully.
Nov 24 20:05:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:13.990+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:14 compute-0 podman[202921]: 2025-11-24 20:05:14.008342755 +0000 UTC m=+0.246829206 container remove 7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:05:14 compute-0 systemd[1]: Reloading.
Nov 24 20:05:14 compute-0 systemd-sysv-generator[202988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:14 compute-0 systemd-rc-local-generator[202983]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:14 compute-0 podman[202999]: 2025-11-24 20:05:14.22108971 +0000 UTC m=+0.040362224 container create 064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:05:14 compute-0 podman[202999]: 2025-11-24 20:05:14.204551232 +0000 UTC m=+0.023823766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:05:14 compute-0 systemd[1]: libpod-conmon-7942540f8ba4bdacee72efedac72142a460dc6b10f6436fe021c7e871aa70e17.scope: Deactivated successfully.
Nov 24 20:05:14 compute-0 systemd[1]: Started libpod-conmon-064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62.scope.
Nov 24 20:05:14 compute-0 sudo[202890]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511dc38935ba586ce673c1e7c684182e0ae1f3ff2a98d06dd3e8f5b67b34f0b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511dc38935ba586ce673c1e7c684182e0ae1f3ff2a98d06dd3e8f5b67b34f0b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511dc38935ba586ce673c1e7c684182e0ae1f3ff2a98d06dd3e8f5b67b34f0b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511dc38935ba586ce673c1e7c684182e0ae1f3ff2a98d06dd3e8f5b67b34f0b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:05:14 compute-0 podman[202999]: 2025-11-24 20:05:14.439236534 +0000 UTC m=+0.258509128 container init 064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:05:14 compute-0 podman[202999]: 2025-11-24 20:05:14.452117386 +0000 UTC m=+0.271389900 container start 064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:05:14 compute-0 podman[202999]: 2025-11-24 20:05:14.45539083 +0000 UTC m=+0.274663354 container attach 064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:05:14 compute-0 ceph-mon[75677]: pgmap v646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:14.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:14.991+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:15 compute-0 sudo[203170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpjjhgcsklzidgjggdnjqhchefpxdil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014714.602407-365-15417300306625/AnsiballZ_systemd.py'
Nov 24 20:05:15 compute-0 sudo[203170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:15 compute-0 python3.9[203172]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:15 compute-0 sudo[203170]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:15 compute-0 vigorous_noether[203016]: {
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "osd_id": 2,
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "type": "bluestore"
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:     },
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "osd_id": 1,
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "type": "bluestore"
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:     },
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "osd_id": 0,
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:         "type": "bluestore"
Nov 24 20:05:15 compute-0 vigorous_noether[203016]:     }
Nov 24 20:05:15 compute-0 vigorous_noether[203016]: }
Nov 24 20:05:15 compute-0 systemd[1]: libpod-064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62.scope: Deactivated successfully.
Nov 24 20:05:15 compute-0 systemd[1]: libpod-064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62.scope: Consumed 1.040s CPU time.
Nov 24 20:05:15 compute-0 podman[202999]: 2025-11-24 20:05:15.488634345 +0000 UTC m=+1.307906869 container died 064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:05:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:15.495+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-511dc38935ba586ce673c1e7c684182e0ae1f3ff2a98d06dd3e8f5b67b34f0b4-merged.mount: Deactivated successfully.
Nov 24 20:05:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:15 compute-0 podman[202999]: 2025-11-24 20:05:15.543936874 +0000 UTC m=+1.363209388 container remove 064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:05:15 compute-0 systemd[1]: libpod-conmon-064dd4f70220b87e9c8e99e351695fc8cafa7a35938224552275c10f4c009a62.scope: Deactivated successfully.
Nov 24 20:05:15 compute-0 sudo[202786]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:05:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:05:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:05:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:05:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 03bf2f93-a5e1-4fef-82ab-b1029157bc04 does not exist
Nov 24 20:05:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8a89d21f-c16c-4c94-8fb7-7e883283efd1 does not exist
Nov 24 20:05:15 compute-0 sudo[203277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:05:15 compute-0 sudo[203277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:15 compute-0 sudo[203277]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:15 compute-0 sudo[203319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:05:15 compute-0 sudo[203319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:05:15 compute-0 sudo[203319]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:15 compute-0 sudo[203417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huoodcunkmkkerlhooewfsjfnnyjgurt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014715.5653944-365-157661948687987/AnsiballZ_systemd.py'
Nov 24 20:05:15 compute-0 sudo[203417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:15.949+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:16 compute-0 python3.9[203419]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:16 compute-0 systemd[1]: Reloading.
Nov 24 20:05:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:16.459+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:16 compute-0 systemd-rc-local-generator[203448]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:16 compute-0 systemd-sysv-generator[203452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:16 compute-0 ceph-mon[75677]: pgmap v647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:05:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:05:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:16 compute-0 sudo[203417]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:16.917+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:17 compute-0 sudo[203607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxkvguqrvxfecwxzvojyqfmlpqkmkltr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014717.0046864-401-133848975501189/AnsiballZ_systemd.py'
Nov 24 20:05:17 compute-0 sudo[203607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:17.438+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 837 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:17 compute-0 python3.9[203609]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 20:05:17 compute-0 systemd[1]: Reloading.
Nov 24 20:05:17 compute-0 systemd-sysv-generator[203645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:05:17 compute-0 systemd-rc-local-generator[203640]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:05:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:17.956+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:18 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 20:05:18 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 20:05:18 compute-0 sudo[203607]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:18.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:18 compute-0 ceph-mon[75677]: pgmap v648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 837 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:18 compute-0 sudo[203801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhafzvgulmjfsozxwlptmtrjjwsjnujd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014718.3944247-409-30597158211844/AnsiballZ_systemd.py'
Nov 24 20:05:18 compute-0 sudo[203801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:18.912+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:19 compute-0 python3.9[203803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:19.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:19.887+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:20 compute-0 sudo[203801]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:20.498+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:20 compute-0 sudo[203956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sstoogfrndnlepjagkrzruaopetcztvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014720.3581142-409-232351423895267/AnsiballZ_systemd.py'
Nov 24 20:05:20 compute-0 sudo[203956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:20 compute-0 ceph-mon[75677]: pgmap v649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:20 compute-0 ceph-mon[75677]: pgmap v650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:20.906+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:21 compute-0 python3.9[203958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:21 compute-0 sudo[203956]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:21.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:21 compute-0 sudo[204111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhrpilztsbirhdoiyqmmlazqjulyxbrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014721.3509855-409-194957642243189/AnsiballZ_systemd.py'
Nov 24 20:05:21 compute-0 sudo[204111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:21.867+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:22 compute-0 python3.9[204113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:22.546+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:22.847+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:22 compute-0 ceph-mon[75677]: pgmap v651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:23 compute-0 sudo[204111]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:23.562+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:23 compute-0 sudo[204268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwcnojllkbmorwkdkrpywpagpeuzgop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014723.3331087-409-57065151318499/AnsiballZ_systemd.py'
Nov 24 20:05:23 compute-0 sudo[204268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:23.863+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:24 compute-0 python3.9[204270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:24 compute-0 sudo[204268]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:05:24
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta']
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:05:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:24.529+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:24 compute-0 sudo[204423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmawezheoyvnxhknhgczdugpgcefktsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014724.2828145-409-260129313809617/AnsiballZ_systemd.py'
Nov 24 20:05:24 compute-0 sudo[204423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:24.876+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:24 compute-0 ceph-mon[75677]: pgmap v652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:25 compute-0 python3.9[204425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:25 compute-0 sudo[204423]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:25.572+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:25 compute-0 sudo[204578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imyuictbuvgoaylipwgtdkmvsoigjqad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014725.280033-409-219022161630395/AnsiballZ_systemd.py'
Nov 24 20:05:25 compute-0 sudo[204578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:25.862+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:26 compute-0 python3.9[204580]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:26 compute-0 sudo[204578]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:26.618+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:26 compute-0 sudo[204733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpkzybevykvxwkukurkkkgowzjyqluxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014726.308803-409-196368744242819/AnsiballZ_systemd.py'
Nov 24 20:05:26 compute-0 sudo[204733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:26.904+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:26 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:26 compute-0 ceph-mon[75677]: pgmap v653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
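
The SLOW_OPS lines track two pinned client operations: osd.0 holds an omap-get-vals read of rbd_trash_purge_schedule:head (hence the 'vms' pool count of 1), and osd.1 holds a watch-ping write against data_log generations metadata (the 19 requests against default.rgw.log). The mon's "oldest one blocked for" figure advances with wall-clock time (842 s at 20:05:26, 852 s at 20:05:31, 857 s at 20:05:37), so the ops are stuck, not merely slow. The blocked ops themselves can be read from each OSD's admin socket; `ceph daemon osd.N dump_ops_in_flight` is the standard command, though on this cephadm-style host it would have to run inside each ceph-...-osd-N container:

    #!/usr/bin/env python3
    # Sketch: dump in-flight ops from both OSDs' admin sockets.
    # `ceph daemon <name> dump_ops_in_flight` is a standard admin-socket
    # command; access to each OSD's socket (container) is assumed.
    import json
    import subprocess

    for osd in ("osd.0", "osd.1"):
        raw = subprocess.check_output(["ceph", "daemon", osd, "dump_ops_in_flight"])
        ops = json.loads(raw)
        print(f"{osd}: {ops['num_ops']} op(s) in flight")
        for op in ops["ops"]:
            # 'description' matches the osd_op(...) text echoed in the log
            print(f"  age {op['age']:.0f}s  {op['description'][:80]}")
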
Nov 24 20:05:27 compute-0 python3.9[204735]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:27.666+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:27.887+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:27 compute-0 podman[204737]: 2025-11-24 20:05:27.917435608 +0000 UTC m=+0.140466366 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
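
The podman line above is a periodic health probe of the ovn_controller container: it runs the mounted /openstack/healthcheck script and records health_status=healthy with a zero failing streak. The same state can be read back, or a probe forced, with standard podman commands; a sketch, with the container name taken from the log:

    #!/usr/bin/env python3
    # Sketch: read a container's health state via `podman inspect`.
    # Podman has exposed the block as State.Health or State.Healthcheck
    # depending on version, so both keys are tried.
    import json
    import subprocess

    def health(name: str) -> str:
        data = json.loads(subprocess.check_output(["podman", "inspect", name]))
        state = data[0].get("State", {})
        hc = state.get("Health") or state.get("Healthcheck") or {}
        return hc.get("Status", "unknown")

    print("ovn_controller:", health("ovn_controller"))
    # One-off probe: `podman healthcheck run ovn_controller` (exit 0 = healthy)
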
Nov 24 20:05:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:28 compute-0 sudo[204733]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:28.642+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:28 compute-0 sudo[204914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvakjfkwlayabhgalovskeczwjavlide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014728.338995-409-97324657447431/AnsiballZ_systemd.py'
Nov 24 20:05:28 compute-0 sudo[204914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:28.906+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:28 compute-0 ceph-mon[75677]: pgmap v654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:29 compute-0 python3.9[204916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:29 compute-0 sudo[204914]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:29 compute-0 sudo[205069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbwoxyyedwhcsdhyomkseifzdncfmsgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014729.3081207-409-115124020501665/AnsiballZ_systemd.py'
Nov 24 20:05:29 compute-0 sudo[205069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:29.662+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:29.914+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:29 compute-0 python3.9[205071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:30 compute-0 sudo[205069]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:30 compute-0 sudo[205224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwwpsqyrzaymqijmuzuruaiwwdrqkxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014730.2366805-409-94455851886183/AnsiballZ_systemd.py'
Nov 24 20:05:30 compute-0 sudo[205224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:30.668+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:30 compute-0 python3.9[205226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:30.936+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:30 compute-0 ceph-mon[75677]: pgmap v655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
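
The pgmap lines repeat one steady picture across the whole excerpt: 305 PGs, of which 2 are active+clean+laggy (the two carrying the blocked ops) and 303 active+clean, with only 456 KiB of data on a 60 GiB cluster. A throwaway tally script for the latest pgmap in a saved journal, matching the line format above (illustrative only):

    #!/usr/bin/env python3
    # Sketch: extract the most recent pgmap summary from journal text on stdin.
    import re
    import sys

    pat = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")
    latest = None
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            latest = m
    if latest:
        version, total, states = latest.groups()
        print(f"pgmap v{version}, {total} pgs")
        for part in states.split(","):
            count, state = part.strip().split(" ", 1)
            print(f"  {state}: {count}")
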
Nov 24 20:05:31 compute-0 sudo[205224]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:31 compute-0 sudo[205379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evieakwqvjegfuqqsepghqxruyurtaoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014731.1635325-409-167207724027558/AnsiballZ_systemd.py'
Nov 24 20:05:31 compute-0 sudo[205379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 852 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:31.675+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:31 compute-0 python3.9[205381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:31 compute-0 sudo[205379]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:31.960+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:31 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 852 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:32 compute-0 sudo[205534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gixbrfjmrnuqevytjpzvptscbnagyyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014732.0779338-409-97027161711772/AnsiballZ_systemd.py'
Nov 24 20:05:32 compute-0 sudo[205534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:32.669+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:32 compute-0 python3.9[205536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:32 compute-0 sudo[205534]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:32.942+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:32 compute-0 ceph-mon[75677]: pgmap v656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:33 compute-0 sudo[205689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrkitssrfjgenlncwncxfltwudcyfsku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014733.0022645-409-139557729929991/AnsiballZ_systemd.py'
Nov 24 20:05:33 compute-0 sudo[205689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:33 compute-0 python3.9[205691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:33.682+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:33.985+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
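
The autoscaler lines fit a simple relationship: each "pg target" is usage_fraction * bias * a cluster PG budget of 300. For '.mgr', 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337, exactly as logged; for 'cephfs.cephfs.meta', 5.087256625643029e-07 * 4.0 * 300 = 0.0006104707950771635. A budget of 300 would be consistent with mon_target_pg_per_osd=100 across 3 OSDs, which is an inference; only osd.0 and osd.1 appear by name in this excerpt. The tiny targets are then quantized no lower than each pool's current pg_num, so nothing is resized. Reproducing the arithmetic:

    #!/usr/bin/env python3
    # Reproduce the pg_autoscaler arithmetic logged above. This mirrors the
    # relationship the logged numbers satisfy, not the mgr module's code.
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) x 3 OSDs

    pools = {  # name: (usage_fraction, bias) as logged
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (usage, bias) in pools.items():
        print(f"{name:20s} pg target {usage * bias * PG_BUDGET:.16g}")
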
Nov 24 20:05:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:34.681+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:34 compute-0 sudo[205689]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:34 compute-0 sshd-session[204116]: Received disconnect from 14.63.196.175 port 33746:11: Bye Bye [preauth]
Nov 24 20:05:34 compute-0 sshd-session[204116]: Disconnected from authenticating user root 14.63.196.175 port 33746 [preauth]
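
The two sshd-session lines are unrelated to the deployment job: a client at 14.63.196.175 tried to authenticate as root and was dropped pre-auth, the routine background noise of any internet-exposed SSH port. A small tally script for such lines, fed from `journalctl` output or a saved log file in the format above:

    #!/usr/bin/env python3
    # Sketch: count pre-auth disconnects per (user, source IP) from stdin.
    import re
    import sys
    from collections import Counter

    pat = re.compile(r"Disconnected from authenticating user (\S+) (\S+) port \d+")
    counts = Counter()
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    for (user, ip), n in counts.most_common():
        print(f"{n:5d}  {user}@{ip}")
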
Nov 24 20:05:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:35 compute-0 ceph-mon[75677]: pgmap v657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:35.033+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:35 compute-0 sudo[205844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckinljakurghsveadgvrkrusiyrgwnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014734.9647443-409-72687721629598/AnsiballZ_systemd.py'
Nov 24 20:05:35 compute-0 sudo[205844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:35.642+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:35 compute-0 python3.9[205846]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 20:05:35 compute-0 sudo[205844]: pam_unix(sudo:session): session closed for user root
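
The ansible-ansible.builtin.systemd tasks in this stretch step through libvirt's modular-daemon socket units, enabling the main, -ro, and -admin sockets for virtnodedevd, virtproxyd, virtqemud, and virtsecretd (enabled=True with state=None enables the unit without starting it). A quick verification pass with `systemctl is-enabled`; every unit name below except plain virtnodedevd.socket appears in the log, that one being presumed from the pattern:

    #!/usr/bin/env python3
    # Sketch: confirm enablement of the libvirt socket units the job touched.
    import subprocess

    DAEMONS = ("virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd")
    for d in DAEMONS:
        for suffix in ("", "-ro", "-admin"):
            unit = f"{d}{suffix}.socket"
            r = subprocess.run(["systemctl", "is-enabled", unit],
                               capture_output=True, text=True)
            print(f"{unit:26s} {(r.stdout or r.stderr).strip()}")
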
Nov 24 20:05:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:36.080+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:36.662+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:36 compute-0 sudo[205999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arqcootdaltrwvtjrryltwvsvjgiodnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014736.2928267-511-63092858153389/AnsiballZ_file.py'
Nov 24 20:05:36 compute-0 sudo[205999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:36 compute-0 python3.9[206001]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:05:36 compute-0 sudo[205999]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:37 compute-0 ceph-mon[75677]: pgmap v658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:37.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:37 compute-0 sudo[206151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hurxuqyjyvkopfyfkkxmvcpujertyfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014737.0828211-511-249052114139062/AnsiballZ_file.py'
Nov 24 20:05:37 compute-0 sudo[206151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:37 compute-0 python3.9[206153]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:05:37 compute-0 sudo[206151]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:37.666+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:38.096+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:38 compute-0 sudo[206303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owudfcxfjhkhhuxhyviwdymplemsqqhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014737.8397305-511-159210769961599/AnsiballZ_file.py'
Nov 24 20:05:38 compute-0 sudo[206303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:38 compute-0 python3.9[206305]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:05:38 compute-0 sudo[206303]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:38.665+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:38 compute-0 sudo[206455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lousbdmtcutthhiimrrxfzcivcqnhgjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014738.5927515-511-35179968557566/AnsiballZ_file.py'
Nov 24 20:05:38 compute-0 sudo[206455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:39 compute-0 ceph-mon[75677]: pgmap v659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:39.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:39 compute-0 python3.9[206457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:05:39 compute-0 sudo[206455]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:39.624+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:39 compute-0 sudo[206607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmtodepwzuruuskrdmjfzttinujvfqlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014739.3773596-511-245064694321661/AnsiballZ_file.py'
Nov 24 20:05:39 compute-0 sudo[206607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:39 compute-0 python3.9[206609]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:05:39 compute-0 sudo[206607]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:40.079+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
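
Here the mgr's rbd_support module reloads mirror-snapshot schedules for the four RBD pools. Notably, the op stuck on osd.0 for roughly 14 minutes is an omap read of rbd_trash_purge_schedule:head, the object the same module's trash-purge scheduler keeps in the 'vms' pool, so this module's periodic reloads are a plausible origin of that blocked read. The schedules themselves can be listed with the standard rbd CLI:

    #!/usr/bin/env python3
    # Sketch: list rbd schedules cluster-wide. Both subcommands are standard
    # in recent rbd CLIs; credentials/container access as for the ceph CLI.
    import subprocess

    for args in (("mirror", "snapshot", "schedule", "ls", "--recursive"),
                 ("trash", "purge", "schedule", "ls", "--recursive")):
        print("$ rbd", " ".join(args))
        r = subprocess.run(["rbd", *args], capture_output=True, text=True)
        print(r.stdout or r.stderr)
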
Nov 24 20:05:40 compute-0 sudo[206759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdblellfpxdzpmcpgnyvdxcfdzgpvnjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014740.157468-511-178922916381676/AnsiballZ_file.py'
Nov 24 20:05:40 compute-0 sudo[206759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:40.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:40 compute-0 python3.9[206761]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:05:40 compute-0 sudo[206759]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:41.068+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:41 compute-0 ceph-mon[75677]: pgmap v660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:41 compute-0 sudo[206911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxvfvnafblibpgwwqlttrtjxfqxqksde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014740.9438376-554-12117823191307/AnsiballZ_stat.py'
Nov 24 20:05:41 compute-0 sudo[206911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:41.542+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:41 compute-0 python3.9[206913]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:41 compute-0 sudo[206911]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:42.087+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:42 compute-0 sudo[207053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqhofdqbhezbedxpeuyhuoiylabybput ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014740.9438376-554-12117823191307/AnsiballZ_copy.py'
Nov 24 20:05:42 compute-0 sudo[207053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:42 compute-0 podman[207010]: 2025-11-24 20:05:42.32069582 +0000 UTC m=+0.102967338 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 20:05:42 compute-0 python3.9[207057]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014740.9438376-554-12117823191307/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:42 compute-0 sudo[207053]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:42.560+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:43 compute-0 sudo[207207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthnhmjueuzddzgedzpzprutffzakbpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014742.7117217-554-250395327436045/AnsiballZ_stat.py'
Nov 24 20:05:43 compute-0 sudo[207207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:43.116+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:43 compute-0 ceph-mon[75677]: pgmap v661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:43 compute-0 python3.9[207209]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:43 compute-0 sudo[207207]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:43.608+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:43 compute-0 sudo[207332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqkmdluilnzhlkxiemtbkaqfbbxjmkus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014742.7117217-554-250395327436045/AnsiballZ_copy.py'
Nov 24 20:05:43 compute-0 sudo[207332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:44 compute-0 python3.9[207334]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014742.7117217-554-250395327436045/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:44 compute-0 sudo[207332]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:44.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:44 compute-0 sudo[207484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poyvjprqyzfwypgmaoylhvoskchxcsuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014744.2046573-554-1706931787130/AnsiballZ_stat.py'
Nov 24 20:05:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:44.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:44 compute-0 sudo[207484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:44 compute-0 python3.9[207486]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:44 compute-0 sudo[207484]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:45.098+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:45 compute-0 ceph-mon[75677]: pgmap v662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:45 compute-0 sudo[207609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilrwdhzgymxopajkejfupfinghpkopuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014744.2046573-554-1706931787130/AnsiballZ_copy.py'
Nov 24 20:05:45 compute-0 sudo[207609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:45 compute-0 python3.9[207611]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014744.2046573-554-1706931787130/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:45 compute-0 sudo[207609]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:45.550+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:46.105+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:46 compute-0 sudo[207761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xogjanywjummmtcwtcyvcniugxawfycy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014745.750853-554-37291857568140/AnsiballZ_stat.py'
Nov 24 20:05:46 compute-0 sudo[207761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:46 compute-0 python3.9[207763]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:46 compute-0 sudo[207761]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:46.516+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:46 compute-0 sudo[207886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwsdtqomztmfrsotiltvkypwcsbxlvev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014745.750853-554-37291857568140/AnsiballZ_copy.py'
Nov 24 20:05:46 compute-0 sudo[207886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:47 compute-0 python3.9[207888]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014745.750853-554-37291857568140/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:47 compute-0 sudo[207886]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:47.119+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:47 compute-0 ceph-mon[75677]: pgmap v663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:47.489+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:47 compute-0 sudo[208038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktjccuzmuecsvrhrvfzunwrqnakpfobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014747.3021688-554-196958969949217/AnsiballZ_stat.py'
Nov 24 20:05:47 compute-0 sudo[208038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:47 compute-0 python3.9[208040]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:47 compute-0 sudo[208038]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:48.132+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:48 compute-0 sudo[208163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lriqxmfzcxprjxtgmwqmroyninjywjgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014747.3021688-554-196958969949217/AnsiballZ_copy.py'
Nov 24 20:05:48 compute-0 sudo[208163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:48.462+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:48 compute-0 python3.9[208165]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014747.3021688-554-196958969949217/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:48 compute-0 sudo[208163]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:49 compute-0 sudo[208315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xludcxykxjhhobnrzqfrowsamwsihgsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014748.7552063-554-13426636591938/AnsiballZ_stat.py'
Nov 24 20:05:49 compute-0 sudo[208315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:49.144+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:49 compute-0 ceph-mon[75677]: pgmap v664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:49 compute-0 python3.9[208317]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:49 compute-0 sudo[208315]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:49.416+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:49 compute-0 sudo[208440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuvxqqdaaoivegqnoegtddwxahmuxczt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014748.7552063-554-13426636591938/AnsiballZ_copy.py'
Nov 24 20:05:49 compute-0 sudo[208440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:50 compute-0 python3.9[208442]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014748.7552063-554-13426636591938/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:50 compute-0 sudo[208440]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:50.122+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:50.426+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:50 compute-0 sudo[208592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esgwphlqshbwrqigvfrzqdjfeuzyzxag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014750.2475083-554-19233758208418/AnsiballZ_stat.py'
Nov 24 20:05:50 compute-0 sudo[208592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:50 compute-0 python3.9[208594]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:50 compute-0 sudo[208592]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:51.074+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:51 compute-0 sudo[208715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skeeqqmucqzzfejzemtqnsnwuxgnzetz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014750.2475083-554-19233758208418/AnsiballZ_copy.py'
Nov 24 20:05:51 compute-0 sudo[208715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:51 compute-0 python3.9[208717]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014750.2475083-554-19233758208418/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:51 compute-0 sudo[208715]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:51 compute-0 ceph-mon[75677]: pgmap v665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:51.456+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 872 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:52.048+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:52 compute-0 sudo[208867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnciaysehycvsraasdwpibepwdoxxxlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014751.7597735-554-42472225674716/AnsiballZ_stat.py'
Nov 24 20:05:52 compute-0 sudo[208867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:52 compute-0 python3.9[208869]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:05:52 compute-0 sudo[208867]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:52.460+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 872 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:52 compute-0 sudo[208992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdjhngoelpmaxtpzutpbqwvcvnhbfsaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014751.7597735-554-42472225674716/AnsiballZ_copy.py'
Nov 24 20:05:52 compute-0 sudo[208992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:52 compute-0 python3.9[208994]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764014751.7597735-554-42472225674716/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:53 compute-0 sudo[208992]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:53.092+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:53.418+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:53 compute-0 ceph-mon[75677]: pgmap v666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:53 compute-0 sudo[209144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdpdxeflepjoofegxxnwtufcedpddki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014753.2041247-667-79430571344027/AnsiballZ_command.py'
Nov 24 20:05:53 compute-0 sudo[209144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:53 compute-0 python3.9[209146]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 20:05:53 compute-0 sudo[209144]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:54.074+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:54.369+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:05:54 compute-0 sudo[209297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkflpdnurgojsjogwlcdiofqaslzwwrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014754.1269562-676-249346576438732/AnsiballZ_file.py'
Nov 24 20:05:54 compute-0 sudo[209297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:54 compute-0 python3.9[209299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:54 compute-0 sudo[209297]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:55.047+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:55 compute-0 sudo[209449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhocuvxokkesszlfcpatmvapjazbtkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014754.8222744-676-253650181331982/AnsiballZ_file.py'
Nov 24 20:05:55 compute-0 sudo[209449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:55.364+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:55 compute-0 python3.9[209451]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:55 compute-0 sudo[209449]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:55 compute-0 ceph-mon[75677]: pgmap v667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:56.097+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:56 compute-0 sudo[209601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjvxcgobonrsgyfziteucvgwqxcewjqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014755.6345646-676-276509213529594/AnsiballZ_file.py'
Nov 24 20:05:56 compute-0 sudo[209601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:56.325+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:56 compute-0 python3.9[209603]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:56 compute-0 sudo[209601]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:05:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:56 compute-0 sudo[209753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxyjhhbjkstrorstqoptqagfukyjgprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014756.564155-676-137592907343759/AnsiballZ_file.py'
Nov 24 20:05:56 compute-0 sudo[209753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:57.059+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:57 compute-0 python3.9[209755]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:57 compute-0 sudo[209753]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:57.366+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 877 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:57 compute-0 ceph-mon[75677]: pgmap v668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:57 compute-0 sudo[209905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piwfkhurfxfjtmzruvbiguopjbmsgmwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014757.290275-676-127051362787223/AnsiballZ_file.py'
Nov 24 20:05:57 compute-0 sudo[209905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:57 compute-0 python3.9[209907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:57 compute-0 sudo[209905]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:58.048+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:58.349+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:58 compute-0 sudo[210077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ornfyiwtfbjncieygakczyrjfyzgakso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014758.0408466-676-230220156883688/AnsiballZ_file.py'
Nov 24 20:05:58 compute-0 sudo[210077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:58 compute-0 podman[210031]: 2025-11-24 20:05:58.463825509 +0000 UTC m=+0.166084783 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 20:05:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 877 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:05:58 compute-0 python3.9[210082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:58 compute-0 sudo[210077]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:59 compute-0 sudo[210235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skavtsqcyamlkrkzqsqefbadeuzwuprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014758.7168295-676-62079855976276/AnsiballZ_file.py'
Nov 24 20:05:59 compute-0 sudo[210235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:05:59.089+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:05:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:59 compute-0 python3.9[210237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:59 compute-0 sudo[210235]: pam_unix(sudo:session): session closed for user root
Nov 24 20:05:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:05:59.375+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:05:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:05:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:05:59 compute-0 ceph-mon[75677]: pgmap v669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:05:59 compute-0 sudo[210387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apdapiiiyskqhmiuzaqpufyxhoaezqop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014759.3895736-676-192351717860570/AnsiballZ_file.py'
Nov 24 20:05:59 compute-0 sudo[210387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:05:59 compute-0 python3.9[210389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:05:59 compute-0 sudo[210387]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:00.110+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:00.419+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:00 compute-0 sudo[210539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgvhrofvyzfkzhvjznfqmbxujcxyqmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014760.1234865-676-113840297724275/AnsiballZ_file.py'
Nov 24 20:06:00 compute-0 sudo[210539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:00 compute-0 python3.9[210541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:00 compute-0 sudo[210539]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:01.114+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:01 compute-0 sudo[210691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpibtyiqacoymiovropgjirlgstweige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014760.958693-676-147253169350829/AnsiballZ_file.py'
Nov 24 20:06:01 compute-0 sudo[210691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:01.374+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:01 compute-0 python3.9[210693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:01 compute-0 sudo[210691]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:01 compute-0 ceph-mon[75677]: pgmap v670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:02 compute-0 sudo[210843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osfualwrnxpkeliyohhdnudkmiqlqzzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014761.6890192-676-36537753961099/AnsiballZ_file.py'
Nov 24 20:06:02 compute-0 sudo[210843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:02.117+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:02 compute-0 python3.9[210845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:02 compute-0 sudo[210843]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:02.421+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:02 compute-0 sudo[210995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wytidknupldqlxwutmtnzzwxqaupvdav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014762.5012572-676-214173203118897/AnsiballZ_file.py'
Nov 24 20:06:02 compute-0 sudo[210995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:03.074+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:03 compute-0 python3.9[210997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:03 compute-0 sudo[210995]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:03.431+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:03 compute-0 ceph-mon[75677]: pgmap v671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:03 compute-0 sudo[211147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbsmhdhmzfpxmrqrspilkqjyzfjjdkfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014763.298659-676-263022057567119/AnsiballZ_file.py'
Nov 24 20:06:03 compute-0 sudo[211147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:03 compute-0 python3.9[211149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:03 compute-0 sudo[211147]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:04.025+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:04 compute-0 sudo[211299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvrjppbwyvvhsrojfimunhgpjivivnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014764.0639913-676-194148453351549/AnsiballZ_file.py'
Nov 24 20:06:04 compute-0 sudo[211299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:04.446+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:04 compute-0 python3.9[211301]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:04 compute-0 sudo[211299]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:05.060+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:05 compute-0 sudo[211451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqsyybtyuavtuhqdoyapszskpeeflrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014764.9202812-775-89849210454102/AnsiballZ_stat.py'
Nov 24 20:06:05 compute-0 sudo[211451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:05.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:05 compute-0 ceph-mon[75677]: pgmap v672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:05 compute-0 python3.9[211453]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:05 compute-0 sudo[211451]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:06 compute-0 sudo[211574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfngqkeerjsxhpozssrlsxvgnxqlehbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014764.9202812-775-89849210454102/AnsiballZ_copy.py'
Nov 24 20:06:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:06.028+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:06 compute-0 sudo[211574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:06 compute-0 python3.9[211576]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014764.9202812-775-89849210454102/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:06 compute-0 sudo[211574]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:06.413+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 882 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:06 compute-0 sudo[211726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvbgyogkpdixgodrksdzbhivalzdhjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014766.4376895-775-276461633940404/AnsiballZ_stat.py'
Nov 24 20:06:06 compute-0 sudo[211726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:06 compute-0 python3.9[211728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:06 compute-0 sudo[211726]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:07.028+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:07 compute-0 sudo[211849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evaervfawzqdnjoixhpsqoiwgzwrgsul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014766.4376895-775-276461633940404/AnsiballZ_copy.py'
Nov 24 20:06:07 compute-0 sudo[211849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:07.446+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:07 compute-0 python3.9[211851]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014766.4376895-775-276461633940404/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:07 compute-0 sudo[211849]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 882 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:07 compute-0 ceph-mon[75677]: pgmap v673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:08.012+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:08 compute-0 sudo[212001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxcizogapqfbvmhzubfonvwshprywxkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014767.7328138-775-145181261936258/AnsiballZ_stat.py'
Nov 24 20:06:08 compute-0 sudo[212001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:08 compute-0 python3.9[212003]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:08 compute-0 sudo[212001]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:08.418+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:08 compute-0 sudo[212124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftadtuufmeybymrouviribanxplevuzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014767.7328138-775-145181261936258/AnsiballZ_copy.py'
Nov 24 20:06:08 compute-0 sudo[212124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:08 compute-0 python3.9[212126]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014767.7328138-775-145181261936258/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:08 compute-0 sudo[212124]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:09.042+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:06:09.356 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:06:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:06:09.356 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:06:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:06:09.356 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:06:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:09.402+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:09 compute-0 sudo[212276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrhwsexikilbrtmxsinqjscbstjykij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014769.1153107-775-36402009607755/AnsiballZ_stat.py'
Nov 24 20:06:09 compute-0 sudo[212276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:09 compute-0 ceph-mon[75677]: pgmap v674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:09 compute-0 python3.9[212278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:09 compute-0 sudo[212276]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:10.060+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:10 compute-0 sudo[212399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atsgeqtqwkvttkaaoxzolzaqbghrtqak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014769.1153107-775-36402009607755/AnsiballZ_copy.py'
Nov 24 20:06:10 compute-0 sudo[212399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:10 compute-0 python3.9[212401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014769.1153107-775-36402009607755/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:10 compute-0 sudo[212399]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:10.438+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:10 compute-0 sudo[212551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdpnmtyrdunevejqtukperdkuwnhjdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014770.496579-775-18802782656591/AnsiballZ_stat.py'
Nov 24 20:06:10 compute-0 sudo[212551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:11.064+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:11 compute-0 python3.9[212553]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:11 compute-0 sudo[212551]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:11.401+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:11 compute-0 sudo[212674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrhyfxeitidekydktfnnsmcsmhnbmakh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014770.496579-775-18802782656591/AnsiballZ_copy.py'
Nov 24 20:06:11 compute-0 sudo[212674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 892 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:11 compute-0 ceph-mon[75677]: pgmap v675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:11 compute-0 python3.9[212676]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014770.496579-775-18802782656591/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:11 compute-0 sudo[212674]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:12.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:12 compute-0 sudo[212826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmhpdetisjcydegxvvxbipdpmxpselt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014771.9776978-775-102126685621548/AnsiballZ_stat.py'
Nov 24 20:06:12 compute-0 sudo[212826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:12.386+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:12 compute-0 podman[212828]: 2025-11-24 20:06:12.470289361 +0000 UTC m=+0.073025782 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:06:12 compute-0 python3.9[212829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:12 compute-0 sudo[212826]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 892 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:13 compute-0 sudo[212968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iztymheeltuivvotwxipfnzwsnxxrwtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014771.9776978-775-102126685621548/AnsiballZ_copy.py'
Nov 24 20:06:13 compute-0 sudo[212968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:13.105+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:13 compute-0 python3.9[212970]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014771.9776978-775-102126685621548/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:13 compute-0 sudo[212968]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:13.380+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:13 compute-0 ceph-mon[75677]: pgmap v676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:13 compute-0 sudo[213120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmvizhickkgvzkvpsusjqfkommapbklx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014773.4327555-775-127248672753863/AnsiballZ_stat.py'
Nov 24 20:06:13 compute-0 sudo[213120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:14 compute-0 python3.9[213122]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:14 compute-0 sudo[213120]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:14.127+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:14.410+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:14 compute-0 sudo[213243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avhdsfptcebejnywqudbqjcgrlyvfjti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014773.4327555-775-127248672753863/AnsiballZ_copy.py'
Nov 24 20:06:14 compute-0 sudo[213243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:14 compute-0 python3.9[213245]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014773.4327555-775-127248672753863/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:14 compute-0 sudo[213243]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:15.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:15.403+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:15 compute-0 sudo[213395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdvmfakrvoliiipaxsghctapgajjslw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014775.0868804-775-149431335244758/AnsiballZ_stat.py'
Nov 24 20:06:15 compute-0 sudo[213395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:15 compute-0 python3.9[213397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:15 compute-0 sudo[213395]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:15 compute-0 ceph-mon[75677]: pgmap v677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:15 compute-0 sudo[213421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:15 compute-0 sudo[213421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:15 compute-0 sudo[213421]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:15 compute-0 sudo[213470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:06:15 compute-0 sudo[213470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:15 compute-0 sudo[213470]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:16 compute-0 sudo[213518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:16 compute-0 sudo[213518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:16 compute-0 sudo[213518]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:16 compute-0 sudo[213567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:06:16 compute-0 sudo[213567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:16.119+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:16 compute-0 sudo[213617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlacianvjdswcoxmbnxflnwulaugsmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014775.0868804-775-149431335244758/AnsiballZ_copy.py'
Nov 24 20:06:16 compute-0 sudo[213617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:16 compute-0 python3.9[213620]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014775.0868804-775-149431335244758/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:16.365+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:16 compute-0 sudo[213617]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:16 compute-0 sudo[213567]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:06:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:06:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:06:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:06:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0e539bbb-f6da-4f9a-8383-a976f6559a30 does not exist
Nov 24 20:06:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e762698a-4d29-43c0-b52c-d288f022d3c6 does not exist
Nov 24 20:06:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ff160664-f5f3-43e8-988d-310896da571c does not exist
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:06:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:06:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:06:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:06:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:06:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:16 compute-0 sudo[213770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:16 compute-0 sudo[213770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:16 compute-0 sudo[213770]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:16 compute-0 sudo[213828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbkvrfhnzvsanjgbjknhtcssuifylroc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014776.553827-775-280798069842488/AnsiballZ_stat.py'
Nov 24 20:06:16 compute-0 sudo[213828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:16 compute-0 sudo[213827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:06:16 compute-0 sudo[213827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:16 compute-0 sudo[213827]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:17 compute-0 sudo[213855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:17 compute-0 sudo[213855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:17 compute-0 sudo[213855]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:17 compute-0 sudo[213880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:06:17 compute-0 sudo[213880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:17 compute-0 python3.9[213849]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:17 compute-0 sudo[213828]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:17.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:17.330+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.432014544 +0000 UTC m=+0.045705298 container create 1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 20:06:17 compute-0 systemd[1]: Started libpod-conmon-1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721.scope.
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.410955405 +0000 UTC m=+0.024646189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:06:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.529813488 +0000 UTC m=+0.143504242 container init 1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.537289516 +0000 UTC m=+0.150980260 container start 1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.541491324 +0000 UTC m=+0.155182058 container attach 1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:06:17 compute-0 practical_ganguly[214053]: 167 167
Nov 24 20:06:17 compute-0 systemd[1]: libpod-1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721.scope: Deactivated successfully.
Nov 24 20:06:17 compute-0 conmon[214053]: conmon 1a7e3e86e23c91683b37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721.scope/container/memory.events
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.546652138 +0000 UTC m=+0.160342892 container died 1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:06:17 compute-0 sudo[214082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvsmlptysoyqimbynaklzlfkwubqvqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014776.553827-775-280798069842488/AnsiballZ_copy.py'
Nov 24 20:06:17 compute-0 sudo[214082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c30e26f0ab4b75cdd87bf57e2a833c7571c510880d7770bba19a7847508b47a-merged.mount: Deactivated successfully.
Nov 24 20:06:17 compute-0 podman[214013]: 2025-11-24 20:06:17.593136057 +0000 UTC m=+0.206826801 container remove 1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 20:06:17 compute-0 systemd[1]: libpod-conmon-1a7e3e86e23c91683b3728ae2d44b61ca8d8120b8ec35bc53980c37d0d64e721.scope: Deactivated successfully.
Nov 24 20:06:17 compute-0 python3.9[214088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014776.553827-775-280798069842488/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:06:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:06:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:06:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:06:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:06:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:06:17 compute-0 ceph-mon[75677]: pgmap v678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:17 compute-0 sudo[214082]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 897 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:17 compute-0 podman[214107]: 2025-11-24 20:06:17.823329481 +0000 UTC m=+0.059504645 container create 56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:06:17 compute-0 systemd[1]: Started libpod-conmon-56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a.scope.
Nov 24 20:06:17 compute-0 podman[214107]: 2025-11-24 20:06:17.791507791 +0000 UTC m=+0.027683045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:06:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efca97946576cf7a8216d862a27a9d7f48b32c2d03a5d5e13ecf01f931167c22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efca97946576cf7a8216d862a27a9d7f48b32c2d03a5d5e13ecf01f931167c22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efca97946576cf7a8216d862a27a9d7f48b32c2d03a5d5e13ecf01f931167c22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efca97946576cf7a8216d862a27a9d7f48b32c2d03a5d5e13ecf01f931167c22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efca97946576cf7a8216d862a27a9d7f48b32c2d03a5d5e13ecf01f931167c22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:17 compute-0 podman[214107]: 2025-11-24 20:06:17.923964434 +0000 UTC m=+0.160139688 container init 56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:06:17 compute-0 podman[214107]: 2025-11-24 20:06:17.935307151 +0000 UTC m=+0.171482325 container start 56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:06:17 compute-0 podman[214107]: 2025-11-24 20:06:17.940048823 +0000 UTC m=+0.176224077 container attach 56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:06:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:18.157+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:18 compute-0 sudo[214278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcunwkvejanalsjzhsbfmofejdxkmojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014777.9450846-775-130131429594782/AnsiballZ_stat.py'
Nov 24 20:06:18 compute-0 sudo[214278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:18.352+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:18 compute-0 python3.9[214280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:18 compute-0 sudo[214278]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 897 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:18 compute-0 sudo[214421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptgskxptgnlmpeqowquxrjbkkakmqnrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014777.9450846-775-130131429594782/AnsiballZ_copy.py'
Nov 24 20:06:18 compute-0 sudo[214421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:19 compute-0 peaceful_tesla[214148]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:06:19 compute-0 peaceful_tesla[214148]: --> relative data size: 1.0
Nov 24 20:06:19 compute-0 peaceful_tesla[214148]: --> All data devices are unavailable
Nov 24 20:06:19 compute-0 systemd[1]: libpod-56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a.scope: Deactivated successfully.
Nov 24 20:06:19 compute-0 systemd[1]: libpod-56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a.scope: Consumed 1.052s CPU time.
Nov 24 20:06:19 compute-0 podman[214107]: 2025-11-24 20:06:19.057562596 +0000 UTC m=+1.293737750 container died 56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:06:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-efca97946576cf7a8216d862a27a9d7f48b32c2d03a5d5e13ecf01f931167c22-merged.mount: Deactivated successfully.
Nov 24 20:06:19 compute-0 podman[214107]: 2025-11-24 20:06:19.132752377 +0000 UTC m=+1.368927541 container remove 56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:06:19 compute-0 systemd[1]: libpod-conmon-56009e195fb41356aac7e94d201e107954fd8f0fa06b615e81e98c800f6fa87a.scope: Deactivated successfully.
Nov 24 20:06:19 compute-0 sudo[213880]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:19 compute-0 python3.9[214424]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014777.9450846-775-130131429594782/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:19.206+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:19 compute-0 sudo[214421]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:19 compute-0 sudo[214441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:19 compute-0 sudo[214441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:19 compute-0 sudo[214441]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:19 compute-0 sudo[214471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:06:19 compute-0 sudo[214471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:19 compute-0 sudo[214471]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:19.352+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:19 compute-0 sudo[214518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:19 compute-0 sudo[214518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:19 compute-0 sudo[214518]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:19 compute-0 sudo[214579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:06:19 compute-0 sudo[214579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:19 compute-0 sudo[214703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baenahcjvdyejkslimrtxvmuikcnllsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014779.3969557-775-229629408593855/AnsiballZ_stat.py'
Nov 24 20:06:19 compute-0 sudo[214703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:19 compute-0 ceph-mon[75677]: pgmap v679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:19 compute-0 python3.9[214712]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:19 compute-0 sudo[214703]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:19 compute-0 podman[214734]: 2025-11-24 20:06:19.896016249 +0000 UTC m=+0.104778144 container create 626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:06:19 compute-0 podman[214734]: 2025-11-24 20:06:19.830080584 +0000 UTC m=+0.038842519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:06:19 compute-0 systemd[1]: Started libpod-conmon-626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3.scope.
Nov 24 20:06:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:06:20 compute-0 podman[214734]: 2025-11-24 20:06:20.03998505 +0000 UTC m=+0.248747025 container init 626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:06:20 compute-0 podman[214734]: 2025-11-24 20:06:20.052432399 +0000 UTC m=+0.261194314 container start 626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banzai, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:06:20 compute-0 podman[214734]: 2025-11-24 20:06:20.056838569 +0000 UTC m=+0.265600494 container attach 626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banzai, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:06:20 compute-0 intelligent_banzai[214771]: 167 167
Nov 24 20:06:20 compute-0 systemd[1]: libpod-626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3.scope: Deactivated successfully.
Nov 24 20:06:20 compute-0 podman[214734]: 2025-11-24 20:06:20.059455591 +0000 UTC m=+0.268217506 container died 626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-089bdbe397dd0369f844c97653b21f8b88ebb41f168b00b0a981cd03965e64f4-merged.mount: Deactivated successfully.
Nov 24 20:06:20 compute-0 podman[214734]: 2025-11-24 20:06:20.111366924 +0000 UTC m=+0.320128819 container remove 626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:06:20 compute-0 systemd[1]: libpod-conmon-626dd9b05d47c4ea61d0d26a7fb602f25e0b1b5ec412ae46568ee30957584dc3.scope: Deactivated successfully.
Nov 24 20:06:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:20.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:20 compute-0 podman[214851]: 2025-11-24 20:06:20.278843755 +0000 UTC m=+0.051739109 container create b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:06:20 compute-0 systemd[1]: Started libpod-conmon-b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86.scope.
Nov 24 20:06:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fae434d75524100a06e4b33ef693c7b8eca7caec3a0ee307f34eafcc0c9f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fae434d75524100a06e4b33ef693c7b8eca7caec3a0ee307f34eafcc0c9f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fae434d75524100a06e4b33ef693c7b8eca7caec3a0ee307f34eafcc0c9f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fae434d75524100a06e4b33ef693c7b8eca7caec3a0ee307f34eafcc0c9f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:20.341+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:20 compute-0 sudo[214912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqaivfgpmubddtrezpyrgkrmidqajrvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014779.3969557-775-229629408593855/AnsiballZ_copy.py'
Nov 24 20:06:20 compute-0 sudo[214912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:20 compute-0 podman[214851]: 2025-11-24 20:06:20.260453545 +0000 UTC m=+0.033348949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:06:20 compute-0 podman[214851]: 2025-11-24 20:06:20.351650368 +0000 UTC m=+0.124545722 container init b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 24 20:06:20 compute-0 podman[214851]: 2025-11-24 20:06:20.360681294 +0000 UTC m=+0.133576668 container start b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_stonebraker, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:06:20 compute-0 podman[214851]: 2025-11-24 20:06:20.363785788 +0000 UTC m=+0.136681142 container attach b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:06:20 compute-0 python3.9[214915]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014779.3969557-775-229629408593855/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:20 compute-0 sudo[214912]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:21 compute-0 sudo[215069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztkiipxelmvhcfgacwurzjdkgwwmijdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014780.6963544-775-202258294995361/AnsiballZ_stat.py'
Nov 24 20:06:21 compute-0 sudo[215069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]: {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:     "0": [
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:         {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "devices": [
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "/dev/loop3"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             ],
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_name": "ceph_lv0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_size": "21470642176",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "name": "ceph_lv0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "tags": {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cluster_name": "ceph",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.crush_device_class": "",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.encrypted": "0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osd_id": "0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.type": "block",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.vdo": "0"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             },
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "type": "block",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "vg_name": "ceph_vg0"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:         }
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:     ],
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:     "1": [
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:         {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "devices": [
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "/dev/loop4"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             ],
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_name": "ceph_lv1",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_size": "21470642176",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "name": "ceph_lv1",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "tags": {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cluster_name": "ceph",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.crush_device_class": "",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.encrypted": "0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osd_id": "1",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.type": "block",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.vdo": "0"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             },
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "type": "block",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "vg_name": "ceph_vg1"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:         }
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:     ],
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:     "2": [
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:         {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "devices": [
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "/dev/loop5"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             ],
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_name": "ceph_lv2",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_size": "21470642176",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "name": "ceph_lv2",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "tags": {
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.cluster_name": "ceph",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.crush_device_class": "",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.encrypted": "0",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osd_id": "2",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.type": "block",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:                 "ceph.vdo": "0"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             },
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "type": "block",
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:             "vg_name": "ceph_vg2"
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:         }
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]:     ]
Nov 24 20:06:21 compute-0 heuristic_stonebraker[214910]: }
Nov 24 20:06:21 compute-0 systemd[1]: libpod-b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86.scope: Deactivated successfully.
Nov 24 20:06:21 compute-0 podman[214851]: 2025-11-24 20:06:21.133565223 +0000 UTC m=+0.906460607 container died b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-15fae434d75524100a06e4b33ef693c7b8eca7caec3a0ee307f34eafcc0c9f73-merged.mount: Deactivated successfully.
Nov 24 20:06:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:21.185+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:21 compute-0 podman[214851]: 2025-11-24 20:06:21.195557662 +0000 UTC m=+0.968453056 container remove b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:06:21 compute-0 systemd[1]: libpod-conmon-b5408f6f9781256f7e17c166165c551d80db2b57448838ae7d996d2dd27b2e86.scope: Deactivated successfully.
Nov 24 20:06:21 compute-0 python3.9[215073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:21 compute-0 sudo[215069]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:21 compute-0 sudo[214579]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:21 compute-0 sudo[215088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:21 compute-0 sudo[215088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:21 compute-0 sudo[215088]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:21.372+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:21 compute-0 sudo[215136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:06:21 compute-0 sudo[215136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:21 compute-0 sudo[215136]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:21 compute-0 sudo[215184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:21 compute-0 sudo[215184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:21 compute-0 sudo[215184]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:21 compute-0 sudo[215233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:06:21 compute-0 sudo[215233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:21 compute-0 sudo[215308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xarxqzqbxjvddlegnzqcsikjgepftjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014780.6963544-775-202258294995361/AnsiballZ_copy.py'
Nov 24 20:06:21 compute-0 sudo[215308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:21 compute-0 ceph-mon[75677]: pgmap v680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:21 compute-0 python3.9[215310]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014780.6963544-775-202258294995361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:21 compute-0 sudo[215308]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:22.006150808 +0000 UTC m=+0.055606185 container create 93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:06:22 compute-0 systemd[1]: Started libpod-conmon-93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73.scope.
Nov 24 20:06:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:21.990086781 +0000 UTC m=+0.039542158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:22.09361552 +0000 UTC m=+0.143070897 container init 93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:22.106328926 +0000 UTC m=+0.155784283 container start 93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:22.108973628 +0000 UTC m=+0.158428985 container attach 93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:06:22 compute-0 bold_shannon[215397]: 167 167
Nov 24 20:06:22 compute-0 systemd[1]: libpod-93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73.scope: Deactivated successfully.
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:22.116042371 +0000 UTC m=+0.165497828 container died 93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0c7f4dda078e8d9e9e162065ca40182f8d315334ad4b5d8d829dc40db9444ba-merged.mount: Deactivated successfully.
Nov 24 20:06:22 compute-0 podman[215352]: 2025-11-24 20:06:22.166972958 +0000 UTC m=+0.216428345 container remove 93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:06:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:22.170+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:22 compute-0 systemd[1]: libpod-conmon-93e4050df7421f6804f1a0784fed6c2bdd7daca4381026ce8a24820fe51ddf73.scope: Deactivated successfully.
Nov 24 20:06:22 compute-0 podman[215494]: 2025-11-24 20:06:22.3902738 +0000 UTC m=+0.051766202 container create 5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_carson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:06:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:22.391+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:22 compute-0 systemd[1]: Started libpod-conmon-5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65.scope.
Nov 24 20:06:22 compute-0 podman[215494]: 2025-11-24 20:06:22.364614271 +0000 UTC m=+0.026106713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:06:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2530f06804537ad459a798a2c67bbcbebfca5fe91a818d5470e1943d0fe408a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2530f06804537ad459a798a2c67bbcbebfca5fe91a818d5470e1943d0fe408a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2530f06804537ad459a798a2c67bbcbebfca5fe91a818d5470e1943d0fe408a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2530f06804537ad459a798a2c67bbcbebfca5fe91a818d5470e1943d0fe408a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:06:22 compute-0 sudo[215561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybuyrxnywisozukpsxkxbxdzakcrptqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014782.0857203-775-144888172952786/AnsiballZ_stat.py'
Nov 24 20:06:22 compute-0 sudo[215561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:22 compute-0 podman[215494]: 2025-11-24 20:06:22.49601402 +0000 UTC m=+0.157506472 container init 5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:06:22 compute-0 podman[215494]: 2025-11-24 20:06:22.509887338 +0000 UTC m=+0.171379770 container start 5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_carson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:06:22 compute-0 podman[215494]: 2025-11-24 20:06:22.514142363 +0000 UTC m=+0.175634765 container attach 5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:06:22 compute-0 python3.9[215564]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:22 compute-0 sudo[215561]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:22 compute-0 sshd-session[215322]: Invalid user oracle from 27.79.44.141 port 58154
Nov 24 20:06:23 compute-0 sudo[215687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnqwwaxnmiovfmssxfzrqcxxjoakkvix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014782.0857203-775-144888172952786/AnsiballZ_copy.py'
Nov 24 20:06:23 compute-0 sudo[215687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:23.174+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:23 compute-0 python3.9[215690]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014782.0857203-775-144888172952786/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:23 compute-0 sudo[215687]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:23.433+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:23 compute-0 thirsty_carson[215557]: {
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "osd_id": 2,
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "type": "bluestore"
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:     },
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "osd_id": 1,
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "type": "bluestore"
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:     },
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "osd_id": 0,
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:         "type": "bluestore"
Nov 24 20:06:23 compute-0 thirsty_carson[215557]:     }
Nov 24 20:06:23 compute-0 thirsty_carson[215557]: }
Nov 24 20:06:23 compute-0 systemd[1]: libpod-5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65.scope: Deactivated successfully.
Nov 24 20:06:23 compute-0 podman[215494]: 2025-11-24 20:06:23.582684314 +0000 UTC m=+1.244176746 container died 5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_carson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 24 20:06:23 compute-0 systemd[1]: libpod-5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65.scope: Consumed 1.072s CPU time.
Nov 24 20:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2530f06804537ad459a798a2c67bbcbebfca5fe91a818d5470e1943d0fe408a8-merged.mount: Deactivated successfully.
Nov 24 20:06:23 compute-0 podman[215494]: 2025-11-24 20:06:23.660843523 +0000 UTC m=+1.322335925 container remove 5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:06:23 compute-0 systemd[1]: libpod-conmon-5d75e06739aeb1987e93496a27f241c8da027f607aa4dd4d34e84f6e003a4f65.scope: Deactivated successfully.
Nov 24 20:06:23 compute-0 sudo[215233]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:06:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:06:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:06:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:06:23 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev eaab80b2-07f1-4cb0-8bcd-d23a780d0042 does not exist
Nov 24 20:06:23 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1b08c2a9-b893-4846-a89a-3189c825a94d does not exist
Nov 24 20:06:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:23 compute-0 ceph-mon[75677]: pgmap v681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:06:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:06:23 compute-0 sudo[215829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:06:23 compute-0 sudo[215829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:23 compute-0 sudo[215829]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:23 compute-0 sshd-session[215322]: Connection closed by invalid user oracle 27.79.44.141 port 58154 [preauth]
Nov 24 20:06:23 compute-0 sudo[215879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:06:23 compute-0 sudo[215879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:06:23 compute-0 sudo[215879]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:23 compute-0 sudo[215927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmixnayfdjnpjjewdoqbqnfytijxkeuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014783.5288246-775-130046842750193/AnsiballZ_stat.py'
Nov 24 20:06:23 compute-0 sudo[215927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:24 compute-0 python3.9[215931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:24 compute-0 sudo[215927]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:24.162+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:06:24
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'vms']
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:06:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:24.453+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:24 compute-0 sudo[216052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uugaksvazljguezdqqxdcwzzemtakfhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014783.5288246-775-130046842750193/AnsiballZ_copy.py'
Nov 24 20:06:24 compute-0 sudo[216052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:24 compute-0 python3.9[216054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014783.5288246-775-130046842750193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:24 compute-0 sudo[216052]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:25.193+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:25.467+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:25 compute-0 python3.9[216204]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:06:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:25 compute-0 ceph-mon[75677]: pgmap v682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:26.231+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:26 compute-0 sshd-session[215736]: Invalid user rebecca from 27.79.44.141 port 58164
Nov 24 20:06:26 compute-0 sudo[216357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htimnniuxifkqcvrjzeckrclkamwubzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014785.803832-981-193360347622718/AnsiballZ_seboolean.py'
Nov 24 20:06:26 compute-0 sudo[216357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:26.447+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:26 compute-0 sshd-session[215736]: Connection closed by invalid user rebecca 27.79.44.141 port 58164 [preauth]
Nov 24 20:06:26 compute-0 python3.9[216359]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 20:06:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 902 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:26 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 902 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:27.200+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:27.458+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:27 compute-0 sudo[216357]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:27 compute-0 ceph-mon[75677]: pgmap v683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:28.152+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:28.457+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:28 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 20:06:28 compute-0 sudo[216513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbzfsudgexvfvmyxyeovzhrabslouazk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014788.102724-989-67795458606621/AnsiballZ_copy.py'
Nov 24 20:06:28 compute-0 sudo[216513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:28 compute-0 podman[216515]: 2025-11-24 20:06:28.665906304 +0000 UTC m=+0.134625518 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
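The health_status=healthy event above is produced by the container's configured healthcheck ('test': '/openstack/healthcheck', mounted read-only at /openstack). A minimal sketch of reproducing the probe from the host, assuming only the podman CLI and the ovn_controller container named in the log; the polling interval is an assumption, since the real cadence comes from the healthcheck timer unit:

  import subprocess
  import time

  def probe(container: str) -> bool:
      # `podman healthcheck run` executes the container's configured test
      # command and exits 0 when healthy, non-zero when unhealthy.
      result = subprocess.run(["podman", "healthcheck", "run", container],
                              capture_output=True)
      return result.returncode == 0

  while True:
      status = "healthy" if probe("ovn_controller") else "unhealthy"
      print(f"ovn_controller health_status={status}")
      time.sleep(30)  # assumed interval; the real one is set by the timer unit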
Nov 24 20:06:28 compute-0 python3.9[216516]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:28 compute-0 sudo[216513]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:28 compute-0 ceph-mon[75677]: pgmap v684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:29.178+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:29 compute-0 sudo[216691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdsmeqzmnafyxchdydpxlugquhlyiqpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014788.914478-989-129390742426926/AnsiballZ_copy.py'
Nov 24 20:06:29 compute-0 sudo[216691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:29.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:29 compute-0 python3.9[216693]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:29 compute-0 sudo[216691]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:30 compute-0 sudo[216843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npngqbkundmumisptyvrtcztqfykxuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014789.6750813-989-253985882807688/AnsiballZ_copy.py'
Nov 24 20:06:30 compute-0 sudo[216843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:30.148+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:30 compute-0 python3.9[216845]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:30 compute-0 sudo[216843]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:30.470+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:30 compute-0 sudo[216995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdlqhkuewmtnieygcmhlqduxlpkthrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014790.4854195-989-54599855994393/AnsiballZ_copy.py'
Nov 24 20:06:30 compute-0 sudo[216995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:30 compute-0 ceph-mon[75677]: pgmap v685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:31.129+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:31 compute-0 python3.9[216997]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:31 compute-0 sudo[216995]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:31.461+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 912 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
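The SLOW_OPS health check above aggregates the per-daemon get_health_metrics reports from osd.0 and osd.1. A hedged sketch of how an operator could enumerate the blocked ops from this host, assuming the ceph CLI and the OSD admin sockets are reachable (both OSDs are collocated here):

  import json
  import subprocess

  def dump_ops(osd_id: int) -> list:
      # dump_ops_in_flight returns JSON with an "ops" array; check=True
      # assumes the daemon's admin socket is present on this host.
      out = subprocess.run(
          ["ceph", "daemon", f"osd.{osd_id}", "dump_ops_in_flight"],
          capture_output=True, text=True, check=True,
      ).stdout
      return json.loads(out)["ops"]

  for osd_id in (0, 1):  # the daemons named in the health check
      print(f"osd.{osd_id}: {len(dump_ops(osd_id))} ops in flight")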
Nov 24 20:06:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:31 compute-0 sudo[217147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvuywrpmvpiaggrpplktznumafgtjtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014791.378191-989-80545029617573/AnsiballZ_copy.py'
Nov 24 20:06:31 compute-0 sudo[217147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:31 compute-0 python3.9[217149]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 912 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:32 compute-0 sudo[217147]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:32.122+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:32.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:32 compute-0 sudo[217299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reoewskprhhssyflmfhqcldmltewpece ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014792.2414458-1025-210652120652028/AnsiballZ_copy.py'
Nov 24 20:06:32 compute-0 sudo[217299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:32 compute-0 python3.9[217301]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:32 compute-0 sudo[217299]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:33 compute-0 ceph-mon[75677]: pgmap v686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:33.147+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:33 compute-0 sudo[217453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcwldxzokneockfrjqctdpsenllsoash ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014793.091453-1025-179159806925856/AnsiballZ_copy.py'
Nov 24 20:06:33 compute-0 sudo[217453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:33.502+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:33 compute-0 python3.9[217455]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:33 compute-0 sudo[217453]: pam_unix(sudo:session): session closed for user root
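Each ansible.legacy.copy task in this sequence runs with remote_src=True, so it reduces to a copy on the target host plus ownership and mode enforcement (the real module writes through a temp file and an atomic rename, which this sketch skips). A minimal Python equivalent using the qemu server-key parameters from the task above; install_cert is a hypothetical helper name:

  import os
  import shutil

  def install_cert(src: str, dest: str, owner: str, group: str, mode: int) -> None:
      shutil.copy2(src, dest)                      # copy contents and metadata
      shutil.chown(dest, user=owner, group=group)  # owner=root, group=qemu here
      os.chmod(dest, mode)

  install_cert(
      src="/var/lib/openstack/certs/libvirt/default/tls.key",
      dest="/etc/pki/qemu/server-key.pem",
      owner="root", group="qemu", mode=0o640,
  )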
Nov 24 20:06:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:34.120+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:34 compute-0 sudo[217605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trzlqqsfsrzgoozcufgnzkevojkihjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014793.8435838-1025-137558242426169/AnsiballZ_copy.py'
Nov 24 20:06:34 compute-0 sudo[217605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:34 compute-0 python3.9[217607]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 sudo[217605]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
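The pg_autoscaler figures above are internally consistent: every logged pg target equals usage_fraction * bias * 300, where 300 is presumably mon_target_pg_per_osd (100) times three OSDs; only the product is visible in the log, so that factorization is an assumption. The quantized value then stays at the current power-of-two pg_num, since none of the targets differs by the 3x threshold the autoscaler acts on. A quick consistency check against three of the pools:

  pools = [
      # (name, usage_fraction, bias, logged pg target)
      (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
      ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
      (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
  ]
  for name, usage, bias, logged in pools:
      computed = usage * bias * 300
      assert abs(computed - logged) < 1e-12, name
      print(f"{name}: computed pg target {computed:.12g} matches the log")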
Nov 24 20:06:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:34.505+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:34 compute-0 sudo[217757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzbtaukeyoxxnphhetmivihqahmnvan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014794.5940506-1025-225076060224396/AnsiballZ_copy.py'
Nov 24 20:06:34 compute-0 sudo[217757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:35 compute-0 ceph-mon[75677]: pgmap v687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:35 compute-0 sshd-session[217302]: Connection closed by authenticating user sshd 27.79.44.141 port 42052 [preauth]
Nov 24 20:06:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:35.137+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:35 compute-0 python3.9[217759]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:35 compute-0 sudo[217757]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:35.471+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:35 compute-0 sudo[217909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawvymtiykughaqkjletqqtfgqwfkklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014795.4229095-1025-52574316740794/AnsiballZ_copy.py'
Nov 24 20:06:35 compute-0 sudo[217909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:36 compute-0 python3.9[217911]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:36 compute-0 sudo[217909]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:36.128+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:36.442+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.676139) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014796676226, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3391, "num_deletes": 501, "total_data_size": 4022520, "memory_usage": 4098920, "flush_reason": "Manual Compaction"}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 24 20:06:36 compute-0 sudo[218061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txcvzoiawaocosrsbcbbbzzrxuocdfpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014796.3095853-1061-231120825730937/AnsiballZ_systemd.py'
Nov 24 20:06:36 compute-0 sudo[218061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014796781849, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2470424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14272, "largest_seqno": 17662, "table_properties": {"data_size": 2459475, "index_size": 5459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4421, "raw_key_size": 38276, "raw_average_key_size": 21, "raw_value_size": 2429925, "raw_average_value_size": 1385, "num_data_blocks": 241, "num_entries": 1754, "num_filter_entries": 1754, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014567, "oldest_key_time": 1764014567, "file_creation_time": 1764014796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 105745 microseconds, and 11114 cpu microseconds.
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.781901) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2470424 bytes OK
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.781917) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.803937) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.803978) EVENT_LOG_v1 {"time_micros": 1764014796803968, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.804004) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 4006730, prev total WAL file size 4006730, number of live WAL files 2.
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.805762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2412KB)], [32(8238KB)]
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014796805858, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10906711, "oldest_snapshot_seqno": -1}
Nov 24 20:06:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5998 keys, 7527048 bytes, temperature: kUnknown
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014796864392, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7527048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7489689, "index_size": 21249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 152539, "raw_average_key_size": 25, "raw_value_size": 7383130, "raw_average_value_size": 1230, "num_data_blocks": 873, "num_entries": 5998, "num_filter_entries": 5998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.864744) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7527048 bytes
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.867173) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.1 rd, 128.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.0) OK, records in: 6904, records dropped: 906 output_compression: NoCompression
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.867201) EVENT_LOG_v1 {"time_micros": 1764014796867188, "job": 14, "event": "compaction_finished", "compaction_time_micros": 58622, "compaction_time_cpu_micros": 35263, "output_level": 6, "num_output_files": 1, "total_output_size": 7527048, "num_input_records": 6904, "num_output_records": 5998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014796868033, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014796870983, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.805681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.871158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.871166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.871169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.871172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:36.871175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
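The JOB 14 amplification figures above follow directly from the byte counts in the surrounding EVENT_LOG entries: the flushed L0 table #34 (2470424 bytes) plus L6 table #32 make up input_data_size 10906711, and the compaction wrote table #35 at 7527048 bytes. A quick check that reproduces the logged ratios (the authoritative definitions live in RocksDB's compaction_job.cc):

  l0_in    = 2470424   # table #34, the newly flushed L0 input
  total_in = 10906711  # input_data_size from the compaction_started event
  out      = 7527048   # total_output_size from compaction_finished

  write_amplify = out / l0_in                    # 3.05 -> logged as 3.0
  read_write_amplify = (total_in + out) / l0_in  # 7.46 -> logged as 7.5
  print(f"write-amplify({write_amplify:.1f}) read-write-amplify({read_write_amplify:.1f})")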
Nov 24 20:06:37 compute-0 python3.9[218063]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:06:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:37 compute-0 ceph-mon[75677]: pgmap v688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:37 compute-0 systemd[1]: Reloading.
Nov 24 20:06:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:37.092+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:37 compute-0 systemd-sysv-generator[218095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:06:37 compute-0 systemd-rc-local-generator[218091]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:06:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:37.452+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:37 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 20:06:37 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 20:06:37 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 20:06:37 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 20:06:37 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 24 20:06:37 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 24 20:06:37 compute-0 sudo[218061]: pam_unix(sudo:session): session closed for user root
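The ansible.builtin.systemd task above (daemon_reload=True, state=restarted) accounts for the "Reloading." line, the generator chatter, and the socket/service start sequence that follows: a daemon reload first, then a unit restart that pulls in the virtlogd sockets. A rough host-side equivalent, assuming systemctl; the module itself talks to systemd over D-Bus rather than shelling out:

  import subprocess

  def restart_with_reload(unit: str) -> None:
      subprocess.run(["systemctl", "daemon-reload"], check=True)
      subprocess.run(["systemctl", "restart", unit], check=True)

  restart_with_reload("virtlogd.service")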
Nov 24 20:06:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 917 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 917 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:38.056+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:38 compute-0 sudo[218255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smubjergfadxzwbqaadyxoozuxihishl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014797.866779-1061-276241292285385/AnsiballZ_systemd.py'
Nov 24 20:06:38 compute-0 sudo[218255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:38.488+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:38 compute-0 python3.9[218257]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:06:38 compute-0 systemd[1]: Reloading.
Nov 24 20:06:38 compute-0 systemd-rc-local-generator[218279]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:06:38 compute-0 systemd-sysv-generator[218286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:06:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:38 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 20:06:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 20:06:38 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 20:06:38 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 20:06:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 20:06:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 20:06:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 20:06:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 20:06:39 compute-0 sudo[218255]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:39.043+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:39 compute-0 ceph-mon[75677]: pgmap v689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:39 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 20:06:39 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 20:06:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:39.484+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:39 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 20:06:39 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 20:06:39 compute-0 sudo[218475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylwsyobsxibwiogfpgcxdwhaqozxwpcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014799.2404568-1061-207011988613848/AnsiballZ_systemd.py'
Nov 24 20:06:39 compute-0 sudo[218475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:39 compute-0 python3.9[218480]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:06:39 compute-0 systemd[1]: Reloading.
Nov 24 20:06:40 compute-0 systemd-sysv-generator[218511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:06:40 compute-0 systemd-rc-local-generator[218506]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:06:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:40.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:40 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 20:06:40 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 20:06:40 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 20:06:40 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 20:06:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 20:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:06:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 20:06:40 compute-0 sudo[218475]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:40.531+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:40 compute-0 setroubleshoot[218319]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ae9f3853-591d-49c0-82d0-385c83222d36
Nov 24 20:06:40 compute-0 setroubleshoot[218319]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 24 20:06:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:41 compute-0 sudo[218691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khrtacrasqdinjcmwiqbwkqhcckxjlhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014800.6378217-1061-199659074552685/AnsiballZ_systemd.py'
Nov 24 20:06:41 compute-0 sudo[218691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:41 compute-0 ceph-mon[75677]: pgmap v690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:41.103+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:41 compute-0 python3.9[218693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:06:41 compute-0 systemd[1]: Reloading.
Nov 24 20:06:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:41.492+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:41 compute-0 systemd-rc-local-generator[218718]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:06:41 compute-0 systemd-sysv-generator[218723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:06:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:06:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3614 writes, 17K keys, 3614 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3614 writes, 3614 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1836 writes, 9053 keys, 1836 commit groups, 1.0 writes per commit group, ingest: 9.98 MB, 0.02 MB/s
                                           Interval WAL: 1836 writes, 1836 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.1      0.28              0.06         7    0.040       0      0       0.0       0.0
                                             L6      1/0    7.18 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    131.4    108.7      0.40              0.19         6    0.067     31K   3195       0.0       0.0
                                            Sum      1/0    7.18 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     77.4     87.9      0.68              0.26        13    0.053     31K   3195       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    101.4     99.7      0.44              0.19        10    0.044     27K   2905       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    131.4    108.7      0.40              0.19         6    0.067     31K   3195       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     58.4      0.28              0.06         6    0.046       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.7 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 308.00 MB usage: 1.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(106,1.48 MB,0.482089%) FilterBlock(14,98.80 KB,0.0313251%) IndexBlock(14,169.17 KB,0.0536386%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 20:06:41 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 20:06:41 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 20:06:41 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 20:06:41 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 20:06:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 20:06:41 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 20:06:41 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 20:06:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 20:06:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 20:06:41 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 20:06:41 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 20:06:41 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 20:06:41 compute-0 sudo[218691]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:42.123+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:42.445+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:42 compute-0 sudo[218906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haymwmprbnuydrcinvnemiicpkgrnmnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014802.151114-1061-54396553504609/AnsiballZ_systemd.py'
Nov 24 20:06:42 compute-0 sudo[218906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:42 compute-0 podman[218908]: 2025-11-24 20:06:42.646689834 +0000 UTC m=+0.092598432 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 20:06:42 compute-0 python3.9[218909]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:06:42 compute-0 systemd[1]: Reloading.
Nov 24 20:06:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:42 compute-0 systemd-rc-local-generator[218955]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:06:42 compute-0 systemd-sysv-generator[218958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:06:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:43 compute-0 ceph-mon[75677]: pgmap v691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.113485) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014803113535, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 341, "num_deletes": 251, "total_data_size": 141783, "memory_usage": 149848, "flush_reason": "Manual Compaction"}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014803116849, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 140043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17663, "largest_seqno": 18003, "table_properties": {"data_size": 137916, "index_size": 291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5638, "raw_average_key_size": 18, "raw_value_size": 133663, "raw_average_value_size": 447, "num_data_blocks": 13, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014797, "oldest_key_time": 1764014797, "file_creation_time": 1764014803, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3435 microseconds, and 1518 cpu microseconds.
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.116920) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 140043 bytes OK
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.116941) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.118529) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.118549) EVENT_LOG_v1 {"time_micros": 1764014803118543, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.118568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 139409, prev total WAL file size 139409, number of live WAL files 2.
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.119031) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(136KB)], [35(7350KB)]
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014803119070, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 7667091, "oldest_snapshot_seqno": -1}
Nov 24 20:06:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:43.158+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 5788 keys, 6235291 bytes, temperature: kUnknown
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014803166530, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 6235291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6200504, "index_size": 19189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 148958, "raw_average_key_size": 25, "raw_value_size": 6098709, "raw_average_value_size": 1053, "num_data_blocks": 779, "num_entries": 5788, "num_filter_entries": 5788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014803, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.167067) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 6235291 bytes
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.168646) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.5 rd, 130.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.2 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(99.3) write-amplify(44.5) OK, records in: 6297, records dropped: 509 output_compression: NoCompression
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.168678) EVENT_LOG_v1 {"time_micros": 1764014803168664, "job": 16, "event": "compaction_finished", "compaction_time_micros": 47757, "compaction_time_cpu_micros": 29024, "output_level": 6, "num_output_files": 1, "total_output_size": 6235291, "num_input_records": 6297, "num_output_records": 5788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014803168902, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014803171411, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.118955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.171536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.171542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.171546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.171549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:43 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:06:43.171566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:06:43 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 20:06:43 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 20:06:43 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 20:06:43 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 20:06:43 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 20:06:43 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 20:06:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 20:06:43 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 20:06:43 compute-0 sudo[218906]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:43.479+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:44 compute-0 sudo[219137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafnjkkupqunhjeunckbjkskiimafyrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014803.656677-1098-33445396017586/AnsiballZ_file.py'
Nov 24 20:06:44 compute-0 sudo[219137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:44.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:44 compute-0 python3.9[219139]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:44 compute-0 sudo[219137]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:44.432+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:44 compute-0 sudo[219289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfrimlsciwximfpohepdmnrinsjvbrye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014804.418904-1106-31104869920842/AnsiballZ_find.py'
Nov 24 20:06:44 compute-0 sudo[219289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:44 compute-0 python3.9[219291]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 20:06:44 compute-0 sudo[219289]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:45 compute-0 ceph-mon[75677]: pgmap v692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:45.124+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:45.473+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:45 compute-0 sudo[219441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqthhqwaldrwprkzvhyewctcbphnuozi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014805.199063-1114-179022907749012/AnsiballZ_command.py'
Nov 24 20:06:45 compute-0 sudo[219441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:45 compute-0 python3.9[219443]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:06:45 compute-0 sudo[219441]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:46.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:46.512+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 922 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:46 compute-0 python3.9[219597]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 20:06:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 922 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:47 compute-0 ceph-mon[75677]: pgmap v693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:47.126+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:47.544+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:47 compute-0 python3.9[219747]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:48.133+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:48 compute-0 python3.9[219868]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014807.0795128-1133-5200125096606/.source.xml follow=False _original_basename=secret.xml.j2 checksum=09a1257f1b3be3127f073f62bed15b684e092065 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:48.565+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:48 compute-0 sudo[220019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwrcuhnsbzwdbknhqbipbgduavubcbtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014808.419996-1148-146918607057751/AnsiballZ_command.py'
Nov 24 20:06:48 compute-0 sudo[220019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:48 compute-0 python3.9[220021]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 05e060a3-406b-57f0-89d2-ec35f5b09305
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:06:49 compute-0 polkitd[44045]: Registered Authentication Agent for unix-process:220023:352169 (system bus name :1.2920 [pkttyagent --process 220023 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 20:06:49 compute-0 polkitd[44045]: Unregistered Authentication Agent for unix-process:220023:352169 (system bus name :1.2920, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 20:06:49 compute-0 polkitd[44045]: Registered Authentication Agent for unix-process:220022:352169 (system bus name :1.2921 [pkttyagent --process 220022 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 20:06:49 compute-0 polkitd[44045]: Unregistered Authentication Agent for unix-process:220022:352169 (system bus name :1.2921, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 20:06:49 compute-0 sudo[220019]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:49.119+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:49 compute-0 ceph-mon[75677]: pgmap v694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:49.604+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:49 compute-0 python3.9[220183]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:50.151+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:50 compute-0 sudo[220334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwyjmzstgrdaqyciheiyoptdfsciurzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014810.043336-1164-169709322680149/AnsiballZ_command.py'
Nov 24 20:06:50 compute-0 sudo[220334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:50 compute-0 sudo[220334]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:50.624+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:50 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 20:06:50 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 20:06:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:51 compute-0 ceph-mon[75677]: pgmap v695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:51.171+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:51 compute-0 sudo[220487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxuhxvkiwakplybttcdmnrfnlsfwfbih ; FSID=05e060a3-406b-57f0-89d2-ec35f5b09305 KEY=AQD6tSRpAAAAABAAQR/Xi2jttOzDX+chNv0thg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014810.8225813-1172-257894372286059/AnsiballZ_command.py'
Nov 24 20:06:51 compute-0 sudo[220487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:51 compute-0 polkitd[44045]: Registered Authentication Agent for unix-process:220490:352411 (system bus name :1.2924 [pkttyagent --process 220490 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 20:06:51 compute-0 polkitd[44045]: Unregistered Authentication Agent for unix-process:220490:352411 (system bus name :1.2924, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 20:06:51 compute-0 sudo[220487]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:51.630+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:51 compute-0 sshd-session[219869]: Connection closed by authenticating user root 80.94.95.115 port 15716 [preauth]
Nov 24 20:06:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
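Each OSD re-reports its stuck operations once per second through get_health_metrics, and the monitor folds them into the single SLOW_OPS health check above (19 ops on osd.1 against default.rgw.log plus 1 on osd.0 against vms gives the 20 reported). A minimal sketch, assuming the journal has been exported to plain text, that tallies the latest per-OSD counts:

    import re

    SLOW_RE = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")

    def slow_ops_per_osd(lines):
        """Latest slow-op count seen for each OSD."""
        counts = {}
        for line in lines:
            m = SLOW_RE.search(line)
            if m:
                counts[m.group(1)] = int(m.group(2))
        return counts

    with open("journal.txt") as fh:  # assumed export of this journal
        print(slow_ops_per_osd(fh))  # here: {'osd.1': 19, 'osd.0': 1}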
Nov 24 20:06:52 compute-0 sudo[220645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hndtxavpocsnuzdalfmvbzyogfesnwru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014811.7110047-1180-130987981119156/AnsiballZ_copy.py'
Nov 24 20:06:52 compute-0 sudo[220645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:52.162+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:52 compute-0 python3.9[220647]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:52 compute-0 sudo[220645]: pam_unix(sudo:session): session closed for user root
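The task logged just above copies the staged Ceph client configuration into place with remote_src=True, so both paths are local to compute-0. Roughly the same effect in plain Python, run as root and ignoring the copy module's atomic-write, backup and SELinux handling (a sketch, not the module's implementation):

    import os
    import shutil

    SRC = "/var/lib/openstack/config/ceph/ceph.conf"
    DST = "/etc/ceph/ceph.conf"

    shutil.copyfile(SRC, DST)  # remote_src=True: a host-local copy
    os.chmod(DST, 0o644)       # mode=0644
    os.chown(DST, 0, 0)        # owner=root, group=root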
Nov 24 20:06:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:52.642+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:52 compute-0 sudo[220797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjcosashffvvjnlgxktjhibwdjhjmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014812.5217066-1188-268696329578619/AnsiballZ_stat.py'
Nov 24 20:06:52 compute-0 sudo[220797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:53 compute-0 python3.9[220799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:53 compute-0 sudo[220797]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:53 compute-0 ceph-mon[75677]: pgmap v696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:53.203+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:53 compute-0 sudo[220920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-velgbjzznzfwpdwbhtakyhmlgqwuztfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014812.5217066-1188-268696329578619/AnsiballZ_copy.py'
Nov 24 20:06:53 compute-0 sudo[220920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:53.635+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:53 compute-0 python3.9[220922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014812.5217066-1188-268696329578619/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:53 compute-0 sudo[220920]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:54.158+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:06:54 compute-0 sudo[221072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxeqbwkxaqghoymxnbdcvkbmelunaspy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014814.145508-1204-73420923687644/AnsiballZ_file.py'
Nov 24 20:06:54 compute-0 sudo[221072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:54.595+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:54 compute-0 python3.9[221074]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:54 compute-0 sudo[221072]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:55.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:55 compute-0 ceph-mon[75677]: pgmap v697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:55 compute-0 sudo[221224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbgevlcigvwawzpzckatorlevuexsap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014814.9533641-1212-119295775678282/AnsiballZ_stat.py'
Nov 24 20:06:55 compute-0 sudo[221224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:55 compute-0 python3.9[221226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:55 compute-0 sudo[221224]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:55.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:55 compute-0 sudo[221302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwrvjcqauelwnjlievmuoaxveuorydgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014814.9533641-1212-119295775678282/AnsiballZ_file.py'
Nov 24 20:06:55 compute-0 sudo[221302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:56 compute-0 python3.9[221304]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:56 compute-0 sudo[221302]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:56.157+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:56 compute-0 sudo[221454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhmyetixcrgxqdnbmaymyzwojqthuvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014816.2045655-1224-124076240390079/AnsiballZ_stat.py'
Nov 24 20:06:56 compute-0 sudo[221454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:56.604+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:06:56 compute-0 python3.9[221456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:56 compute-0 sudo[221454]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:57 compute-0 sudo[221532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbllszravypfdxwpvnubnydjxayiirc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014816.2045655-1224-124076240390079/AnsiballZ_file.py'
Nov 24 20:06:57 compute-0 sudo[221532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:57.190+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 937 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:57 compute-0 ceph-mon[75677]: pgmap v698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:57 compute-0 python3.9[221534]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kn5kn7sc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:57 compute-0 sudo[221532]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:57.649+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:57 compute-0 sudo[221684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhjsrezorhwvunzicsucywhqrxiifxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014817.5104358-1236-12773159063941/AnsiballZ_stat.py'
Nov 24 20:06:57 compute-0 sudo[221684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:58 compute-0 python3.9[221686]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:06:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:58.202+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 937 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:06:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:58 compute-0 sudo[221684]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:58 compute-0 sudo[221762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mokvqoifhnmlllyclqyukoueekrjdwbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014817.5104358-1236-12773159063941/AnsiballZ_file.py'
Nov 24 20:06:58 compute-0 sudo[221762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:58.613+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:58 compute-0 python3.9[221764]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:06:58 compute-0 sudo[221762]: pam_unix(sudo:session): session closed for user root
Nov 24 20:06:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:58 compute-0 podman[221765]: 2025-11-24 20:06:58.918179771 +0000 UTC m=+0.140672192 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
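The podman health_status event above comes from the container's configured test (/openstack/healthcheck, per the config_data). The same check can be triggered on demand; a minimal sketch, with the container name taken from the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured test and
    # exits 0 when the container is healthy, non-zero otherwise.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
                            capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else "unhealthy")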
Nov 24 20:06:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:06:59.204+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:06:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:06:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:59 compute-0 ceph-mon[75677]: pgmap v699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:06:59 compute-0 sudo[221940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwgiwspefyfnujzlxbtwbcwqyibbsyhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014819.0127413-1249-250869122411701/AnsiballZ_command.py'
Nov 24 20:06:59 compute-0 sudo[221940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:06:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:06:59.565+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:06:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:06:59 compute-0 python3.9[221942]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:06:59 compute-0 sudo[221940]: pam_unix(sudo:session): session closed for user root
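nft -j, as invoked by the task above, prints the ruleset as JSON: one top-level object whose "nftables" array holds the metainfo, table, chain and rule entries. A minimal sketch that lists the table names from that output (must run as root):

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    ruleset = json.loads(out)
    tables = [e["table"]["name"] for e in ruleset["nftables"] if "table" in e]
    print(tables)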
Nov 24 20:07:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:00.248+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:00 compute-0 sudo[222093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rauujxpbaokaxmyigzzcrmrptceetoyx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014819.834406-1257-31624956430079/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 20:07:00 compute-0 sudo[222093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:00.587+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:00 compute-0 python3[222095]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 20:07:00 compute-0 sudo[222093]: pam_unix(sudo:session): session closed for user root
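edpm_nftables_from_files is a custom edpm-ansible module; the log records only its src argument, not its return value. A hypothetical sketch of the directory walk it implies, aggregating rules from the YAML files staged in the steps above (PyYAML and a flat list-of-rules file layout are assumptions):

    import glob
    import yaml  # PyYAML, assumed available

    def rules_from_files(src="/var/lib/edpm-config/firewall"):
        # Hypothetical reimplementation: concatenate each file's rule list.
        rules = []
        for path in sorted(glob.glob(f"{src}/*.yaml")):
            with open(path) as fh:
                rules.extend(yaml.safe_load(fh) or [])
        return rules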
Nov 24 20:07:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:01.234+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:01 compute-0 ceph-mon[75677]: pgmap v700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:01 compute-0 sudo[222245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjrmncgjmmbkdsbmkpiluwavgzfdxdth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014820.87835-1265-92642605993190/AnsiballZ_stat.py'
Nov 24 20:07:01 compute-0 sudo[222245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:01 compute-0 python3.9[222247]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:01 compute-0 sudo[222245]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:01.623+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:01 compute-0 sudo[222323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmlsxgqqjohuffasqnzgbvmqncaoyhxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014820.87835-1265-92642605993190/AnsiballZ_file.py'
Nov 24 20:07:01 compute-0 sudo[222323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:01 compute-0 python3.9[222325]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:02 compute-0 sudo[222323]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:02.261+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:02.603+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:02 compute-0 sudo[222475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmszjbqiepqiuvbctcfvicptbovtxyay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014822.228403-1277-222277768652610/AnsiballZ_stat.py'
Nov 24 20:07:02 compute-0 sudo[222475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:02 compute-0 python3.9[222477]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:02 compute-0 sudo[222475]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:03.250+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:03 compute-0 sudo[222553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbtohpndjvxjbgqcrmoqanqmhawijbtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014822.228403-1277-222277768652610/AnsiballZ_file.py'
Nov 24 20:07:03 compute-0 ceph-mon[75677]: pgmap v701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:03 compute-0 sudo[222553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:03 compute-0 python3.9[222555]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:03 compute-0 sudo[222553]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:03.563+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:04 compute-0 sudo[222705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljmuyacopytedlhxnkzscqlflccwsswz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014823.7035074-1289-92833207788897/AnsiballZ_stat.py'
Nov 24 20:07:04 compute-0 sudo[222705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:04.273+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:04 compute-0 python3.9[222707]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:04 compute-0 sudo[222705]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:04.598+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:04 compute-0 sudo[222783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiyavdzwsllyjpnjfjupmdikhahzqlrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014823.7035074-1289-92833207788897/AnsiballZ_file.py'
Nov 24 20:07:04 compute-0 sudo[222783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:04 compute-0 python3.9[222785]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:04 compute-0 sudo[222783]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:05 compute-0 ceph-mon[75677]: pgmap v702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:05.322+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:05 compute-0 sudo[222935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gslxsfxbhrntpjprbodppitvijhwtimd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014825.11323-1301-272123798374170/AnsiballZ_stat.py'
Nov 24 20:07:05 compute-0 sudo[222935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:05.555+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:05 compute-0 python3.9[222937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:05 compute-0 sudo[222935]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:06 compute-0 sudo[223013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzmqmgddrfjjiwchszkpczkonszktkmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014825.11323-1301-272123798374170/AnsiballZ_file.py'
Nov 24 20:07:06 compute-0 sudo[223013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:06 compute-0 python3.9[223015]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:06 compute-0 sudo[223013]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:06.353+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:06.586+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.692401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014826692428, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 540, "num_deletes": 259, "total_data_size": 375829, "memory_usage": 387976, "flush_reason": "Manual Compaction"}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014826696683, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 370140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18004, "largest_seqno": 18543, "table_properties": {"data_size": 367317, "index_size": 731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7337, "raw_average_key_size": 18, "raw_value_size": 361286, "raw_average_value_size": 912, "num_data_blocks": 33, "num_entries": 396, "num_filter_entries": 396, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014804, "oldest_key_time": 1764014804, "file_creation_time": 1764014826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4314 microseconds, and 1808 cpu microseconds.
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.696715) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 370140 bytes OK
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.696730) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.697834) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.697847) EVENT_LOG_v1 {"time_micros": 1764014826697843, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.697861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 372630, prev total WAL file size 372630, number of live WAL files 2.
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.698286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323534' seq:72057594037927935, type:22 .. '6C6F676D00353039' seq:0, type:0; will stop at (end)
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(361KB)], [38(6089KB)]
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014826698313, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 6605431, "oldest_snapshot_seqno": -1}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5659 keys, 6376820 bytes, temperature: kUnknown
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014826775831, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 6376820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6342575, "index_size": 18967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 147775, "raw_average_key_size": 26, "raw_value_size": 6242716, "raw_average_value_size": 1103, "num_data_blocks": 760, "num_entries": 5659, "num_filter_entries": 5659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.776146) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 6376820 bytes
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.779121) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.1 rd, 82.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.9 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(35.1) write-amplify(17.2) OK, records in: 6184, records dropped: 525 output_compression: NoCompression
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.779157) EVENT_LOG_v1 {"time_micros": 1764014826779141, "job": 18, "event": "compaction_finished", "compaction_time_micros": 77604, "compaction_time_cpu_micros": 28018, "output_level": 6, "num_output_files": 1, "total_output_size": 6376820, "num_input_records": 6184, "num_output_records": 5659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014826779429, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014826781884, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.698215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.781978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.781984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.781985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.781987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:07:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:07:06.781988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:07:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:06 compute-0 sudo[223165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttwfovhfldgbkxseyrufjbuqlluyquza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014826.476948-1313-226996422194597/AnsiballZ_stat.py'
Nov 24 20:07:06 compute-0 sudo[223165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:07 compute-0 python3.9[223167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:07 compute-0 sudo[223165]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:07 compute-0 ceph-mon[75677]: pgmap v703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:07.383+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:07 compute-0 sudo[223290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trtsxqwljkxpvglqbahdkkiolwyalyrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014826.476948-1313-226996422194597/AnsiballZ_copy.py'
Nov 24 20:07:07 compute-0 sudo[223290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:07.604+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:07 compute-0 python3.9[223292]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764014826.476948-1313-226996422194597/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:07 compute-0 sudo[223290]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:08 compute-0 sudo[223442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvwudqzbqfphqvmexovitzkzjzzexacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014828.011818-1328-202453971508636/AnsiballZ_file.py'
Nov 24 20:07:08 compute-0 sudo[223442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:08.427+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:08.573+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:08 compute-0 python3.9[223444]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:08 compute-0 sudo[223442]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:09 compute-0 sudo[223594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wabdlfxixxnfrycwmuxsoewiuxgolucy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014828.8755085-1336-215344114477796/AnsiballZ_command.py'
Nov 24 20:07:09 compute-0 sudo[223594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:07:09.358 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:07:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:07:09.358 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:07:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:07:09.358 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:07:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:09.441+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:09 compute-0 python3.9[223596]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:07:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:09.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:09 compute-0 sudo[223594]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:09 compute-0 ceph-mon[75677]: pgmap v704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:10 compute-0 sudo[223749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfsmnwqbfrfrjeziipqzvlgpxeiifhdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014829.7387867-1344-248459919777638/AnsiballZ_blockinfile.py'
Nov 24 20:07:10 compute-0 sudo[223749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:10 compute-0 python3.9[223751]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:10.476+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:10 compute-0 sudo[223749]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:10.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:11 compute-0 sudo[223901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrgxtkzeqvfcejtjtlgfffdyrnytwkej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014830.8907557-1353-111641312181803/AnsiballZ_command.py'
Nov 24 20:07:11 compute-0 sudo[223901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:11 compute-0 python3.9[223903]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:07:11 compute-0 sudo[223901]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:11.505+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:11.581+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 947 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:11 compute-0 ceph-mon[75677]: pgmap v705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:11 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 947 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:12 compute-0 sudo[224054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kisucfetvdzjyhdllolsindbrtcsznxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014831.6836262-1361-278279288280230/AnsiballZ_stat.py'
Nov 24 20:07:12 compute-0 sudo[224054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:12 compute-0 python3.9[224056]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:07:12 compute-0 sudo[224054]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:12.541+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:12.564+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:12 compute-0 podman[224158]: 2025-11-24 20:07:12.864012301 +0000 UTC m=+0.091109332 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 20:07:12 compute-0 sudo[224227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evgsmizznvysvytckoxuwxugqjrmeolf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014832.5667772-1369-75306072771177/AnsiballZ_command.py'
Nov 24 20:07:12 compute-0 sudo[224227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:13 compute-0 python3.9[224229]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:07:13 compute-0 sudo[224227]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:13.536+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:13.563+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:13 compute-0 sudo[224382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkyuitzpcmsybcqtwiajzlyqrtrurwlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014833.4307442-1377-126568734018859/AnsiballZ_file.py'
Nov 24 20:07:13 compute-0 sudo[224382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:13 compute-0 ceph-mon[75677]: pgmap v706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:13 compute-0 python3.9[224384]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:13 compute-0 sudo[224382]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:14.533+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:14 compute-0 sudo[224534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtdloovuafilmvwfbthdyghcoaehrwmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014834.2216277-1385-176416978097346/AnsiballZ_stat.py'
Nov 24 20:07:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:14 compute-0 sudo[224534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:14.570+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:14 compute-0 python3.9[224536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:14 compute-0 sudo[224534]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:15 compute-0 sudo[224657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juymuvobdxezaloiaanrlyimsfvazvts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014834.2216277-1385-176416978097346/AnsiballZ_copy.py'
Nov 24 20:07:15 compute-0 sudo[224657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:15 compute-0 python3.9[224659]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014834.2216277-1385-176416978097346/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:15 compute-0 sudo[224657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:15.526+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:15.618+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:15 compute-0 ceph-mon[75677]: pgmap v707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:16 compute-0 sudo[224809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cagwgixyxthcmhlfljjztzvwhfjwnggh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014835.7120056-1400-219904466669479/AnsiballZ_stat.py'
Nov 24 20:07:16 compute-0 sudo[224809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:16 compute-0 python3.9[224811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:16 compute-0 sudo[224809]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:16.548+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:16.631+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:16 compute-0 sudo[224932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuuerjeguugrerjpqdzoocsxidikiwfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014835.7120056-1400-219904466669479/AnsiballZ_copy.py'
Nov 24 20:07:16 compute-0 sudo[224932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:16 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:17 compute-0 python3.9[224934]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014835.7120056-1400-219904466669479/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:17 compute-0 sudo[224932]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:17.536+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:17.599+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:17 compute-0 sudo[225084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucxnrkwffjxrkcorigyoclxhpusavbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014837.340276-1415-202297673815687/AnsiballZ_stat.py'
Nov 24 20:07:17 compute-0 sudo[225084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:17 compute-0 ceph-mon[75677]: pgmap v708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:17 compute-0 python3.9[225086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:17 compute-0 sudo[225084]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:18 compute-0 sudo[225207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyaqsnkignkqqfhxfxnvoktgpdwctcxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014837.340276-1415-202297673815687/AnsiballZ_copy.py'
Nov 24 20:07:18 compute-0 sudo[225207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:18.533+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:18.639+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:18 compute-0 python3.9[225209]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014837.340276-1415-202297673815687/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:18 compute-0 sudo[225207]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:19.490+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:19.619+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:19 compute-0 sudo[225359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqedekgigeqqqmcrmjhpjwnopqisctna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014838.8969598-1430-79901479682505/AnsiballZ_systemd.py'
Nov 24 20:07:19 compute-0 sudo[225359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:19 compute-0 ceph-mon[75677]: pgmap v709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:19 compute-0 python3.9[225361]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:07:19 compute-0 systemd[1]: Reloading.
Nov 24 20:07:20 compute-0 systemd-rc-local-generator[225386]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:07:20 compute-0 systemd-sysv-generator[225390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:07:20 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 20:07:20 compute-0 sudo[225359]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:20.511+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:20.615+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:20 compute-0 ceph-mon[75677]: pgmap v710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:21 compute-0 sudo[225550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthnztpwefzdictddixtpsyakpgauymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014840.6856403-1438-49818281268053/AnsiballZ_systemd.py'
Nov 24 20:07:21 compute-0 sudo[225550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:21 compute-0 python3.9[225552]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 20:07:21 compute-0 systemd[1]: Reloading.
Nov 24 20:07:21 compute-0 systemd-rc-local-generator[225573]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:07:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:21.522+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:21 compute-0 systemd-sysv-generator[225578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:07:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:21.636+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:21 compute-0 systemd[1]: Reloading.
Nov 24 20:07:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:21 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:21 compute-0 systemd-rc-local-generator[225611]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:07:21 compute-0 systemd-sysv-generator[225616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:07:22 compute-0 sudo[225550]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:22.495+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:22 compute-0 sshd-session[166065]: Connection closed by 192.168.122.30 port 47916
Nov 24 20:07:22 compute-0 sshd-session[166062]: pam_unix(sshd:session): session closed for user zuul
Nov 24 20:07:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:22.624+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:22 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 24 20:07:22 compute-0 systemd[1]: session-49.scope: Consumed 4min 6.648s CPU time.
Nov 24 20:07:22 compute-0 systemd-logind[795]: Session 49 logged out. Waiting for processes to exit.
Nov 24 20:07:22 compute-0 systemd-logind[795]: Removed session 49.
Nov 24 20:07:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:22 compute-0 ceph-mon[75677]: pgmap v711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:23.499+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:23.641+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:23 compute-0 sudo[225649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:23 compute-0 sudo[225649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:23 compute-0 sudo[225649]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:24 compute-0 sudo[225674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:07:24 compute-0 sudo[225674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:24 compute-0 sudo[225674]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:24 compute-0 sudo[225699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:24 compute-0 sudo[225699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:24 compute-0 sudo[225699]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:24 compute-0 sudo[225724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:07:24 compute-0 sudo[225724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:07:24
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.control', '.rgw.root']
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:07:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:24.463+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:24.651+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:24 compute-0 sudo[225724]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:07:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:07:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:07:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9ee9f0f7-0982-4ffd-bc77-118d496788e0 does not exist
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e600099e-6dab-4c80-a22a-420c9ff9cca2 does not exist
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 99fff673-2c68-4713-99a6-e6c68e5d9661 does not exist
Nov 24 20:07:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:07:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:07:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:07:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:24 compute-0 sudo[225780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:07:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:07:24 compute-0 ceph-mon[75677]: pgmap v712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:24 compute-0 sudo[225780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:24 compute-0 sudo[225780]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:25 compute-0 sudo[225805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:07:25 compute-0 sudo[225805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:25 compute-0 sudo[225805]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:25 compute-0 sudo[225830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:25 compute-0 sudo[225830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:25 compute-0 sudo[225830]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:25 compute-0 sudo[225855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:07:25 compute-0 sudo[225855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:25.507+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:25.631+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.634700117 +0000 UTC m=+0.061926384 container create 7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:07:25 compute-0 systemd[1]: Started libpod-conmon-7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50.scope.
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.604375523 +0000 UTC m=+0.031601850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:07:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.759285095 +0000 UTC m=+0.186511422 container init 7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.771435287 +0000 UTC m=+0.198661564 container start 7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lamport, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.775678496 +0000 UTC m=+0.202904823 container attach 7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lamport, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:07:25 compute-0 thirsty_lamport[225939]: 167 167
Nov 24 20:07:25 compute-0 systemd[1]: libpod-7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50.scope: Deactivated successfully.
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.781683465 +0000 UTC m=+0.208909732 container died 7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3893f488025c6b2520f6b6688012b05d9e331aec74802876bb23fdbfdfe5891d-merged.mount: Deactivated successfully.
Nov 24 20:07:25 compute-0 podman[225922]: 2025-11-24 20:07:25.886138877 +0000 UTC m=+0.313365154 container remove 7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:07:25 compute-0 systemd[1]: libpod-conmon-7b8e9b3c94e0f8dad8fff2ee4539131531535fd444816fdb44f17f6ba83e7d50.scope: Deactivated successfully.
Nov 24 20:07:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:26 compute-0 podman[225963]: 2025-11-24 20:07:26.102504399 +0000 UTC m=+0.052198351 container create e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:07:26 compute-0 systemd[1]: Started libpod-conmon-e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe.scope.
Nov 24 20:07:26 compute-0 podman[225963]: 2025-11-24 20:07:26.084406799 +0000 UTC m=+0.034100731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:07:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be23c9bc9ccb71dcb9cfa29e7444940646df94b4ec2d9db0f4b6b6c1f634765/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be23c9bc9ccb71dcb9cfa29e7444940646df94b4ec2d9db0f4b6b6c1f634765/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be23c9bc9ccb71dcb9cfa29e7444940646df94b4ec2d9db0f4b6b6c1f634765/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be23c9bc9ccb71dcb9cfa29e7444940646df94b4ec2d9db0f4b6b6c1f634765/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be23c9bc9ccb71dcb9cfa29e7444940646df94b4ec2d9db0f4b6b6c1f634765/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:26 compute-0 podman[225963]: 2025-11-24 20:07:26.205449828 +0000 UTC m=+0.155143780 container init e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lumiere, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:07:26 compute-0 podman[225963]: 2025-11-24 20:07:26.219982047 +0000 UTC m=+0.169675959 container start e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lumiere, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:07:26 compute-0 podman[225963]: 2025-11-24 20:07:26.224034641 +0000 UTC m=+0.173728583 container attach e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:07:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:26.537+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:26.641+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:26 compute-0 ceph-mon[75677]: pgmap v713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:27 compute-0 friendly_lumiere[225979]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:07:27 compute-0 friendly_lumiere[225979]: --> relative data size: 1.0
Nov 24 20:07:27 compute-0 friendly_lumiere[225979]: --> All data devices are unavailable
Nov 24 20:07:27 compute-0 systemd[1]: libpod-e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe.scope: Deactivated successfully.
Nov 24 20:07:27 compute-0 systemd[1]: libpod-e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe.scope: Consumed 1.100s CPU time.
Nov 24 20:07:27 compute-0 podman[225963]: 2025-11-24 20:07:27.369961997 +0000 UTC m=+1.319655949 container died e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3be23c9bc9ccb71dcb9cfa29e7444940646df94b4ec2d9db0f4b6b6c1f634765-merged.mount: Deactivated successfully.
Nov 24 20:07:27 compute-0 podman[225963]: 2025-11-24 20:07:27.439108444 +0000 UTC m=+1.388802366 container remove e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:07:27 compute-0 systemd[1]: libpod-conmon-e2ee562b62fcad21132cbd03ac8320f6e9067ac5a41b3d741cb17b04e07466fe.scope: Deactivated successfully.
Nov 24 20:07:27 compute-0 sudo[225855]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:27 compute-0 sudo[226023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:27 compute-0 sudo[226023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:27 compute-0 sudo[226023]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:27.575+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:27 compute-0 sudo[226048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:07:27 compute-0 sudo[226048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:27.620+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:27 compute-0 sudo[226048]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:27 compute-0 sudo[226073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:27 compute-0 sudo[226073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:27 compute-0 sudo[226073]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:27 compute-0 sudo[226098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:07:27 compute-0 sudo[226098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.241119637 +0000 UTC m=+0.072893644 container create 79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:07:28 compute-0 systemd[1]: Started libpod-conmon-79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8.scope.
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.210865566 +0000 UTC m=+0.042639613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:07:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:07:28 compute-0 sshd-session[226176]: Accepted publickey for zuul from 192.168.122.30 port 57602 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 20:07:28 compute-0 systemd-logind[795]: New session 50 of user zuul.
Nov 24 20:07:28 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.326253544 +0000 UTC m=+0.158027521 container init 79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nash, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:07:28 compute-0 sshd-session[226176]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.337861361 +0000 UTC m=+0.169635338 container start 79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.341414021 +0000 UTC m=+0.173187988 container attach 79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:07:28 compute-0 laughing_nash[226181]: 167 167
Nov 24 20:07:28 compute-0 systemd[1]: libpod-79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8.scope: Deactivated successfully.
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.345143706 +0000 UTC m=+0.176917693 container died 79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeb185a73007ca23813e900dffc9ec193365e785437f2dee4ee25d09eabc5985-merged.mount: Deactivated successfully.
Nov 24 20:07:28 compute-0 podman[226163]: 2025-11-24 20:07:28.38401021 +0000 UTC m=+0.215784187 container remove 79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:07:28 compute-0 systemd[1]: libpod-conmon-79281bac66854dbf49f95e06a9e1eba5828112c34c6cec643b446b91e4273ee8.scope: Deactivated successfully.
Nov 24 20:07:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:28.593+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:28 compute-0 podman[226230]: 2025-11-24 20:07:28.602125882 +0000 UTC m=+0.079091428 container create 251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:07:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:28.625+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:28 compute-0 systemd[1]: Started libpod-conmon-251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9.scope.
Nov 24 20:07:28 compute-0 podman[226230]: 2025-11-24 20:07:28.56867705 +0000 UTC m=+0.045642636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:07:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17c0066ba365e24624fce09f7271dbb6fd449f7d3282c2480b659a183025800/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17c0066ba365e24624fce09f7271dbb6fd449f7d3282c2480b659a183025800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17c0066ba365e24624fce09f7271dbb6fd449f7d3282c2480b659a183025800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17c0066ba365e24624fce09f7271dbb6fd449f7d3282c2480b659a183025800/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:28 compute-0 podman[226230]: 2025-11-24 20:07:28.727138142 +0000 UTC m=+0.204103708 container init 251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:07:28 compute-0 podman[226230]: 2025-11-24 20:07:28.739810458 +0000 UTC m=+0.216776004 container start 251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:07:28 compute-0 podman[226230]: 2025-11-24 20:07:28.744251834 +0000 UTC m=+0.221217380 container attach 251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:07:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:28 compute-0 sshd-session[226001]: Connection closed by authenticating user root 27.79.44.141 port 38190 [preauth]
Nov 24 20:07:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:28 compute-0 ceph-mon[75677]: pgmap v714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:29 compute-0 podman[226351]: 2025-11-24 20:07:29.378245555 +0000 UTC m=+0.155175600 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 20:07:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:29.562+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:29 compute-0 python3.9[226387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 20:07:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:29.637+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:29 compute-0 sad_kare[226275]: {
Nov 24 20:07:29 compute-0 sad_kare[226275]:     "0": [
Nov 24 20:07:29 compute-0 sad_kare[226275]:         {
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "devices": [
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "/dev/loop3"
Nov 24 20:07:29 compute-0 sad_kare[226275]:             ],
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_name": "ceph_lv0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_size": "21470642176",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "name": "ceph_lv0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "tags": {
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cluster_name": "ceph",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.crush_device_class": "",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.encrypted": "0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osd_id": "0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.type": "block",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.vdo": "0"
Nov 24 20:07:29 compute-0 sad_kare[226275]:             },
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "type": "block",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "vg_name": "ceph_vg0"
Nov 24 20:07:29 compute-0 sad_kare[226275]:         }
Nov 24 20:07:29 compute-0 sad_kare[226275]:     ],
Nov 24 20:07:29 compute-0 sad_kare[226275]:     "1": [
Nov 24 20:07:29 compute-0 sad_kare[226275]:         {
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "devices": [
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "/dev/loop4"
Nov 24 20:07:29 compute-0 sad_kare[226275]:             ],
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_name": "ceph_lv1",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_size": "21470642176",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "name": "ceph_lv1",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "tags": {
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cluster_name": "ceph",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.crush_device_class": "",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.encrypted": "0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osd_id": "1",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.type": "block",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.vdo": "0"
Nov 24 20:07:29 compute-0 sad_kare[226275]:             },
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "type": "block",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "vg_name": "ceph_vg1"
Nov 24 20:07:29 compute-0 sad_kare[226275]:         }
Nov 24 20:07:29 compute-0 sad_kare[226275]:     ],
Nov 24 20:07:29 compute-0 sad_kare[226275]:     "2": [
Nov 24 20:07:29 compute-0 sad_kare[226275]:         {
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "devices": [
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "/dev/loop5"
Nov 24 20:07:29 compute-0 sad_kare[226275]:             ],
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_name": "ceph_lv2",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_size": "21470642176",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "name": "ceph_lv2",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "tags": {
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.cluster_name": "ceph",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.crush_device_class": "",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.encrypted": "0",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osd_id": "2",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.type": "block",
Nov 24 20:07:29 compute-0 sad_kare[226275]:                 "ceph.vdo": "0"
Nov 24 20:07:29 compute-0 sad_kare[226275]:             },
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "type": "block",
Nov 24 20:07:29 compute-0 sad_kare[226275]:             "vg_name": "ceph_vg2"
Nov 24 20:07:29 compute-0 sad_kare[226275]:         }
Nov 24 20:07:29 compute-0 sad_kare[226275]:     ]
Nov 24 20:07:29 compute-0 sad_kare[226275]: }
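The JSON block emitted by the sad_kare container matches the output of a ceph-volume lvm list --format json probe: keys are OSD ids, and each entry describes the backing logical volume plus its ceph.* LV tags (cluster fsid, osd_fsid, encryption flag, drive-group affinity). A short sketch for condensing such a dump into an OSD-to-device table, assuming it was saved to the hypothetical file lvm_list.json:

    import json

    # ceph-volume lvm list --format json keys its output by OSD id;
    # each value is a list of LVs (block, plus optional db/wal).
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({lv['type']}, {int(lv['lv_size']) / 2**30:.1f} GiB, "
                  f"devices={','.join(lv['devices'])})")

Against the dump above this prints three 20.0 GiB block LVs on /dev/loop3, /dev/loop4 and /dev/loop5.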
Nov 24 20:07:29 compute-0 systemd[1]: libpod-251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9.scope: Deactivated successfully.
Nov 24 20:07:29 compute-0 podman[226230]: 2025-11-24 20:07:29.690552569 +0000 UTC m=+1.167518115 container died 251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 20:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17c0066ba365e24624fce09f7271dbb6fd449f7d3282c2480b659a183025800-merged.mount: Deactivated successfully.
Nov 24 20:07:29 compute-0 podman[226230]: 2025-11-24 20:07:29.775970164 +0000 UTC m=+1.252935710 container remove 251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:07:29 compute-0 systemd[1]: libpod-conmon-251e2bbb0a2458f77f358872292fc0007feff1cd7b6b9e24a58e3ba62964fbb9.scope: Deactivated successfully.
Nov 24 20:07:29 compute-0 sudo[226098]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:29 compute-0 sudo[226423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:29 compute-0 sudo[226423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:29 compute-0 sudo[226423]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:29 compute-0 sudo[226448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:07:29 compute-0 sudo[226448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:29 compute-0 sudo[226448]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:30 compute-0 sudo[226481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:30 compute-0 sudo[226481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:30 compute-0 sudo[226481]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:30 compute-0 sudo[226522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:07:30 compute-0 sudo[226522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:30.569+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:30.602+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.612570511 +0000 UTC m=+0.071164566 container create 19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:07:30 compute-0 systemd[1]: Started libpod-conmon-19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c.scope.
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.585437376 +0000 UTC m=+0.044031481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:07:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.739027411 +0000 UTC m=+0.197621536 container init 19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_almeida, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.751297216 +0000 UTC m=+0.209891241 container start 19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.755057993 +0000 UTC m=+0.213652048 container attach 19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:07:30 compute-0 eloquent_almeida[226677]: 167 167
Nov 24 20:07:30 compute-0 systemd[1]: libpod-19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c.scope: Deactivated successfully.
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.761024061 +0000 UTC m=+0.219618086 container died 19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_almeida, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a341d04993589890b920635b20298c96b50187d67bbb7f08c1aaa081f4c6f0e4-merged.mount: Deactivated successfully.
Nov 24 20:07:30 compute-0 podman[226640]: 2025-11-24 20:07:30.818815528 +0000 UTC m=+0.277409583 container remove 19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_almeida, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:07:30 compute-0 systemd[1]: libpod-conmon-19d2ae1adba8ba9410abe42542c69d079886cbdb95473ce5757b8a31cc94440c.scope: Deactivated successfully.
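The eloquent_almeida container lived for well under a second and printed only "167 167". That pair is the uid/gid of the ceph account baked into the Ceph image (uid 167 is the fixed ceph user in these images); cephadm runs throwaway containers like this to learn which owner it should use for the files it writes on the host. A hedged reconstruction of such a probe (the exact path cephadm stats is an assumption):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumption: stat a ceph-owned path inside the image to read the
    # uid/gid pair; "167 167" in the log above fits this pattern.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.split()
    uid, gid = map(int, out)
    print(uid, gid)  # expected: 167 167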
Nov 24 20:07:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:30 compute-0 ceph-mon[75677]: pgmap v715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:31 compute-0 podman[226756]: 2025-11-24 20:07:31.058095675 +0000 UTC m=+0.049115854 container create 7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:07:31 compute-0 python3.9[226750]: ansible-ansible.builtin.service_facts Invoked
Nov 24 20:07:31 compute-0 systemd[1]: Started libpod-conmon-7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140.scope.
Nov 24 20:07:31 compute-0 podman[226756]: 2025-11-24 20:07:31.032471584 +0000 UTC m=+0.023491753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:07:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cefd30f1534e8013ce2c4a1382f289519c6c802dc732732aeab0e8b58a66a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cefd30f1534e8013ce2c4a1382f289519c6c802dc732732aeab0e8b58a66a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cefd30f1534e8013ce2c4a1382f289519c6c802dc732732aeab0e8b58a66a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cefd30f1534e8013ce2c4a1382f289519c6c802dc732732aeab0e8b58a66a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
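These four kernel lines are the stock warning for XFS filesystems mounted without the bigtime feature: their inode timestamps are 32-bit signed seconds, so they top out at 0x7fffffff. Decoding that constant shows exactly where the limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit second count since the
    # epoch, i.e. the year-2038 limit the kernel message refers to.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(hex(0x7FFFFFFF), "->", limit.isoformat())
    # 0x7fffffff -> 2038-01-19T03:14:07+00:00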
Nov 24 20:07:31 compute-0 podman[226756]: 2025-11-24 20:07:31.176844819 +0000 UTC m=+0.167865038 container init 7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:07:31 compute-0 network[226793]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 20:07:31 compute-0 network[226794]: 'network-scripts' will be removed from distribution in near future.
Nov 24 20:07:31 compute-0 network[226795]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 20:07:31 compute-0 podman[226756]: 2025-11-24 20:07:31.193372914 +0000 UTC m=+0.184393093 container start 7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:07:31 compute-0 podman[226756]: 2025-11-24 20:07:31.19679364 +0000 UTC m=+0.187813789 container attach 7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:07:31 compute-0 sshd-session[225499]: error: kex_exchange_identification: read: Connection timed out
Nov 24 20:07:31 compute-0 sshd-session[225499]: banner exchange: Connection from 120.231.191.40 port 11621: Connection timed out
Nov 24 20:07:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:31.535+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:31.649+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:32 compute-0 recursing_galileo[226775]: {
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "osd_id": 2,
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "type": "bluestore"
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:     },
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "osd_id": 1,
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "type": "bluestore"
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:     },
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "osd_id": 0,
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:         "type": "bluestore"
Nov 24 20:07:32 compute-0 recursing_galileo[226775]:     }
Nov 24 20:07:32 compute-0 recursing_galileo[226775]: }
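This second JSON block is the result of the cephadm-driven ceph-volume ... raw list --format json call logged at 20:07:30; unlike the lvm listing it is keyed by OSD fsid, and every entry reports type bluestore on a /dev/mapper device. The osd_uuid values here should agree with the ceph.osd_fsid LV tags from the earlier dump, which makes for a cheap consistency check; a sketch assuming both dumps were saved to the hypothetical files lvm_list.json and raw_list.json:

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)   # keyed by OSD id
    with open("raw_list.json") as f:
        raw = json.load(f)   # keyed by OSD fsid

    # Every LVM-managed OSD should appear in the raw listing under the
    # same fsid and with the same numeric osd_id.
    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = raw.get(fsid)
        assert entry is not None, f"osd.{osd_id} missing from raw list"
        assert entry["osd_id"] == int(osd_id)
        print(f"osd.{osd_id} ok: {fsid} on {entry['device']}")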
Nov 24 20:07:32 compute-0 systemd[1]: libpod-7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140.scope: Deactivated successfully.
Nov 24 20:07:32 compute-0 podman[226756]: 2025-11-24 20:07:32.316222131 +0000 UTC m=+1.307242310 container died 7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:07:32 compute-0 systemd[1]: libpod-7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140.scope: Consumed 1.123s CPU time.
Nov 24 20:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-92cefd30f1534e8013ce2c4a1382f289519c6c802dc732732aeab0e8b58a66a6-merged.mount: Deactivated successfully.
Nov 24 20:07:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:32.493+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:32 compute-0 podman[226756]: 2025-11-24 20:07:32.556225139 +0000 UTC m=+1.547245278 container remove 7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:07:32 compute-0 systemd[1]: libpod-conmon-7c4789e108ada6bf678cf09906919d3764ce0645d0f62239b1d1194dd7666140.scope: Deactivated successfully.
Nov 24 20:07:32 compute-0 sudo[226522]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:07:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:07:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:07:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:07:32 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2cb154d9-c2b2-4630-a380-c4478590621d does not exist
Nov 24 20:07:32 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 59e6f851-ce7b-4071-9de1-d2df64eabf64 does not exist
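With both probes finished, the cephadm mgr module persists the refreshed inventory through the two config-key set commands the monitor just handled (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). Those keys can be read back for inspection; a sketch shelling out to the ceph CLI, assuming the stored values are JSON blobs, as cephadm's inventory entries are:

    import json, subprocess

    def config_key_get(key: str):
        # `ceph config-key get` prints the stored value on stdout.
        out = subprocess.run(["ceph", "config-key", "get", key],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    devices = config_key_get("mgr/cephadm/host.compute-0.devices.0")
    print(len(json.dumps(devices)), "bytes of device inventory for compute-0")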
Nov 24 20:07:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:32.674+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:32 compute-0 sudo[226861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:07:32 compute-0 sudo[226861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:32 compute-0 sudo[226861]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:32 compute-0 sudo[226889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:07:32 compute-0 sudo[226889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:07:32 compute-0 sudo[226889]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:07:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:07:32 compute-0 ceph-mon[75677]: pgmap v716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:33.522+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:33.638+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
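Every pg_autoscaler pair above follows the same arithmetic: the pool's share of raw space (the "using ... of space" figure against the 64411926528-byte effective capacity) times its bias, times an overall PG budget, then rounded to a power of two and clamped at the pool's floor (1 for .mgr, 16 for the biased cephfs metadata pool, 32 elsewhere). The logged numbers imply a budget of 300 PGs, consistent with the default mon_target_pg_per_osd of 100 across this host's 3 OSDs; a sketch reproducing the printed targets under that assumption:

    # Reproduce the pg_autoscaler targets from the log lines above.
    # Assumption: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    BUDGET = 300

    pools = [
        # (pool, usage ratio, bias) exactly as logged
        (".mgr",               7.185749983720779e-06,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
        (".rgw.root",          2.5436283128215145e-07, 1.0),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),
    ]

    for name, ratio, bias in pools:
        print(f"{name}: pg target {ratio * bias * BUDGET}")
    # .mgr: pg target 0.0021557249951162337, matching the log line.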
Nov 24 20:07:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:34.478+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:34.673+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:35 compute-0 ceph-mon[75677]: pgmap v717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:35.453+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:35.658+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:35 compute-0 sudo[227158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smpwduvrurkqqufcpepxjwldwrwebent ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014855.5573463-47-272465125622606/AnsiballZ_setup.py'
Nov 24 20:07:35 compute-0 sudo[227158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:36 compute-0 python3.9[227160]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 20:07:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:36.453+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:36 compute-0 sudo[227158]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:36.650+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 971 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
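This SLOW_OPS health update is simply the sum of the per-daemon reports repeating through this whole window: osd.1's 19 slow ops plus osd.0's 1 give the 20 in the summary, and "blocked for 971 sec" places the oldest op at roughly 19:51. Tallying the per-OSD counts from a saved journal is easy to script; a sketch reading the log from stdin:

    import re, sys

    # Keep the most recent "reporting N slow ops" count per OSD.
    PAT = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")

    latest = {}
    for line in sys.stdin:
        if m := PAT.search(line):
            latest[m.group(1)] = int(m.group(2))

    print(latest, "total:", sum(latest.values()))
    # {'osd.1': 19, 'osd.0': 1} total: 20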
Nov 24 20:07:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:37 compute-0 sudo[227242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iafbgbtxrregqkdbroocymzcetjxxcpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014855.5573463-47-272465125622606/AnsiballZ_dnf.py'
Nov 24 20:07:37 compute-0 sudo[227242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 971 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:37 compute-0 ceph-mon[75677]: pgmap v718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:37 compute-0 python3.9[227244]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
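The parameter dump above is ansible's dnf module installing iscsi-initiator-utils with every option at its default (state=present, no repo toggles, GPG checking on). Reproduced by hand, the whole invocation collapses to one package transaction; a sketch, assuming plain dnf on the host:

    import subprocess

    # Equivalent of the logged ansible-ansible.legacy.dnf step:
    # name=['iscsi-initiator-utils'], state=present, defaults elsewhere.
    subprocess.run(["dnf", "install", "-y", "iscsi-initiator-utils"], check=True)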
Nov 24 20:07:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:37.434+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:37.669+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:38.394+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:38.707+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:39 compute-0 ceph-mon[75677]: pgmap v719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:39.355+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:39.680+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:40.353+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:07:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:40.637+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:41 compute-0 ceph-mon[75677]: pgmap v720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:41.334+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:41.682+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:42.321+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:42 compute-0 sudo[227242]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:42.639+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:43 compute-0 ceph-mon[75677]: pgmap v721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:43 compute-0 podman[227369]: 2025-11-24 20:07:43.175772177 +0000 UTC m=+0.076796624 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 20:07:43 compute-0 sudo[227412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvxzitjzhdfjyusoxeeaqcvdretuyzgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014862.6477988-59-233179565174964/AnsiballZ_stat.py'
Nov 24 20:07:43 compute-0 sudo[227412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:43.299+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:43 compute-0 python3.9[227416]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:07:43 compute-0 sudo[227412]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:43.623+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:44 compute-0 sudo[227567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdmmaowvhztubkufgwyepqtxbdzpwedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014863.6437695-69-168953132684556/AnsiballZ_command.py'
Nov 24 20:07:44 compute-0 sudo[227567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:44.339+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:44 compute-0 python3.9[227569]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:07:44 compute-0 sudo[227567]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:44.596+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:45 compute-0 sudo[227720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwqphiyezhzkhmwgfulruklifetzsgct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014864.7378194-79-23242905134923/AnsiballZ_stat.py'
Nov 24 20:07:45 compute-0 sudo[227720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:45 compute-0 ceph-mon[75677]: pgmap v722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:45.290+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:45 compute-0 python3.9[227722]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:07:45 compute-0 sudo[227720]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:45.554+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:45 compute-0 sudo[227872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rarbihbvvcuwiagdhwqygdimjpvqtumr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014865.534163-87-139657773651580/AnsiballZ_command.py'
Nov 24 20:07:45 compute-0 sudo[227872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:46 compute-0 python3.9[227874]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:07:46 compute-0 sudo[227872]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:46.246+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:46.594+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:46 compute-0 sudo[228025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnagpqdkfskqqlrxhotvexuvujqwxahp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014866.26826-95-135297277664324/AnsiballZ_stat.py'
Nov 24 20:07:46 compute-0 sudo[228025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:46 compute-0 python3.9[228027]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:07:46 compute-0 sudo[228025]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:47 compute-0 ceph-mon[75677]: pgmap v723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:47.219+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:47 compute-0 sudo[228148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-votqfbkbfduqmojuphhnvcluvsknqkrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014866.26826-95-135297277664324/AnsiballZ_copy.py'
Nov 24 20:07:47 compute-0 sudo[228148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:47 compute-0 python3.9[228150]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014866.26826-95-135297277664324/.source.iscsi _original_basename=.ugfmmq4c follow=False checksum=51474a504b65225b2448ac1e4d6d12f9c37efb91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:47 compute-0 sudo[228148]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:47.632+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:48.185+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:48 compute-0 sudo[228300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqsslnuspmylbnxbxzwaxuqbisgdjrws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014867.8414986-110-39418860964796/AnsiballZ_file.py'
Nov 24 20:07:48 compute-0 sudo[228300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:48 compute-0 python3.9[228302]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:48 compute-0 sudo[228300]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:48.645+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:49.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:49 compute-0 ceph-mon[75677]: pgmap v724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:49 compute-0 sudo[228452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sghnofnwlusdpeclugvxuskpatomuthb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014868.783156-118-175913057836328/AnsiballZ_lineinfile.py'
Nov 24 20:07:49 compute-0 sudo[228452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:49 compute-0 python3.9[228454]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:07:49 compute-0 sudo[228452]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:49.666+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:50.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:50 compute-0 sudo[228604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfbpoaauqjqezcakhpegyxzzhithxmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014869.809792-127-48435395880440/AnsiballZ_systemd_service.py'
Nov 24 20:07:50 compute-0 sudo[228604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:50.653+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:50 compute-0 python3.9[228606]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:07:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:50 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 20:07:50 compute-0 sudo[228604]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:51.187+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:51 compute-0 ceph-mon[75677]: pgmap v725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:51 compute-0 sudo[228760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcuuzfcrnwwvfqwijqupvjzxnupsnwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014871.1614945-135-206485987189991/AnsiballZ_systemd_service.py'
Nov 24 20:07:51 compute-0 sudo[228760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:51.690+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:51 compute-0 python3.9[228762]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:07:51 compute-0 systemd[1]: Reloading.
Nov 24 20:07:51 compute-0 systemd-rc-local-generator[228788]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:07:52 compute-0 systemd-sysv-generator[228793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:07:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:52.182+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:52 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 20:07:52 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 20:07:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:52 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 20:07:52 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 20:07:52 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 20:07:52 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 20:07:52 compute-0 sudo[228760]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:52.738+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:53 compute-0 sudo[228959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beqrbvooazjexihojxstixhuowdltchr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014872.7277884-146-244873015097512/AnsiballZ_service_facts.py'
Nov 24 20:07:53 compute-0 sudo[228959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:53.178+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:53 compute-0 ceph-mon[75677]: pgmap v726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:53 compute-0 python3.9[228961]: ansible-ansible.builtin.service_facts Invoked
Nov 24 20:07:53 compute-0 network[228978]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 20:07:53 compute-0 network[228979]: 'network-scripts' will be removed from distribution in near future.
Nov 24 20:07:53 compute-0 network[228980]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 20:07:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:53.694+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:54.197+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:07:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:54.664+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:55.177+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:55.696+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:56.164+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:56 compute-0 ceph-mon[75677]: pgmap v727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 992 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:07:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:56.729+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:57.186+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:57 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 992 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:07:57 compute-0 sudo[228959]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:57.732+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:58.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:58 compute-0 sudo[229250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnbrgngerhrtdlovxkoybmqmzzcinkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014877.9244578-156-135989651262881/AnsiballZ_file.py'
Nov 24 20:07:58 compute-0 sudo[229250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:58 compute-0 ceph-mon[75677]: pgmap v728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:58 compute-0 python3.9[229252]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 20:07:58 compute-0 sudo[229250]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:58.706+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:07:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:07:59.166+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:07:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:59 compute-0 sudo[229402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwillykaeorwxwxxrvuyoqgnghvmoinw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014878.7018552-164-180675779261895/AnsiballZ_modprobe.py'
Nov 24 20:07:59 compute-0 sudo[229402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:07:59 compute-0 python3.9[229404]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 20:07:59 compute-0 sudo[229402]: pam_unix(sudo:session): session closed for user root
Nov 24 20:07:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:07:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:07:59 compute-0 podman[229408]: 2025-11-24 20:07:59.564367506 +0000 UTC m=+0.111563262 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 20:07:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:07:59.736+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:07:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:00 compute-0 sudo[229581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bumqylruacslapbuzgstipuepokofkkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014879.624463-172-239928471553793/AnsiballZ_stat.py'
Nov 24 20:08:00 compute-0 sudo[229581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:00.196+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:00 compute-0 python3.9[229583]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:00 compute-0 sudo[229581]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:00 compute-0 ceph-mon[75677]: pgmap v729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:00 compute-0 sudo[229704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xctdzlyxnunprltrjoseawsurmnhgavd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014879.624463-172-239928471553793/AnsiballZ_copy.py'
Nov 24 20:08:00 compute-0 sudo[229704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:00.736+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:00 compute-0 python3.9[229706]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014879.624463-172-239928471553793/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:00 compute-0 sudo[229704]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:01.152+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:01 compute-0 sudo[229856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boadyqywcltmbqnqgujkhddvthpnqzcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014881.1408265-188-153125930862952/AnsiballZ_lineinfile.py'
Nov 24 20:08:01 compute-0 sudo[229856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1002 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:01 compute-0 python3.9[229858]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:01.745+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:01 compute-0 sudo[229856]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:02.169+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:02 compute-0 ceph-mon[75677]: pgmap v730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:02 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1002 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:02 compute-0 sudo[230008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgueeymogblllgbyyzzlmmkpajbbkrps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014881.9937923-196-138730001598888/AnsiballZ_systemd.py'
Nov 24 20:08:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:02.730+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:02 compute-0 sudo[230008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:03 compute-0 python3.9[230010]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:08:03 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 20:08:03 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 20:08:03 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 20:08:03 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 20:08:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:03.130+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:03 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 20:08:03 compute-0 sudo[230008]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:03.680+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:03 compute-0 sudo[230165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qubmzpnzxryhfpupvztqmakyqjgbpump ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014883.4141078-204-244578747100457/AnsiballZ_file.py'
Nov 24 20:08:03 compute-0 sudo[230165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:03 compute-0 python3.9[230167]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:04 compute-0 sudo[230165]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:04.104+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:04 compute-0 ceph-mon[75677]: pgmap v731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:04.715+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:04 compute-0 sudo[230317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecvkrabvunnjmvwewtkijnxhkcpzygdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014884.3607788-213-113649120073678/AnsiballZ_stat.py'
Nov 24 20:08:04 compute-0 sudo[230317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:04 compute-0 python3.9[230319]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:08:04 compute-0 sudo[230317]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:05.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:05 compute-0 sudo[230469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfkgctplsujhkazngtcxrvgfabbhevtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014885.2423718-222-81258045127660/AnsiballZ_stat.py'
Nov 24 20:08:05 compute-0 sudo[230469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:05.707+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:05 compute-0 python3.9[230471]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:08:05 compute-0 sudo[230469]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:06.175+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:06 compute-0 sudo[230621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqrqdjjeddvbbfrfubtjgwoxqevcilya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014886.057659-230-113991142606990/AnsiballZ_stat.py'
Nov 24 20:08:06 compute-0 sudo[230621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:06 compute-0 ceph-mon[75677]: pgmap v732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:06 compute-0 python3.9[230623]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:06 compute-0 sudo[230621]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:06.663+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:07 compute-0 sudo[230744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzklzaxerhejxcinetbdmvzusxglylx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014886.057659-230-113991142606990/AnsiballZ_copy.py'
Nov 24 20:08:07 compute-0 sudo[230744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:07.190+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:07 compute-0 python3.9[230746]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014886.057659-230-113991142606990/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:07 compute-0 sudo[230744]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1007 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:07.615+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:08 compute-0 sudo[230896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtuxwjlohlioamvgzosrtbaagmpypge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014887.6181347-245-45532410243082/AnsiballZ_command.py'
Nov 24 20:08:08 compute-0 sudo[230896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:08.225+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:08 compute-0 python3.9[230898]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:08:08 compute-0 sudo[230896]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:08 compute-0 ceph-mon[75677]: pgmap v733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:08 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1007 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:08.589+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:08 compute-0 sudo[231049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqibxhtcttczbzhwjgztaogfldbduxgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014888.5170653-253-194869267988909/AnsiballZ_lineinfile.py'
Nov 24 20:08:08 compute-0 sudo[231049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:09 compute-0 python3.9[231051]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:09 compute-0 sudo[231049]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:09.199+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:08:09.359 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:08:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:08:09.360 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:08:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:08:09.360 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:08:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:09.584+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:09 compute-0 sudo[231201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-infynnkunjxypoadeiqekahkqkovcfny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014889.3833764-261-221592598272126/AnsiballZ_replace.py'
Nov 24 20:08:09 compute-0 sudo[231201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:10.191+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:10 compute-0 python3.9[231203]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:10 compute-0 sudo[231201]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:10.564+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:10 compute-0 ceph-mon[75677]: pgmap v734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:10 compute-0 sudo[231353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxehhkrdrzsajynrumnyhcrbtkqnougy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014890.4365673-269-180154120466735/AnsiballZ_replace.py'
Nov 24 20:08:10 compute-0 sudo[231353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:11 compute-0 python3.9[231355]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:11 compute-0 sudo[231353]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:11.212+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:11.569+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:11 compute-0 sudo[231505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuylyyspaxqylutbfeqxjvcacfeubuji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014891.3287292-278-26836369239334/AnsiballZ_lineinfile.py'
Nov 24 20:08:11 compute-0 sudo[231505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:11 compute-0 python3.9[231507]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:11 compute-0 sudo[231505]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:12.182+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:12 compute-0 sudo[231657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrcyffglucbxebisjdttcoooxkwdeuos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014892.0886571-278-38431479684559/AnsiballZ_lineinfile.py'
Nov 24 20:08:12 compute-0 sudo[231657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:12.527+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:12 compute-0 ceph-mon[75677]: pgmap v735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:12 compute-0 python3.9[231659]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:12 compute-0 sudo[231657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:13.203+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:13 compute-0 sudo[231822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmmulzqrfcfpeokuxrjfzbtpummddzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014892.9005735-278-71415683114920/AnsiballZ_lineinfile.py'
Nov 24 20:08:13 compute-0 sudo[231822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:13 compute-0 podman[231783]: 2025-11-24 20:08:13.344704633 +0000 UTC m=+0.101054596 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:08:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:13.511+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:13 compute-0 python3.9[231830]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:13 compute-0 sudo[231822]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:14 compute-0 sudo[231981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axepsyosigoqodxfoskyezinghrsbqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014893.716615-278-26651404369704/AnsiballZ_lineinfile.py'
Nov 24 20:08:14 compute-0 sudo[231981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:14.157+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:14 compute-0 python3.9[231983]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:14 compute-0 sudo[231981]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:14.475+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:14 compute-0 ceph-mon[75677]: pgmap v736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:14 compute-0 sudo[232133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwpgqodsojefugewgeylwlzthsjpjnfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014894.5309129-307-61803910456213/AnsiballZ_stat.py'
Nov 24 20:08:14 compute-0 sudo[232133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:15.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:15 compute-0 python3.9[232135]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:08:15 compute-0 sudo[232133]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:15.433+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:15 compute-0 sudo[232287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wirkufvupjatofkgjmdvgmuboywyucmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014895.4286716-315-263721695983317/AnsiballZ_file.py'
Nov 24 20:08:15 compute-0 sudo[232287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:16 compute-0 python3.9[232289]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:16 compute-0 sudo[232287]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:16.186+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:16.391+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:16 compute-0 ceph-mon[75677]: pgmap v737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1012 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:16 compute-0 sudo[232439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuxvgdbuhetpvlqtdneucghopzxfslla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014896.3510675-324-246020007023830/AnsiballZ_file.py'
Nov 24 20:08:16 compute-0 sudo[232439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:16 compute-0 python3.9[232441]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:16 compute-0 sudo[232439]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:17.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:17.357+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1012 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:17 compute-0 sudo[232591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvoxiiubysnlttsatgmjnvfqhjmgkqxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014897.2337577-332-66234827106402/AnsiballZ_stat.py'
Nov 24 20:08:17 compute-0 sudo[232591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:17 compute-0 python3.9[232593]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:17 compute-0 sudo[232591]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:18.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:18 compute-0 sudo[232669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiifmxiguyfpxltidiqwrskarynhmnbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014897.2337577-332-66234827106402/AnsiballZ_file.py'
Nov 24 20:08:18 compute-0 sudo[232669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:18.355+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:18 compute-0 python3.9[232671]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:18 compute-0 sudo[232669]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:18 compute-0 ceph-mon[75677]: pgmap v738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:18 compute-0 sudo[232821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeoortxqwstkuubdfotytlqhroioykml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014898.5430064-332-261695072678532/AnsiballZ_stat.py'
Nov 24 20:08:18 compute-0 sudo[232821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:19 compute-0 python3.9[232823]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:19.115+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:19 compute-0 sudo[232821]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:19.362+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:20.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:20.372+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:21.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:21 compute-0 sudo[232900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdfaazxnmkxqgcvxaefiyhomvaarmope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014898.5430064-332-261695072678532/AnsiballZ_file.py'
Nov 24 20:08:21 compute-0 sudo[232900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:21.349+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:21 compute-0 python3.9[232902]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:21 compute-0 sudo[232900]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1017 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:22 compute-0 sudo[233052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuxqanwhpzizgvgkguggdityfztxuoen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014901.6942601-355-48788812246730/AnsiballZ_file.py'
Nov 24 20:08:22 compute-0 sudo[233052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:22 compute-0 ceph-mon[75677]: pgmap v739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:22 compute-0 ceph-mon[75677]: pgmap v740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1017 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:22.102+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:22 compute-0 python3.9[233054]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:22 compute-0 sudo[233052]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:22.373+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:22 compute-0 sudo[233204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwokcyqyvammektyrmlrxnmyugyrnxjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014902.521131-363-109442084907669/AnsiballZ_stat.py'
Nov 24 20:08:22 compute-0 sudo[233204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:23.070+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:23 compute-0 python3.9[233206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:23 compute-0 sudo[233204]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:08:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5578 writes, 23K keys, 5578 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5578 writes, 847 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 28 writes, 42 keys, 28 commit groups, 1.0 writes per commit group, ingest: 0.02 MB, 0.00 MB/s
                                           Interval WAL: 28 writes, 14 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa41090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560fcfa411f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 20:08:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:23.391+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:23 compute-0 sudo[233282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyhhzkcpiekrygnqflfqrjfcrhxxsxok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014902.521131-363-109442084907669/AnsiballZ_file.py'
Nov 24 20:08:23 compute-0 sudo[233282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:23 compute-0 python3.9[233284]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:23 compute-0 sudo[233282]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:24.075+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:24 compute-0 ceph-mon[75677]: pgmap v741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:24 compute-0 sudo[233434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvoynxdjznuujjsebrxjhtvnqxfrkvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014903.9464204-375-277994681056/AnsiballZ_stat.py'
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:08:24
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:08:24 compute-0 sudo[233434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:24.368+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:08:24 compute-0 python3.9[233436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:24 compute-0 sudo[233434]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:24 compute-0 sudo[233512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysrpvcaumlvdtfacpeewyskghaiqufsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014903.9464204-375-277994681056/AnsiballZ_file.py'
Nov 24 20:08:24 compute-0 sudo[233512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:25.056+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:25 compute-0 python3.9[233514]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:25 compute-0 sudo[233512]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:25.361+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:25 compute-0 sudo[233664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brwvmjqkjgtvowiduefdaorirygqhlyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014905.4787085-387-141784751698705/AnsiballZ_systemd.py'
Nov 24 20:08:25 compute-0 sudo[233664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:26.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:26 compute-0 python3.9[233666]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:08:26 compute-0 systemd[1]: Reloading.
Nov 24 20:08:26 compute-0 systemd-rc-local-generator[233694]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:08:26 compute-0 systemd-sysv-generator[233698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:08:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:26 compute-0 ceph-mon[75677]: pgmap v742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:26.342+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:26 compute-0 sudo[233664]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:27.053+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:27 compute-0 sudo[233852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idytktcohfmbegtmiealjfdyyomluwuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014906.7377357-395-153436982670809/AnsiballZ_stat.py'
Nov 24 20:08:27 compute-0 sudo[233852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:27 compute-0 python3.9[233854]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:27.316+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:27 compute-0 sudo[233852]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:27 compute-0 sudo[233930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixzwukpydqahnpsfhidwjbchboichkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014906.7377357-395-153436982670809/AnsiballZ_file.py'
Nov 24 20:08:27 compute-0 sudo[233930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:27 compute-0 python3.9[233932]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:27 compute-0 sudo[233930]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:28.027+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:28 compute-0 ceph-mon[75677]: pgmap v743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:28.304+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:28 compute-0 sudo[234082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqogegclhczolrbdribouiuobgktqswy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014908.059161-407-187132898333169/AnsiballZ_stat.py'
Nov 24 20:08:28 compute-0 sudo[234082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:28 compute-0 python3.9[234084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:28 compute-0 sudo[234082]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:08:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6656 writes, 27K keys, 6656 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6656 writes, 1203 syncs, 5.53 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 20 writes, 35 keys, 20 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s
                                           Interval WAL: 20 writes, 10 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ba3936d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 20:08:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:28.986+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:29 compute-0 sudo[234160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buixpbiwnrrnswusxiccudealrqbzotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014908.059161-407-187132898333169/AnsiballZ_file.py'
Nov 24 20:08:29 compute-0 sudo[234160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:29 compute-0 python3.9[234162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:29 compute-0 sudo[234160]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:29.318+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:29 compute-0 sudo[234329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnkhtrxvodnmpawntkotvittvkvpgafd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014909.5138385-419-228498009586115/AnsiballZ_systemd.py'
Nov 24 20:08:29 compute-0 sudo[234329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:29 compute-0 podman[234270]: 2025-11-24 20:08:29.915258236 +0000 UTC m=+0.130766075 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:08:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:30.022+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:30 compute-0 python3.9[234335]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:08:30 compute-0 systemd[1]: Reloading.
Nov 24 20:08:30 compute-0 systemd-rc-local-generator[234365]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:08:30 compute-0 systemd-sysv-generator[234369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:08:30 compute-0 ceph-mon[75677]: pgmap v744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:30.349+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:30 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 20:08:30 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 20:08:30 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 20:08:30 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 20:08:30 compute-0 sudo[234329]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:30.974+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:31.342+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:31 compute-0 sudo[234530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwuvopltrfgkqtaqdmfwupkspqhvyrer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014911.0964878-429-118451006089441/AnsiballZ_file.py'
Nov 24 20:08:31 compute-0 sudo[234530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:31 compute-0 python3.9[234532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1032 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:31 compute-0 sudo[234530]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:31.979+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:32 compute-0 sudo[234682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sutiovmvrlbdwnjsxxmvypkrnqyvdjgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014911.9251902-437-59363680211217/AnsiballZ_stat.py'
Nov 24 20:08:32 compute-0 sudo[234682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:32.308+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:32 compute-0 ceph-mon[75677]: pgmap v745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1032 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:32 compute-0 python3.9[234684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:32 compute-0 sudo[234682]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:32 compute-0 sudo[234775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:32 compute-0 sudo[234775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:32 compute-0 sudo[234775]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:32 compute-0 sudo[234836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bghmnzxoexweobvszgijrpydnyxqkmmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014911.9251902-437-59363680211217/AnsiballZ_copy.py'
Nov 24 20:08:33 compute-0 sudo[234836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:33.006+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:33 compute-0 sudo[234826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:08:33 compute-0 sudo[234826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:33 compute-0 sudo[234826]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 sudo[234858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:33 compute-0 sudo[234858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:33 compute-0 sudo[234858]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 sudo[234883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 20:08:33 compute-0 sudo[234883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:33 compute-0 python3.9[234855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764014911.9251902-437-59363680211217/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:33 compute-0 sudo[234836]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:33.319+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:33 compute-0 sudo[234883]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:08:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:08:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:33 compute-0 sudo[234952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:33 compute-0 sudo[234952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:33 compute-0 sudo[234952]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 sudo[234994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:08:33 compute-0 sudo[234994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:33 compute-0 sudo[234994]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 sudo[235047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:33 compute-0 sudo[235047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:33 compute-0 sudo[235047]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:33 compute-0 sudo[235083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:08:33 compute-0 sudo[235083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:34.025+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:34 compute-0 sudo[235188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkasgkkaelfvefvstnbahgiaodbxjvwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014913.7122376-454-187150699885951/AnsiballZ_file.py'
Nov 24 20:08:34 compute-0 sudo[235188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:34 compute-0 python3.9[235193]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:08:34 compute-0 sudo[235188]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:34.285+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:34 compute-0 sudo[235083]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:08:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:08:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5389 writes, 23K keys, 5389 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5389 writes, 764 syncs, 7.05 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 36 writes, 54 keys, 36 commit groups, 1.0 writes per commit group, ingest: 0.02 MB, 0.00 MB/s
                                           Interval WAL: 36 writes, 18 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557d316cd1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 20:08:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:08:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:08:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:08:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7fc8bcab-669a-4cab-9bd2-23af6fe24345 does not exist
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 001cbd02-8be5-4da3-b441-e03c807208cb does not exist
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b93d74c3-b4e5-4962-8c9b-e127a87bdbcc does not exist
Nov 24 20:08:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:08:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:08:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:08:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: pgmap v746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:08:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:08:34 compute-0 sudo[235259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:34 compute-0 sudo[235259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:34 compute-0 sudo[235259]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:34 compute-0 sudo[235309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:08:34 compute-0 sudo[235309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:34 compute-0 sudo[235309]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:34 compute-0 sudo[235358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:34 compute-0 sudo[235358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:34 compute-0 sudo[235358]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:34 compute-0 sudo[235394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:08:34 compute-0 sudo[235394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:34 compute-0 sudo[235461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boowoxcdfbvhztqkhabufhhnjcupzvax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014914.5384362-462-59930226295042/AnsiballZ_stat.py'
Nov 24 20:08:34 compute-0 sudo[235461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:35.037+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:35 compute-0 python3.9[235463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:35 compute-0 sudo[235461]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:35.245+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.346314746 +0000 UTC m=+0.063249577 container create f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.311089597 +0000 UTC m=+0.028024498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:08:35 compute-0 systemd[1]: Started libpod-conmon-f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4.scope.
Nov 24 20:08:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.539682188 +0000 UTC m=+0.256616999 container init f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dhawan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.549066957 +0000 UTC m=+0.266001788 container start f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:08:35 compute-0 bold_dhawan[235590]: 167 167
Nov 24 20:08:35 compute-0 systemd[1]: libpod-f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4.scope: Deactivated successfully.
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.591031205 +0000 UTC m=+0.307966106 container attach f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.592422602 +0000 UTC m=+0.309357473 container died f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:08:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:35 compute-0 sudo[235657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrjefccqtxyusdsnvgmlgxwrpadhotfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014914.5384362-462-59930226295042/AnsiballZ_copy.py'
Nov 24 20:08:35 compute-0 sudo[235657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-920c62f894f93980e1e0cc3d2bfea399bda1b6bf7f81c930fc16e52a6333f36b-merged.mount: Deactivated successfully.
Nov 24 20:08:35 compute-0 podman[235523]: 2025-11-24 20:08:35.8441954 +0000 UTC m=+0.561130231 container remove f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dhawan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:08:35 compute-0 systemd[1]: libpod-conmon-f46517ffe65d8980760bb4b6602d3b1389ab894b3dca97d496f0f10abdbca5f4.scope: Deactivated successfully.
Nov 24 20:08:35 compute-0 python3.9[235659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014914.5384362-462-59930226295042/.source.json _original_basename=.96yms013 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:35 compute-0 sudo[235657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:36.033+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:36 compute-0 podman[235682]: 2025-11-24 20:08:36.08254691 +0000 UTC m=+0.062840075 container create 9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:08:36 compute-0 podman[235682]: 2025-11-24 20:08:36.040057368 +0000 UTC m=+0.020350513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:08:36 compute-0 systemd[1]: Started libpod-conmon-9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf.scope.
Nov 24 20:08:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2902b9ef4418681b4b5bba1b4436966cec0e8f884d97b1c72c8c5b9c2701054/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2902b9ef4418681b4b5bba1b4436966cec0e8f884d97b1c72c8c5b9c2701054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2902b9ef4418681b4b5bba1b4436966cec0e8f884d97b1c72c8c5b9c2701054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2902b9ef4418681b4b5bba1b4436966cec0e8f884d97b1c72c8c5b9c2701054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2902b9ef4418681b4b5bba1b4436966cec0e8f884d97b1c72c8c5b9c2701054/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:36 compute-0 podman[235682]: 2025-11-24 20:08:36.229558278 +0000 UTC m=+0.209851483 container init 9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:08:36 compute-0 podman[235682]: 2025-11-24 20:08:36.244634639 +0000 UTC m=+0.224927794 container start 9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:08:36 compute-0 podman[235682]: 2025-11-24 20:08:36.260867232 +0000 UTC m=+0.241160387 container attach 9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:08:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:36.267+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:36 compute-0 sudo[235840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oojupanogkwlzpuzishdtazvfqqnpewl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014916.176159-477-42977528658580/AnsiballZ_file.py'
Nov 24 20:08:36 compute-0 sudo[235840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:36 compute-0 ceph-mon[75677]: pgmap v747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:36 compute-0 python3.9[235842]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:36 compute-0 sudo[235840]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:37.041+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:37.267+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:37 compute-0 sudo[236012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxlzrptflpgvetjvvrciwzpcblgkjzgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014917.0497794-485-14949747702097/AnsiballZ_stat.py'
Nov 24 20:08:37 compute-0 sudo[236012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:37 compute-0 keen_pare[235724]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:08:37 compute-0 keen_pare[235724]: --> relative data size: 1.0
Nov 24 20:08:37 compute-0 keen_pare[235724]: --> All data devices are unavailable
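
"All data devices are unavailable" here is expected rather than a failure: the three LVs are already tagged as prepared OSDs (see the ceph-volume lvm list output further down), so a batch run has nothing new to create. A sketch of that availability test, assuming it keys off the existing ceph.* LV tags:

    # Sketch: an LV that already carries ceph OSD tags has been prepared
    # as an OSD, so a ceph-volume batch run should treat it as unavailable.
    def is_available_for_batch(lv_tags: dict) -> bool:
        return "ceph.osd_id" not in lv_tags

    print(is_available_for_batch({"ceph.osd_id": "0", "ceph.type": "block"}))  # False
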
Nov 24 20:08:37 compute-0 systemd[1]: libpod-9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf.scope: Deactivated successfully.
Nov 24 20:08:37 compute-0 podman[235682]: 2025-11-24 20:08:37.508205184 +0000 UTC m=+1.488498349 container died 9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:08:37 compute-0 systemd[1]: libpod-9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf.scope: Consumed 1.186s CPU time.
Nov 24 20:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2902b9ef4418681b4b5bba1b4436966cec0e8f884d97b1c72c8c5b9c2701054-merged.mount: Deactivated successfully.
Nov 24 20:08:37 compute-0 podman[235682]: 2025-11-24 20:08:37.582829953 +0000 UTC m=+1.563123068 container remove 9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:08:37 compute-0 systemd[1]: libpod-conmon-9aec797ab69dbfc93fa702670fc8130800738681f384ed1a7863857e03c25cbf.scope: Deactivated successfully.
Nov 24 20:08:37 compute-0 sudo[236012]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:37 compute-0 sudo[235394]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:37 compute-0 sudo[236029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1037 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
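
The monitor has now folded both OSDs' stuck requests into a single SLOW_OPS health check (20 ops, oldest blocked 1037 s). The same summary can be pulled out of band; a minimal sketch, assuming the ceph CLI and an admin keyring are available on the node:

    import json
    import subprocess

    # Query cluster health as JSON and surface the SLOW_OPS check, which
    # mirrors the "Health check update" lines in this log.
    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    slow = json.loads(out).get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])
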
Nov 24 20:08:37 compute-0 sudo[236029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:37 compute-0 sudo[236029]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:37 compute-0 sudo[236077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:08:37 compute-0 sudo[236077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:37 compute-0 sudo[236077]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:37 compute-0 sudo[236127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:37 compute-0 sudo[236127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:37 compute-0 sudo[236127]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:37 compute-0 sudo[236180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:08:37 compute-0 sudo[236180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:38 compute-0 sudo[236249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gphorgnpehozwjqiuoldviblloiavjsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014917.0497794-485-14949747702097/AnsiballZ_copy.py'
Nov 24 20:08:38 compute-0 sudo[236249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:38.087+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:38.224+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:38 compute-0 sudo[236249]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.401750541 +0000 UTC m=+0.057955115 container create 908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:08:38 compute-0 systemd[1]: Started libpod-conmon-908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8.scope.
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.374132185 +0000 UTC m=+0.030336789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:08:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.516427466 +0000 UTC m=+0.172632100 container init 908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.533542112 +0000 UTC m=+0.189746696 container start 908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.537417786 +0000 UTC m=+0.193622370 container attach 908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:08:38 compute-0 gracious_murdock[236333]: 167 167
Nov 24 20:08:38 compute-0 systemd[1]: libpod-908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8.scope: Deactivated successfully.
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.545740348 +0000 UTC m=+0.201944892 container died 908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fc2e19d752e1018301f65d5818c44dbe1023dcfc0713a85599460687bb9f9ed-merged.mount: Deactivated successfully.
Nov 24 20:08:38 compute-0 podman[236316]: 2025-11-24 20:08:38.591315552 +0000 UTC m=+0.247520136 container remove 908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:08:38 compute-0 systemd[1]: libpod-conmon-908b825a414fb200fc00eaf8ed3d3b1cee64b1ed7c9d59aaebeeb8b9771108b8.scope: Deactivated successfully.
Nov 24 20:08:38 compute-0 ceph-mon[75677]: pgmap v748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1037 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:38 compute-0 podman[236409]: 2025-11-24 20:08:38.86598806 +0000 UTC m=+0.068441894 container create 0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:08:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:38 compute-0 systemd[1]: Started libpod-conmon-0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4.scope.
Nov 24 20:08:38 compute-0 podman[236409]: 2025-11-24 20:08:38.838794975 +0000 UTC m=+0.041248859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:08:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd1e17aac48233f8549ffb8e239ea7c69ddcd308e56f05300cd677a3225dca5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd1e17aac48233f8549ffb8e239ea7c69ddcd308e56f05300cd677a3225dca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd1e17aac48233f8549ffb8e239ea7c69ddcd308e56f05300cd677a3225dca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd1e17aac48233f8549ffb8e239ea7c69ddcd308e56f05300cd677a3225dca5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:38 compute-0 podman[236409]: 2025-11-24 20:08:38.996532568 +0000 UTC m=+0.198986402 container init 0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:08:39 compute-0 podman[236409]: 2025-11-24 20:08:39.008772004 +0000 UTC m=+0.211225838 container start 0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:08:39 compute-0 podman[236409]: 2025-11-24 20:08:39.012979496 +0000 UTC m=+0.215433290 container attach 0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:08:39 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 20:08:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:39.124+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:39.213+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:39 compute-0 sudo[236505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmtwhhobrjuyizcklkolbhoubltdlaft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014918.6471746-502-46535880071668/AnsiballZ_container_config_data.py'
Nov 24 20:08:39 compute-0 sudo[236505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:39 compute-0 python3.9[236507]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 20:08:39 compute-0 sudo[236505]: pam_unix(sudo:session): session closed for user root
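
The container_config_data call above gathers every *.json startup config under the multipathd directory. Roughly, the collection step amounts to the following sketch (the real module also applies config_overrides, empty in this invocation):

    import glob
    import json
    import os

    def collect_config_data(config_path: str, pattern: str = "*.json") -> dict:
        # Load each matching startup-config file, keyed by filename --
        # an approximation of what container_config_data returns.
        data = {}
        for path in sorted(glob.glob(os.path.join(config_path, pattern))):
            with open(path) as f:
                data[os.path.basename(path)] = json.load(f)
        return data

    # collect_config_data("/var/lib/edpm-config/container-startup-config/multipathd")
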
Nov 24 20:08:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 20:08:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:39 compute-0 zealous_mclean[236426]: {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:     "0": [
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:         {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "devices": [
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "/dev/loop3"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             ],
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_name": "ceph_lv0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_size": "21470642176",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "name": "ceph_lv0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "tags": {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cluster_name": "ceph",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.crush_device_class": "",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.encrypted": "0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osd_id": "0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.type": "block",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.vdo": "0"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             },
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "type": "block",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "vg_name": "ceph_vg0"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:         }
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:     ],
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:     "1": [
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:         {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "devices": [
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "/dev/loop4"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             ],
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_name": "ceph_lv1",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_size": "21470642176",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "name": "ceph_lv1",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "tags": {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cluster_name": "ceph",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.crush_device_class": "",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.encrypted": "0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osd_id": "1",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.type": "block",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.vdo": "0"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             },
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "type": "block",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "vg_name": "ceph_vg1"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:         }
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:     ],
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:     "2": [
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:         {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "devices": [
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "/dev/loop5"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             ],
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_name": "ceph_lv2",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_size": "21470642176",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "name": "ceph_lv2",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "tags": {
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.cluster_name": "ceph",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.crush_device_class": "",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.encrypted": "0",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osd_id": "2",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.type": "block",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:                 "ceph.vdo": "0"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             },
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "type": "block",
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:             "vg_name": "ceph_vg2"
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:         }
Nov 24 20:08:39 compute-0 zealous_mclean[236426]:     ]
Nov 24 20:08:39 compute-0 zealous_mclean[236426]: }
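
The JSON above is the full ceph-volume lvm list for this host: three OSDs (ids 0-2), each backed by a single ~20 GiB LV (ceph_vg0-2/ceph_lv0-2) on a loop device. Reducing it to an id-to-device map is straightforward:

    import json

    def osd_layout(lvm_list_json: str) -> dict:
        # Map OSD id -> (LV path, backing devices) from the output of
        # `ceph-volume lvm list --format json`, as printed above.
        layout = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                layout[osd_id] = (lv["lv_path"], lv["devices"])
        return layout

    # -> {'0': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
    #     '1': ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
    #     '2': ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}
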
Nov 24 20:08:39 compute-0 systemd[1]: libpod-0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4.scope: Deactivated successfully.
Nov 24 20:08:39 compute-0 podman[236409]: 2025-11-24 20:08:39.818451906 +0000 UTC m=+1.020905720 container died 0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:08:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bd1e17aac48233f8549ffb8e239ea7c69ddcd308e56f05300cd677a3225dca5-merged.mount: Deactivated successfully.
Nov 24 20:08:39 compute-0 podman[236409]: 2025-11-24 20:08:39.889238082 +0000 UTC m=+1.091691916 container remove 0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:08:39 compute-0 systemd[1]: libpod-conmon-0beba37cd21aaec6a06f3812557d79b76bea752dfa07260924736a5144b614d4.scope: Deactivated successfully.
Nov 24 20:08:39 compute-0 sudo[236180]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:40 compute-0 sudo[236602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:40 compute-0 sudo[236602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:40 compute-0 sudo[236602]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:40.093+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:40 compute-0 sudo[236627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:08:40 compute-0 sudo[236627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:40 compute-0 sudo[236627]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:40 compute-0 sudo[236675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:40 compute-0 sudo[236675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:40 compute-0 sudo[236675]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:40.202+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:40 compute-0 sudo[236724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:08:40 compute-0 sudo[236724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:40 compute-0 sudo[236773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzjbirkfmqtuspjwmqibnlttsmvtsqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014919.7637377-511-135816329326171/AnsiballZ_container_config_hash.py'
Nov 24 20:08:40 compute-0 sudo[236773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:08:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 20:08:40 compute-0 python3.9[236777]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 20:08:40 compute-0 sudo[236773]: pam_unix(sudo:session): session closed for user root
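
container_config_hash condenses the rendered config under config_vol_prefix into per-service hashes so later steps can decide whether a container needs restarting. A plausible sketch of the idea, not the module's actual algorithm, assuming a stable walk over the config tree:

    import hashlib
    import os

    def config_hash(root: str) -> str:
        # Digest every file under the config volume in a stable order, so
        # any content change yields a new hash (and hence a restart).
        h = hashlib.sha256()
        for dirpath, _, files in sorted(os.walk(root)):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h.update(path.encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    # config_hash("/var/lib/config-data")  # the config_vol_prefix above
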
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.691140657 +0000 UTC m=+0.054800331 container create 62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:08:40 compute-0 systemd[1]: Started libpod-conmon-62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b.scope.
Nov 24 20:08:40 compute-0 ceph-mon[75677]: pgmap v749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.665450512 +0000 UTC m=+0.029110226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:08:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.798924729 +0000 UTC m=+0.162584453 container init 62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.810752934 +0000 UTC m=+0.174412598 container start 62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.815535691 +0000 UTC m=+0.179195365 container attach 62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:08:40 compute-0 confident_mclean[236860]: 167 167
Nov 24 20:08:40 compute-0 systemd[1]: libpod-62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b.scope: Deactivated successfully.
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.819325092 +0000 UTC m=+0.182984786 container died 62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e10d897246ea7e9b283c1468ec3490623d4e58db29f099b21c7ebedb7c887dd-merged.mount: Deactivated successfully.
Nov 24 20:08:40 compute-0 podman[236844]: 2025-11-24 20:08:40.864924708 +0000 UTC m=+0.228584372 container remove 62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:08:40 compute-0 systemd[1]: libpod-conmon-62708f5ccccbc9d29f41dafceb03a0f3bbf2d9edf6c3795bbc777c88caa8573b.scope: Deactivated successfully.
Nov 24 20:08:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:41.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:41 compute-0 podman[236935]: 2025-11-24 20:08:41.093098366 +0000 UTC m=+0.060768400 container create 9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:08:41 compute-0 systemd[1]: Started libpod-conmon-9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce.scope.
Nov 24 20:08:41 compute-0 podman[236935]: 2025-11-24 20:08:41.06921268 +0000 UTC m=+0.036882754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:08:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364aec36ec718d66f109a927d80d447dfc64a53a06498da4a073050747439da3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364aec36ec718d66f109a927d80d447dfc64a53a06498da4a073050747439da3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364aec36ec718d66f109a927d80d447dfc64a53a06498da4a073050747439da3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364aec36ec718d66f109a927d80d447dfc64a53a06498da4a073050747439da3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:41.192+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:41 compute-0 podman[236935]: 2025-11-24 20:08:41.198777212 +0000 UTC m=+0.166447206 container init 9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:08:41 compute-0 podman[236935]: 2025-11-24 20:08:41.211812149 +0000 UTC m=+0.179482143 container start 9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:08:41 compute-0 podman[236935]: 2025-11-24 20:08:41.214981164 +0000 UTC m=+0.182651148 container attach 9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
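A podman container passes through create, init, start, and attach events, and each event line carries a wall-clock timestamp plus a Go monotonic offset ("m=+...") measured from podman's own process start. The offsets, not the journal order, give the true sequence: the image-pull event above (m=+0.036) actually preceded the create (m=+0.060) and was merely flushed to the journal later. A small helper to compare events by offset, assuming only the token format shown here:

    import re

    # "m=+0.060768400" is podman's monotonic-clock offset for the event.
    MONO = re.compile(r"m=\+(?P<secs>\d+\.\d+)")

    def monotonic_offset(event_line: str) -> float:
        """Seconds since the podman process started, or raise if absent."""
        m = MONO.search(event_line)
        if m is None:
            raise ValueError("no monotonic offset in line")
        return float(m.group("secs"))

    # start (m=+0.179) minus create (m=+0.060) ~= 0.12 s of container setup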
Nov 24 20:08:41 compute-0 sudo[237029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcupsyraosvodotdtachnwvctutfomxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014920.8784225-520-41885741406453/AnsiballZ_podman_container_info.py'
Nov 24 20:08:41 compute-0 sudo[237029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:41 compute-0 python3.9[237031]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 20:08:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:41 compute-0 sudo[237029]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:42.077+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:42.155+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:42 compute-0 boring_saha[236951]: {
Nov 24 20:08:42 compute-0 boring_saha[236951]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "osd_id": 2,
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "type": "bluestore"
Nov 24 20:08:42 compute-0 boring_saha[236951]:     },
Nov 24 20:08:42 compute-0 boring_saha[236951]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "osd_id": 1,
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "type": "bluestore"
Nov 24 20:08:42 compute-0 boring_saha[236951]:     },
Nov 24 20:08:42 compute-0 boring_saha[236951]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "osd_id": 0,
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:08:42 compute-0 boring_saha[236951]:         "type": "bluestore"
Nov 24 20:08:42 compute-0 boring_saha[236951]:     }
Nov 24 20:08:42 compute-0 boring_saha[236951]: }
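The short-lived boring_saha container (created at 20:08:41, dead and removed a second later) exists only to print this JSON: a per-OSD map keyed by osd_uuid, in the shape produced by ceph-volume raw list, which cephadm uses to refresh its device inventory. Consuming it is a one-liner; a sketch assuming the exact shape above (the function name is illustrative):

    import json

    def osd_devices(raw_list_json: str) -> dict:
        """Map osd_id -> backing device from a `ceph-volume raw list`-style dump."""
        return {
            entry["osd_id"]: entry["device"]
            for entry in json.loads(raw_list_json).values()
        }

    # -> {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #     1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #     0: '/dev/mapper/ceph_vg0-ceph_lv0'}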
Nov 24 20:08:42 compute-0 systemd[1]: libpod-9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce.scope: Deactivated successfully.
Nov 24 20:08:42 compute-0 podman[236935]: 2025-11-24 20:08:42.231563908 +0000 UTC m=+1.199233942 container died 9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:08:42 compute-0 systemd[1]: libpod-9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce.scope: Consumed 1.029s CPU time.
Nov 24 20:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-364aec36ec718d66f109a927d80d447dfc64a53a06498da4a073050747439da3-merged.mount: Deactivated successfully.
Nov 24 20:08:42 compute-0 podman[236935]: 2025-11-24 20:08:42.297912716 +0000 UTC m=+1.265582750 container remove 9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:08:42 compute-0 systemd[1]: libpod-conmon-9f54f44c221cd09ca1d196845626dbd1a1f71cda66436fad12604af3c9ad52ce.scope: Deactivated successfully.
Nov 24 20:08:42 compute-0 sudo[236724]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:08:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:08:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
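Having gathered that inventory, the mgr's cephadm module persists it in the monitors' config-key store (the keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0), which is what these two handle_command/audit pairs record. The cached value can be read back with the stock CLI; a sketch assuming the stored value is JSON, which is how cephadm serializes its host cache:

    import json
    import subprocess

    def cephadm_host_devices(host: str) -> dict:
        """Fetch the device inventory cephadm cached for `host` (key seen above)."""
        out = subprocess.run(
            ["ceph", "config-key", "get", f"mgr/cephadm/host.{host}.devices.0"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)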
Nov 24 20:08:42 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 60da09df-852f-4001-b862-f2ee65e83001 does not exist
Nov 24 20:08:42 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b432faf3-d548-4153-910f-e6a25cf637fb does not exist
Nov 24 20:08:42 compute-0 sudo[237125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:08:42 compute-0 sudo[237125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:42 compute-0 sudo[237125]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:42 compute-0 sudo[237150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:08:42 compute-0 sudo[237150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:08:42 compute-0 sudo[237150]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:42 compute-0 ceph-mon[75677]: pgmap v750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:08:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
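The pgmap digests give the cluster-wide picture: 305 placement groups, of which 2 are active+clean+laggy (matching the two OSDs stuck on slow ops) and 303 are plain active+clean. The state summary between the colons is a comma-separated "count state" list; a parser for just that fragment, assuming the layout shown (names are illustrative):

    import re

    PGMAP = re.compile(r"pgmap v\d+: \d+ pgs: (?P<states>[^;]+);")

    def pg_states(line: str) -> dict:
        """Parse '2 active+clean+laggy, 303 active+clean' into {state: count}."""
        m = PGMAP.search(line)
        states = {}
        for part in m.group("states").split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
        return states

    # -> {'active+clean+laggy': 2, 'active+clean': 303}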
Nov 24 20:08:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:43.102+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:43 compute-0 sudo[237300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjghapavvsbpnlodiwijouygppsmzkll ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764014922.5852058-533-79781581468542/AnsiballZ_edpm_container_manage.py'
Nov 24 20:08:43 compute-0 sudo[237300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:43.150+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:43 compute-0 python3[237302]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 20:08:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:43 compute-0 podman[237330]: 2025-11-24 20:08:43.855468484 +0000 UTC m=+0.080616139 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 20:08:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:44.083+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:44.146+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:44 compute-0 podman[237317]: 2025-11-24 20:08:44.482052008 +0000 UTC m=+1.031126523 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 24 20:08:44 compute-0 podman[237392]: 2025-11-24 20:08:44.672840441 +0000 UTC m=+0.084538033 container create 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 20:08:44 compute-0 podman[237392]: 2025-11-24 20:08:44.627527404 +0000 UTC m=+0.039224986 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 24 20:08:44 compute-0 python3[237302]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
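The PODMAN-CONTAINER-DEBUG line is the module echoing the full podman create it assembled from the multipathd JSON config: config_data is attached verbatim as a label (hence the length), 'net' becomes --network, 'privileged' becomes --privileged=True, and each volume entry becomes a --volume flag, with the image digest last. (The pull event at m=+0.039 flushing after the create at m=+0.084 is the same journald arrival-order effect noted earlier.) A rough reconstruction of that mapping, not the module's actual code:

    def podman_create_args(name: str, conf: dict) -> list:
        """Approximate the argv edpm_container_manage builds from config_data."""
        args = ["podman", "create", "--name", name]
        for key, value in conf.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "net" in conf:
            args += ["--network", conf["net"]]
        if conf.get("privileged"):
            args += ["--privileged=True"]
        for volume in conf.get("volumes", []):
            args += ["--volume", volume]
        args.append(conf["image"])
        return args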
Nov 24 20:08:44 compute-0 ceph-mon[75677]: pgmap v751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:44 compute-0 sudo[237300]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:45.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:45.161+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:45 compute-0 sudo[237580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcetwiheooabwzvfbdijjlcfkbzwoae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014925.069496-541-260403487240369/AnsiballZ_stat.py'
Nov 24 20:08:45 compute-0 sudo[237580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:45 compute-0 python3.9[237582]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:08:45 compute-0 sudo[237580]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:46.076+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:46.142+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:46 compute-0 sudo[237734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzeorzanirperbggnluzjewmincszwwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014926.0553722-550-191295693755528/AnsiballZ_file.py'
Nov 24 20:08:46 compute-0 sudo[237734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:46 compute-0 python3.9[237736]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
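Here the monitor folds the per-OSD reports into a single health check: 19 ops on osd.1 plus 1 on osd.0 gives the 20 in SLOW_OPS, and 1042 s is the age of the oldest blocked op. The update line itself is easy to machine-check; a self-contained example against the exact text above:

    import re

    HEALTH = re.compile(
        r"Health check update: (?P<total>\d+) slow ops, oldest one blocked for "
        r"(?P<age>\d+) sec, daemons \[(?P<daemons>[^\]]+)\] have slow ops"
    )

    m = HEALTH.search(
        "Health check update: 20 slow ops, oldest one blocked for 1042 sec, "
        "daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)"
    )
    assert m is not None
    assert int(m.group("total")) == 20            # 19 (osd.1) + 1 (osd.0)
    assert m.group("daemons").split(",") == ["osd.0", "osd.1"]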
Nov 24 20:08:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:46 compute-0 sudo[237734]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:47.112+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:47.139+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:47 compute-0 ceph-mon[75677]: pgmap v752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:47 compute-0 sudo[237810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oserjhkmyyhuqevbfbyqqyvmnpkwjcjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014926.0553722-550-191295693755528/AnsiballZ_stat.py'
Nov 24 20:08:47 compute-0 sudo[237810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:47 compute-0 python3.9[237812]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:08:47 compute-0 sudo[237810]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:48.128+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:48.134+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:48 compute-0 sudo[237961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlxwliffeyugousalundspwwotkfdwlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014927.615985-550-197939316434439/AnsiballZ_copy.py'
Nov 24 20:08:48 compute-0 sudo[237961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:48 compute-0 ceph-mon[75677]: pgmap v753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:48 compute-0 python3.9[237963]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764014927.615985-550-197939316434439/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:48 compute-0 sudo[237961]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:48 compute-0 sudo[238037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdgdsjyolyzxocjqzpcnosesqpcyjbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014927.615985-550-197939316434439/AnsiballZ_systemd.py'
Nov 24 20:08:48 compute-0 sudo[238037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:49 compute-0 python3.9[238039]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:08:49 compute-0 systemd[1]: Reloading.
Nov 24 20:08:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:49.105+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:49.117+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:49 compute-0 systemd-rc-local-generator[238068]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:08:49 compute-0 systemd-sysv-generator[238071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:08:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:49 compute-0 sudo[238037]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:49 compute-0 sudo[238149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svrqlvvnghyvclzgkjfanulrhzoenfqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014927.615985-550-197939316434439/AnsiballZ_systemd.py'
Nov 24 20:08:49 compute-0 sudo[238149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:50.097+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:50 compute-0 python3.9[238151]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:08:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:50.149+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:50 compute-0 systemd[1]: Reloading.
Nov 24 20:08:50 compute-0 systemd-rc-local-generator[238177]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:08:50 compute-0 systemd-sysv-generator[238181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
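Each "systemd[1]: Reloading." is a daemon-reload (the first requested explicitly by ansible-systemd with daemon_reload=True, the second issued while enabling and restarting edpm_multipathd.service), and the rc.local and SysV-generator notices re-fire every time the unit generators run; they are routine, not regressions. Reduced to shell-outs, the second ansible call amounts to this sketch (a minimal equivalent, not the module itself):

    import subprocess

    def enable_and_restart(unit: str) -> None:
        """Minimal equivalent of ansible-systemd state=restarted enabled=True."""
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)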
Nov 24 20:08:50 compute-0 ceph-mon[75677]: pgmap v754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:50 compute-0 systemd[1]: Starting multipathd container...
Nov 24 20:08:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123541f87d943d820d7f7fa8a4164c4788568fdea413955942ac93caf2a5b17/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123541f87d943d820d7f7fa8a4164c4788568fdea413955942ac93caf2a5b17/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf.
Nov 24 20:08:50 compute-0 podman[238191]: 2025-11-24 20:08:50.777219099 +0000 UTC m=+0.204774647 container init 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:08:50 compute-0 multipathd[238207]: + sudo -E kolla_set_configs
Nov 24 20:08:50 compute-0 podman[238191]: 2025-11-24 20:08:50.821307534 +0000 UTC m=+0.248863032 container start 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:08:50 compute-0 podman[238191]: multipathd
Nov 24 20:08:50 compute-0 systemd[1]: Started multipathd container.
Nov 24 20:08:50 compute-0 sudo[238213]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 20:08:50 compute-0 sudo[238213]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 20:08:50 compute-0 sudo[238213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 20:08:50 compute-0 sudo[238149]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:50 compute-0 multipathd[238207]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 20:08:50 compute-0 multipathd[238207]: INFO:__main__:Validating config file
Nov 24 20:08:50 compute-0 multipathd[238207]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 20:08:50 compute-0 multipathd[238207]: INFO:__main__:Writing out command to execute
Nov 24 20:08:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:50 compute-0 podman[238214]: 2025-11-24 20:08:50.910781228 +0000 UTC m=+0.079859539 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 20:08:50 compute-0 sudo[238213]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:50 compute-0 multipathd[238207]: ++ cat /run_command
Nov 24 20:08:50 compute-0 systemd[1]: 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf-2f01421d85a1026b.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 20:08:50 compute-0 systemd[1]: 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf-2f01421d85a1026b.service: Failed with result 'exit-code'.
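The unit named <container-id>-2f01421d85a1026b.service is the transient service podman spawns for a single healthcheck run; it exits 1 here because the check fired while multipathd was still starting, which matches the health_status=starting, health_failing_streak=1 event just above (contrast the healthy, streak-0 event for ovn_metadata_agent earlier). The recorded status can be read back from container state; a sketch that tolerates the field-name difference between podman versions ("Health" vs "Healthcheck" is an assumption to cover both layouts):

    import json
    import subprocess

    def container_health(name: str) -> str:
        """Return the health status podman records for `name`."""
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")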
Nov 24 20:08:50 compute-0 multipathd[238207]: + CMD='/usr/sbin/multipathd -d'
Nov 24 20:08:50 compute-0 multipathd[238207]: + ARGS=
Nov 24 20:08:50 compute-0 multipathd[238207]: + sudo kolla_copy_cacerts
Nov 24 20:08:50 compute-0 sudo[238255]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 20:08:50 compute-0 sudo[238255]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 20:08:50 compute-0 sudo[238255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 20:08:50 compute-0 sudo[238255]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:50 compute-0 multipathd[238207]: + [[ ! -n '' ]]
Nov 24 20:08:50 compute-0 multipathd[238207]: + . kolla_extend_start
Nov 24 20:08:50 compute-0 multipathd[238207]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 20:08:50 compute-0 multipathd[238207]: Running command: '/usr/sbin/multipathd -d'
Nov 24 20:08:50 compute-0 multipathd[238207]: + umask 0022
Nov 24 20:08:50 compute-0 multipathd[238207]: + exec /usr/sbin/multipathd -d
Nov 24 20:08:50 compute-0 multipathd[238207]: 3643.674137 | --------start up--------
Nov 24 20:08:50 compute-0 multipathd[238207]: 3643.674158 | read /etc/multipath.conf
Nov 24 20:08:50 compute-0 multipathd[238207]: 3643.680438 | path checkers start up
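The "+" lines are set -x tracing from the kolla entrypoint: kolla_set_configs copies files per /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS), the real command is read from /run_command, kolla_copy_cacerts installs the CA bundle, and the script finally execs /usr/sbin/multipathd -d, at which point multipathd's own startup messages take over. Condensed into Python purely as a reading aid (the real entrypoint is a shell script):

    import os
    import subprocess

    def kolla_start_sketch() -> None:
        """Condensed flow of the traced entrypoint above, not the real script."""
        subprocess.run(["sudo", "-E", "kolla_set_configs"], check=True)  # copy configs
        with open("/run_command") as f:
            cmd = f.read().strip()                 # '/usr/sbin/multipathd -d'
        subprocess.run(["sudo", "kolla_copy_cacerts"], check=True)       # CA bundle
        os.execvp(cmd.split()[0], cmd.split())     # replace the shell with the daemon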
Nov 24 20:08:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:51.091+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:51.118+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:51 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 20:08:51 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 20:08:51 compute-0 python3.9[238396]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:08:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:52.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:52.114+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:52 compute-0 sudo[238550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbvztmdklylqlsdjkbhmwettbswjkql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014931.9303317-586-32256444739112/AnsiballZ_command.py'
Nov 24 20:08:52 compute-0 sudo[238550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:52 compute-0 ceph-mon[75677]: pgmap v755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:52 compute-0 python3.9[238552]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:08:52 compute-0 sudo[238550]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:53.087+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:53.100+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:53 compute-0 sudo[238714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhqysphniwbxsajzjfghkkuqygcmekfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014932.765934-594-155186772632697/AnsiballZ_systemd.py'
Nov 24 20:08:53 compute-0 sudo[238714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:53 compute-0 python3.9[238716]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:08:53 compute-0 systemd[1]: Stopping multipathd container...
Nov 24 20:08:53 compute-0 multipathd[238207]: 3646.305735 | exit (signal)
Nov 24 20:08:53 compute-0 multipathd[238207]: 3646.306566 | --------shut down-------
Nov 24 20:08:53 compute-0 systemd[1]: libpod-088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf.scope: Deactivated successfully.
Nov 24 20:08:53 compute-0 conmon[238207]: conmon 088b3a7a6268400f9c19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf.scope/container/memory.events
Nov 24 20:08:53 compute-0 podman[238720]: 2025-11-24 20:08:53.631136376 +0000 UTC m=+0.082298824 container died 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:08:53 compute-0 systemd[1]: 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf-2f01421d85a1026b.timer: Deactivated successfully.
Nov 24 20:08:53 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf.
Nov 24 20:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf-userdata-shm.mount: Deactivated successfully.
Nov 24 20:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c123541f87d943d820d7f7fa8a4164c4788568fdea413955942ac93caf2a5b17-merged.mount: Deactivated successfully.
Nov 24 20:08:53 compute-0 podman[238720]: 2025-11-24 20:08:53.69321506 +0000 UTC m=+0.144377538 container cleanup 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:08:53 compute-0 podman[238720]: multipathd
Nov 24 20:08:53 compute-0 podman[238750]: multipathd
Nov 24 20:08:53 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 20:08:53 compute-0 systemd[1]: Stopped multipathd container.
Nov 24 20:08:53 compute-0 systemd[1]: Starting multipathd container...
Nov 24 20:08:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123541f87d943d820d7f7fa8a4164c4788568fdea413955942ac93caf2a5b17/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123541f87d943d820d7f7fa8a4164c4788568fdea413955942ac93caf2a5b17/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 20:08:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf.
Nov 24 20:08:54 compute-0 podman[238763]: 2025-11-24 20:08:54.009810175 +0000 UTC m=+0.176484373 container init 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:08:54 compute-0 multipathd[238779]: + sudo -E kolla_set_configs
Nov 24 20:08:54 compute-0 sudo[238785]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 20:08:54 compute-0 podman[238763]: 2025-11-24 20:08:54.043823841 +0000 UTC m=+0.210498019 container start 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:08:54 compute-0 sudo[238785]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 20:08:54 compute-0 sudo[238785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 20:08:54 compute-0 multipathd[238779]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 20:08:54 compute-0 multipathd[238779]: INFO:__main__:Validating config file
Nov 24 20:08:54 compute-0 multipathd[238779]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 20:08:54 compute-0 multipathd[238779]: INFO:__main__:Writing out command to execute
Nov 24 20:08:54 compute-0 sudo[238785]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:54 compute-0 multipathd[238779]: ++ cat /run_command
Nov 24 20:08:54 compute-0 multipathd[238779]: + CMD='/usr/sbin/multipathd -d'
Nov 24 20:08:54 compute-0 multipathd[238779]: + ARGS=
Nov 24 20:08:54 compute-0 multipathd[238779]: + sudo kolla_copy_cacerts
Nov 24 20:08:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:54.110+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:54.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:54 compute-0 podman[238763]: multipathd
Nov 24 20:08:54 compute-0 sudo[238800]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 20:08:54 compute-0 systemd[1]: Started multipathd container.
Nov 24 20:08:54 compute-0 sudo[238800]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 20:08:54 compute-0 sudo[238800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 20:08:54 compute-0 sudo[238800]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:54 compute-0 multipathd[238779]: + [[ ! -n '' ]]
Nov 24 20:08:54 compute-0 multipathd[238779]: + . kolla_extend_start
Nov 24 20:08:54 compute-0 multipathd[238779]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 20:08:54 compute-0 multipathd[238779]: Running command: '/usr/sbin/multipathd -d'
Nov 24 20:08:54 compute-0 multipathd[238779]: + umask 0022
Nov 24 20:08:54 compute-0 multipathd[238779]: + exec /usr/sbin/multipathd -d
Nov 24 20:08:54 compute-0 multipathd[238779]: 3646.865155 | --------start up--------
Nov 24 20:08:54 compute-0 multipathd[238779]: 3646.865177 | read /etc/multipath.conf
Nov 24 20:08:54 compute-0 sudo[238714]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:54 compute-0 multipathd[238779]: 3646.873543 | path checkers start up
Nov 24 20:08:54 compute-0 podman[238786]: 2025-11-24 20:08:54.188691301 +0000 UTC m=+0.132811910 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:08:54 compute-0 ceph-mon[75677]: pgmap v756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:08:54 compute-0 sudo[238968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imepjlrffpdwiqckjnmedjdpdxelrefb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014934.3846192-602-128358198860841/AnsiballZ_file.py'
Nov 24 20:08:54 compute-0 sudo[238968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:54 compute-0 python3.9[238970]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:54 compute-0 sudo[238968]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:55.116+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:55.131+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:55 compute-0 sudo[239120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcvejgvzonflkhscrnerzxxulqzmmtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014935.4231145-614-167276427407525/AnsiballZ_file.py'
Nov 24 20:08:55 compute-0 sudo[239120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:55 compute-0 python3.9[239122]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 20:08:55 compute-0 sudo[239120]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:56.085+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:56.135+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:56 compute-0 ceph-mon[75677]: pgmap v757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:56 compute-0 sudo[239272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nexiegrikrxfbmqxqaxunmckjxqjqkrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014936.2367086-622-242480424580039/AnsiballZ_modprobe.py'
Nov 24 20:08:56 compute-0 sudo[239272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:08:56 compute-0 python3.9[239274]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 20:08:56 compute-0 kernel: Key type psk registered
Nov 24 20:08:56 compute-0 sudo[239272]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:57.092+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:57.100+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:57 compute-0 sudo[239434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chupwikjzuidraqbbwhffmrsszybbvpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014937.109959-630-180291686156143/AnsiballZ_stat.py'
Nov 24 20:08:57 compute-0 sudo[239434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:57 compute-0 python3.9[239436]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:08:57 compute-0 sudo[239434]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:58.080+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:58 compute-0 sudo[239557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtrdljamzbehdhnfmlpvzgoquyhvxit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014937.109959-630-180291686156143/AnsiballZ_copy.py'
Nov 24 20:08:58 compute-0 sudo[239557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:58.146+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:58 compute-0 python3.9[239559]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764014937.109959-630-180291686156143/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:58 compute-0 sudo[239557]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:58 compute-0 ceph-mon[75677]: pgmap v758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:08:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:08:58 compute-0 sudo[239709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxifnupocqckvlucvijopxlfnekftyda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014938.5845351-646-68986133131284/AnsiballZ_lineinfile.py'
Nov 24 20:08:58 compute-0 sudo[239709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:08:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:08:59.096+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:08:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:08:59.147+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:08:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:59 compute-0 python3.9[239711]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:08:59 compute-0 sudo[239709]: pam_unix(sudo:session): session closed for user root
Nov 24 20:08:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:08:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:08:59 compute-0 sudo[239861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyoarejgnikjzsxdibvybpmwxxjatzlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014939.5260475-654-198446433365978/AnsiballZ_systemd.py'
Nov 24 20:08:59 compute-0 sudo[239861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:00.095+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:00 compute-0 podman[239863]: 2025-11-24 20:09:00.156639374 +0000 UTC m=+0.177535221 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:09:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:00.169+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:00 compute-0 python3.9[239864]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 20:09:00 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 20:09:00 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 20:09:00 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 20:09:00 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 20:09:00 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 20:09:00 compute-0 sudo[239861]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:00 compute-0 ceph-mon[75677]: pgmap v759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:01 compute-0 sudo[240040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcwdxavkxbfosgnvayjirdmpdhcgumd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014940.6774428-662-70288406257174/AnsiballZ_dnf.py'
Nov 24 20:09:01 compute-0 sudo[240040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:01.098+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:01.195+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:01 compute-0 python3.9[240042]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 20:09:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:02.128+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:02.147+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:02 compute-0 ceph-mon[75677]: pgmap v760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:03.146+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:03.147+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:04.183+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:04.185+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:04 compute-0 systemd[1]: Reloading.
Nov 24 20:09:04 compute-0 systemd-rc-local-generator[240069]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:09:04 compute-0 systemd-sysv-generator[240078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:09:04 compute-0 systemd[1]: Reloading.
Nov 24 20:09:04 compute-0 systemd-sysv-generator[240113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:09:04 compute-0 systemd-rc-local-generator[240106]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:09:04 compute-0 ceph-mon[75677]: pgmap v761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:05 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 20:09:05 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 20:09:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:05.167+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:05 compute-0 lvm[240159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 20:09:05 compute-0 lvm[240159]: VG ceph_vg2 finished
Nov 24 20:09:05 compute-0 lvm[240157]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 20:09:05 compute-0 lvm[240157]: VG ceph_vg1 finished
Nov 24 20:09:05 compute-0 lvm[240158]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 20:09:05 compute-0 lvm[240158]: VG ceph_vg0 finished
Nov 24 20:09:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:05.234+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 20:09:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 20:09:05 compute-0 systemd[1]: Reloading.
Nov 24 20:09:05 compute-0 systemd-rc-local-generator[240211]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:09:05 compute-0 systemd-sysv-generator[240215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:09:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 20:09:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:06.211+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:06.218+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:06 compute-0 sudo[240040]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1062 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:06 compute-0 sudo[241459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyafkhyulzceqrcljpnnunyhcvlooejq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014946.4645402-670-120135050889002/AnsiballZ_systemd_service.py'
Nov 24 20:09:06 compute-0 sudo[241459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:06 compute-0 ceph-mon[75677]: pgmap v762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:06 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1062 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
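The monitor's SLOW_OPS health check is the aggregate of those per-OSD reports: 20 ops across osd.0 and osd.1, the oldest blocked for 1062 sec at this point. The same summary can be read programmatically from the cluster; the JSON key layout below matches recent Ceph releases and should be treated as an assumption elsewhere:

    import json
    import subprocess

    # 'ceph health detail --format json' returns the active health checks.
    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        # 'summary.message' is assumed to carry the same text as the log line.
        print("SLOW_OPS:", slow.get("summary", {}).get("message"))
    else:
        print("no slow ops reported")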
Nov 24 20:09:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 20:09:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 20:09:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.902s CPU time.
Nov 24 20:09:07 compute-0 systemd[1]: run-ra5483769e4214633bccfdb9aa1f2d704.service: Deactivated successfully.
Nov 24 20:09:07 compute-0 python3.9[241485]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
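This systemd_service task (name=iscsid, state=restarted) is what produces the stop/start pair logged just below. Functionally the module reduces to systemctl calls; a rough sketch of that behavior under the logged parameters, not the module's actual code:

    import subprocess

    def restart_unit(name: str, daemon_reload: bool = False) -> None:
        """Approximate ansible.builtin.systemd_service with state=restarted."""
        if daemon_reload:                     # daemon_reload=False in the log
            subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", name], check=True)

    restart_unit("iscsid")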
Nov 24 20:09:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:07.184+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:07 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 24 20:09:07 compute-0 iscsid[228801]: iscsid shutting down.
Nov 24 20:09:07 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 20:09:07 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 24 20:09:07 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 20:09:07 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 20:09:07 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 20:09:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:07.250+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:07 compute-0 sudo[241459]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:07 compute-0 ceph-mon[75677]: pgmap v763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:08 compute-0 sshd-session[240120]: Invalid user admin from 27.79.44.141 port 53106
Nov 24 20:09:08 compute-0 python3.9[241656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 20:09:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:08.228+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:08.262+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:08 compute-0 sshd-session[240120]: Connection closed by invalid user admin 27.79.44.141 port 53106 [preauth]
Nov 24 20:09:08 compute-0 sshd-session[240186]: Invalid user test from 27.79.44.141 port 53116
Nov 24 20:09:08 compute-0 sshd-session[240186]: Connection closed by invalid user test 27.79.44.141 port 53116 [preauth]
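The sshd-session entries are routine Internet background noise: invalid users admin and test probed from 27.79.44.141 and disconnected before authentication. Tallying such probes per source address makes them easy to separate from real login activity; a minimal sketch, with journal.txt standing in for an export of this log:

    import re
    from collections import Counter

    # Matches e.g.: Invalid user admin from 27.79.44.141 port 53106
    INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    def probes_by_ip(lines):
        by_ip = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                by_ip[m.group(2)] += 1
        return by_ip

    with open("journal.txt") as fh:
        for ip, n in probes_by_ip(fh).most_common():
            print(f"{ip}: {n} invalid-user attempts")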
Nov 24 20:09:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:09 compute-0 sudo[241810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmkeiufrsxlkfkymmgcdqtfldpybgtgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014948.7299836-688-267864291321107/AnsiballZ_file.py'
Nov 24 20:09:09 compute-0 sudo[241810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:09.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:09.238+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:09 compute-0 python3.9[241812]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
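The file task above (path=/etc/ssh/ssh_known_hosts, state=touch, mode=0644) amounts to create-if-absent plus a chmod and timestamp update. A rough Python equivalent of that effect, not the module's implementation:

    from pathlib import Path

    p = Path("/etc/ssh/ssh_known_hosts")
    p.touch(exist_ok=True)   # state=touch: create if missing, bump times
    p.chmod(0o644)           # mode=0644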
Nov 24 20:09:09 compute-0 sudo[241810]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:09:09.361 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:09:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:09:09.361 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:09:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:09:09.362 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
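These three ovn_metadata_agent lines are oslo.concurrency's standard lock instrumentation: one message on acquire (with the time spent waiting) and one on release (with the time held). The pattern is easy to reproduce with a plain threading.Lock; the names below are illustrative, not oslo's API:

    import threading
    import time
    from contextlib import contextmanager

    _lock = threading.Lock()

    @contextmanager
    def timed_lock(name: str):
        t0 = time.monotonic()
        with _lock:
            print(f'Lock "{name}" acquired :: waited '
                  f"{time.monotonic() - t0:.3f}s")
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" released :: held '
                      f"{time.monotonic() - t1:.3f}s")

    with timed_lock("_check_child_processes"):
        pass  # the monitored work goes here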
Nov 24 20:09:09 compute-0 ceph-mon[75677]: pgmap v764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:10.151+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:10 compute-0 sudo[241962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnsaacbxkcpdwzvxkszshcfcvxyibnrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014949.7767293-699-185478567865094/AnsiballZ_systemd_service.py'
Nov 24 20:09:10 compute-0 sudo[241962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:10.261+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:10 compute-0 python3.9[241964]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:09:10 compute-0 systemd[1]: Reloading.
Nov 24 20:09:10 compute-0 systemd-sysv-generator[241995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:09:10 compute-0 systemd-rc-local-generator[241990]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:09:10 compute-0 sudo[241962]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:11.123+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:11.286+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:11 compute-0 python3.9[242150]: ansible-ansible.builtin.service_facts Invoked
Nov 24 20:09:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:11 compute-0 network[242167]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 20:09:11 compute-0 network[242168]: 'network-scripts' will be removed from distribution in near future.
Nov 24 20:09:11 compute-0 network[242169]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 20:09:12 compute-0 ceph-mon[75677]: pgmap v765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:12.149+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:12.326+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:13.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:13.371+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:13 compute-0 podman[242188]: 2025-11-24 20:09:13.966201801 +0000 UTC m=+0.066768460 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
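The podman[...] entry is a health_status event for the ovn_metadata_agent container: the periodic /openstack/healthcheck run passed (health_status=healthy, failing streak 0). The same stream can be followed programmatically; podman's events command accepts filters and a JSON format, though the exact JSON field names below are assumptions that vary by podman version:

    import json
    import subprocess

    # Follow container health events as they are emitted, one JSON per line.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        name = ev.get("Name", "?")
        status = ev.get("HealthStatus", "?")   # field name assumed
        if status != "healthy":
            print(f"container {name}: health is {status!r}")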
Nov 24 20:09:14 compute-0 ceph-mon[75677]: pgmap v766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:14.129+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:14.345+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:15.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:15.371+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:16 compute-0 ceph-mon[75677]: pgmap v767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:16.114+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:16.392+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:17.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:17.357+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:18 compute-0 ceph-mon[75677]: pgmap v768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:18.101+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:18.389+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:18 compute-0 sudo[242462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovokyhdvmldaezcgauzcnskfnlxpfxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014957.9927046-718-185124149940376/AnsiballZ_systemd_service.py'
Nov 24 20:09:18 compute-0 sudo[242462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:18 compute-0 python3.9[242464]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:18 compute-0 sudo[242462]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:19.134+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:19.363+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:19 compute-0 sudo[242615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqunwqmnkyeydbffqmsiwenjldxuitys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014959.1473837-718-261937026973389/AnsiballZ_systemd_service.py'
Nov 24 20:09:19 compute-0 sudo[242615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:19 compute-0 python3.9[242617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:20 compute-0 sudo[242615]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:20.132+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:20 compute-0 ceph-mon[75677]: pgmap v769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:20.349+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:20 compute-0 sudo[242768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvlcqjpvpumfiluuqamcdnnaqhlgkso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014960.1688538-718-158592364599601/AnsiballZ_systemd_service.py'
Nov 24 20:09:20 compute-0 sudo[242768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:20 compute-0 python3.9[242770]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:20 compute-0 sudo[242768]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:21.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:21.348+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:21 compute-0 sudo[242921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvqmxaipcuzrelhbqecvntoizovgcrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014961.0424786-718-2076350703524/AnsiballZ_systemd_service.py'
Nov 24 20:09:21 compute-0 sudo[242921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:21 compute-0 python3.9[242923]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:21 compute-0 sudo[242921]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:22.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:22 compute-0 ceph-mon[75677]: pgmap v770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:22.357+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:22 compute-0 sudo[243074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxdgnfwosofovlynagmwvndnedcehha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014962.0128376-718-71642792703916/AnsiballZ_systemd_service.py'
Nov 24 20:09:22 compute-0 sudo[243074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:22 compute-0 python3.9[243076]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:22 compute-0 sudo[243074]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:23.182+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:23.330+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:23 compute-0 sudo[243227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwmslumuoywebwzuevhpazjrjgrsjsvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014962.9455235-718-97041434776980/AnsiballZ_systemd_service.py'
Nov 24 20:09:23 compute-0 sudo[243227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:23 compute-0 python3.9[243229]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:23 compute-0 sudo[243227]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:24.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:24 compute-0 ceph-mon[75677]: pgmap v771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:24.349+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:24 compute-0 sudo[243397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmbrqfigpsgplaoppcfvxqptkpoedha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014963.947017-718-217948366587904/AnsiballZ_systemd_service.py'
Nov 24 20:09:24 compute-0 sudo[243397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:09:24
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
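This balancer pass ran in upmap mode with a 5% max-misplaced budget and prepared 0 of 10 candidate changes, meaning PG placement already satisfied the optimizer. The module's state can be checked on demand; a short sketch, with the output keys assumed from recent releases:

    import json
    import subprocess

    # 'ceph balancer status' reports the mgr balancer module's state.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    print("active:", status.get("active"), "| mode:", status.get("mode"))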
Nov 24 20:09:24 compute-0 podman[243354]: 2025-11-24 20:09:24.37571225 +0000 UTC m=+0.081872532 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:09:24 compute-0 python3.9[243402]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:24 compute-0 sudo[243397]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 24 20:09:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:25.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:25 compute-0 sudo[243553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhspierbpjmhpmksvwvprmdwflqmjycl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014964.8374534-718-254580832766746/AnsiballZ_systemd_service.py'
Nov 24 20:09:25 compute-0 sudo[243553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:25.382+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:25 compute-0 python3.9[243555]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 20:09:25 compute-0 sudo[243553]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:26.207+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:26 compute-0 ceph-mon[75677]: pgmap v772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 24 20:09:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:26 compute-0 sudo[243706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpkycqjwdeumfiahotuktpoafcgkwtwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014965.9282355-777-53406306041667/AnsiballZ_file.py'
Nov 24 20:09:26 compute-0 sudo[243706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:26.406+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:26 compute-0 python3.9[243708]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:26 compute-0 sudo[243706]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1082 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:27 compute-0 sudo[243858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfewblfxfnlexlpmvktswhctxwbscnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014966.7591934-777-73479280974413/AnsiballZ_file.py'
Nov 24 20:09:27 compute-0 sudo[243858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:27.215+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1082 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.262669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014967262750, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2023, "num_deletes": 251, "total_data_size": 2494223, "memory_usage": 2532512, "flush_reason": "Manual Compaction"}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014967278008, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2433177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18544, "largest_seqno": 20566, "table_properties": {"data_size": 2424525, "index_size": 4826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23968, "raw_average_key_size": 22, "raw_value_size": 2404627, "raw_average_value_size": 2208, "num_data_blocks": 213, "num_entries": 1089, "num_filter_entries": 1089, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014827, "oldest_key_time": 1764014827, "file_creation_time": 1764014967, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15377 microseconds, and 6707 cpu microseconds.
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.278055) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2433177 bytes OK
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.278075) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.279957) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.279971) EVENT_LOG_v1 {"time_micros": 1764014967279966, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.279990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2484995, prev total WAL file size 2484995, number of live WAL files 2.
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.280812) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2376KB)], [41(6227KB)]
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014967280859, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8809997, "oldest_snapshot_seqno": -1}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 6234 keys, 7355067 bytes, temperature: kUnknown
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014967323291, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 7355067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7316887, "index_size": 21494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 162316, "raw_average_key_size": 26, "raw_value_size": 7206388, "raw_average_value_size": 1155, "num_data_blocks": 864, "num_entries": 6234, "num_filter_entries": 6234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764014967, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.323486) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 7355067 bytes
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.325029) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.3 rd, 173.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.1 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 6748, records dropped: 514 output_compression: NoCompression
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.325047) EVENT_LOG_v1 {"time_micros": 1764014967325038, "job": 20, "event": "compaction_finished", "compaction_time_micros": 42490, "compaction_time_cpu_micros": 15179, "output_level": 6, "num_output_files": 1, "total_output_size": 7355067, "num_input_records": 6748, "num_output_records": 6234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014967325615, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764014967326825, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.280728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.326869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.326873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.326874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.326876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:09:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:09:27.326878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:09:27 compute-0 python3.9[243860]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:27 compute-0 sudo[243858]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:27.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:27 compute-0 sudo[244010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdeqnbwtmxxeexqviaatuxtgxdyhhds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014967.5927153-777-43868142354893/AnsiballZ_file.py'
Nov 24 20:09:27 compute-0 sudo[244010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:28 compute-0 python3.9[244012]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:28 compute-0 sudo[244010]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:28.223+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:28 compute-0 ceph-mon[75677]: pgmap v773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:28.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:28 compute-0 sudo[244164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whooapeebsmapzeegxdjamvwtupjvwgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014968.3664997-777-196499875598321/AnsiballZ_file.py'
Nov 24 20:09:28 compute-0 sudo[244164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:28 compute-0 python3.9[244166]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:28 compute-0 sudo[244164]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:29.214+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:29.460+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:29 compute-0 sudo[244316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmgahovmwkkibfqixyrcaembnumpmpsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014969.0809863-777-194044649364889/AnsiballZ_file.py'
Nov 24 20:09:29 compute-0 sudo[244316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:29 compute-0 python3.9[244318]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:29 compute-0 sudo[244316]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:30.194+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:30 compute-0 ceph-mon[75677]: pgmap v774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:30 compute-0 sudo[244481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njkrtnkcpcaxkmkjxhgnqmsufzgmuwfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014969.891253-777-11191204316879/AnsiballZ_file.py'
Nov 24 20:09:30 compute-0 sudo[244481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:30 compute-0 podman[244442]: 2025-11-24 20:09:30.394697464 +0000 UTC m=+0.139152979 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:09:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:30.483+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:30 compute-0 python3.9[244490]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:30 compute-0 sudo[244481]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:31 compute-0 sudo[244648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsloghvoxzqugzshfrrfdokigfzhvjhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014970.756709-777-45859799176173/AnsiballZ_file.py'
Nov 24 20:09:31 compute-0 sudo[244648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:31.216+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:31 compute-0 python3.9[244650]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:31 compute-0 sudo[244648]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:31.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1092 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:31 compute-0 sshd-session[244060]: Connection closed by authenticating user root 27.79.44.141 port 40512 [preauth]
Nov 24 20:09:31 compute-0 sudo[244800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmutvjyquwdpiuwfoyrixhbgnsuwpon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014971.5734026-777-62637138441849/AnsiballZ_file.py'
Nov 24 20:09:31 compute-0 sudo[244800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:32.211+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:32 compute-0 python3.9[244802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:32 compute-0 sudo[244800]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:32 compute-0 ceph-mon[75677]: pgmap v775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1092 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:32.461+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:32 compute-0 sudo[244952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxtrcwjunzhmtzyhhccbeeyhitvlcda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014972.4860659-834-177647417102533/AnsiballZ_file.py'
Nov 24 20:09:32 compute-0 sudo[244952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:33 compute-0 python3.9[244954]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:33 compute-0 sudo[244952]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:33.177+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:33.433+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:33 compute-0 sudo[245104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iggyvtgzvrjqmeeuldbmueppiuavgkdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014973.2565768-834-206580956740084/AnsiballZ_file.py'
Nov 24 20:09:33 compute-0 sudo[245104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:33 compute-0 python3.9[245106]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:33 compute-0 sudo[245104]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:34.173+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:34 compute-0 ceph-mon[75677]: pgmap v776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:34 compute-0 sudo[245256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-patkqpmnhlkecjpjydgzwjcooqjsqyrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014974.0386882-834-233145937042775/AnsiballZ_file.py'
Nov 24 20:09:34 compute-0 sudo[245256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:09:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:34.478+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:34 compute-0 python3.9[245258]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:34 compute-0 sudo[245256]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:35 compute-0 sudo[245408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpyoevxjtyudlgbhqgdgxajqoqxstzbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014974.7924523-834-89373777533414/AnsiballZ_file.py'
Nov 24 20:09:35 compute-0 sudo[245408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:35.202+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:35 compute-0 python3.9[245410]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:35 compute-0 sudo[245408]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:35.488+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:35 compute-0 sudo[245560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jirpqjccyepnesadawhoqwoveyjplxrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014975.494789-834-157184837219260/AnsiballZ_file.py'
Nov 24 20:09:35 compute-0 sudo[245560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:36 compute-0 python3.9[245562]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:36 compute-0 sudo[245560]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:36.213+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:36 compute-0 ceph-mon[75677]: pgmap v777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:09:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:36.440+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:36 compute-0 sudo[245712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uimxmzybqmmbkbbjcoszmeyrmkdhlmaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014976.223738-834-266452289402428/AnsiballZ_file.py'
Nov 24 20:09:36 compute-0 sudo[245712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:36 compute-0 python3.9[245714]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:36 compute-0 sudo[245712]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:09:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:37.167+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1097 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:37 compute-0 sudo[245864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzwowkxqnoxpvojlwpppduauckhvmld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014977.027141-834-43134794373610/AnsiballZ_file.py'
Nov 24 20:09:37 compute-0 sudo[245864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:37.430+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:37 compute-0 python3.9[245866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:37 compute-0 sudo[245864]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:38.145+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:38 compute-0 sudo[246016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvtfpjmfygslrkihqbqbkqnhuxvefii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014977.9130297-834-67429552716489/AnsiballZ_file.py'
Nov 24 20:09:38 compute-0 sudo[246016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:38 compute-0 ceph-mon[75677]: pgmap v778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:09:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1097 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:38.462+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:38 compute-0 python3.9[246018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:09:38 compute-0 sudo[246016]: pam_unix(sudo:session): session closed for user root
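[annotation] The three ansible.builtin.file invocations above (tripleo_nova_metadata, tripleo_nova_scheduler, tripleo_nova_vnc_proxy) all follow the same shape: state=absent on a single unit path, no recursion, no forced removal. Boiled down to plain shell as a sketch, not the play's actual implementation:

    # What the repeated file tasks amount to; unit names are the ones in
    # the log. The play performs its own daemon-reload further down, so
    # that step is omitted here.
    for unit in tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        rm -f "/etc/systemd/system/${unit}.service"
    done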
Nov 24 20:09:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:39.180+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:39 compute-0 sudo[246168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexubtvjcalplzuxzreiycpuuulrnuxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014978.8572173-892-176946873938603/AnsiballZ_command.py'
Nov 24 20:09:39 compute-0 sudo[246168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:39 compute-0 python3.9[246170]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
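[annotation] The _raw_params shell above, reformatted as a standalone script for readability (logic verbatim from the log). The guard on the last line matters: systemctl mask places a /dev/null symlink at /etc/systemd/system/certmonger.service and refuses without --force when a real unit file already sits at that path, so the test -f skips masking in that case.

    #!/bin/sh
    # Stop and disable certmonger if it is currently active, then mask it
    # unless a real unit file already occupies the mask path.
    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi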
Nov 24 20:09:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:39.496+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:39 compute-0 sudo[246168]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:40.170+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
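[annotation] The mgr's rbd_support module is reloading its mirror-snapshot schedules for each RBD pool it watches (vms, volumes, backups, images). To see what schedules, if any, are actually configured, one option is the rbd CLI (a sketch, assuming an admin keyring):

    # List configured mirror snapshot schedules across all pools/images.
    rbd mirror snapshot schedule ls --recursive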
Nov 24 20:09:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:40 compute-0 ceph-mon[75677]: pgmap v779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:40.453+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:40 compute-0 python3.9[246322]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
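[annotation] The find module call above scans /var/lib/certmonger/requests for any file type, hidden entries included, without recursing. A rough shell equivalent (sketch):

    # List everything directly under the requests directory, including
    # dotfiles, without descending further.
    find /var/lib/certmonger/requests -mindepth 1 -maxdepth 1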
Nov 24 20:09:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:41.165+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:41 compute-0 sudo[246472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbufpllqhzpltwjjwiscavemfbzcngvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014980.8707771-910-179790583285806/AnsiballZ_systemd_service.py'
Nov 24 20:09:41 compute-0 sudo[246472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:41.482+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:41 compute-0 python3.9[246474]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:09:41 compute-0 systemd[1]: Reloading.
Nov 24 20:09:41 compute-0 systemd-rc-local-generator[246500]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:09:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:41 compute-0 systemd-sysv-generator[246504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:09:41 compute-0 sudo[246472]: pam_unix(sudo:session): session closed for user root
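[annotation] The systemd_service module call above sets only daemon_reload=True, which is what produced the "Reloading." entry: systemd re-runs its generators during the reload, so the rc.local and SysV network warnings come from those generators rather than from the play itself. The shell equivalent is a single command:

    # Equivalent of ansible.builtin.systemd_service with daemon_reload=True.
    systemctl daemon-reload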
Nov 24 20:09:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:42.141+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:42 compute-0 ceph-mon[75677]: pgmap v780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:42.466+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:42 compute-0 sudo[246677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfqyeikdwxxjkpqvsynyvmyypvxsawbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014982.23666-918-77923915253924/AnsiballZ_command.py'
Nov 24 20:09:42 compute-0 sudo[246677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:42 compute-0 sudo[246644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:42 compute-0 sudo[246644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:42 compute-0 sudo[246644]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:42 compute-0 sudo[246687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:09:42 compute-0 sudo[246687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:42 compute-0 sudo[246687]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:42 compute-0 sudo[246712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:42 compute-0 sudo[246712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:42 compute-0 sudo[246712]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:42 compute-0 python3.9[246684]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:42 compute-0 sudo[246677]: pam_unix(sudo:session): session closed for user root
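[annotation] This is the first of several reset-failed calls in this section; systemctl reset-failed clears the "failed" state systemd retains for a unit after its processes are gone, so the retired TripleO services stop showing up in failure listings. Collapsed into one loop as a sketch (service names are the ones that appear in this section):

    # Clear stale failed state for the retired TripleO nova services.
    for svc in tripleo_nova_compute tripleo_nova_migration_target \
               tripleo_nova_api_cron tripleo_nova_api \
               tripleo_nova_conductor tripleo_nova_metadata; do
        /usr/bin/systemctl reset-failed "${svc}.service"
    done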
Nov 24 20:09:42 compute-0 sudo[246737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:09:42 compute-0 sudo[246737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:43.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:43 compute-0 sudo[246932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmezmsopsvfitrxgctcszpustiuwlya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014983.0702844-918-49123530237348/AnsiballZ_command.py'
Nov 24 20:09:43 compute-0 sudo[246932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:43.497+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:43 compute-0 sudo[246737]: pam_unix(sudo:session): session closed for user root
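[annotation] The ceph-admin command that just finished is cephadm's periodic host inventory: the orchestrator copies a cephadm binary under /var/lib/ceph/<fsid>/ and runs it with gather-facts, which emits a JSON blob of host facts (hostname, kernel, NICs, disks, and so on). Run by hand it looks like this (sketch; path and fsid taken from the log, --timeout omitted):

    sudo /bin/python3 \
        /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        gather-facts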
Nov 24 20:09:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:09:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2227be11-7e7d-4c5b-b23c-00e813e208ca does not exist
Nov 24 20:09:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d93390d0-179f-43ef-85a0-1a1c98433757 does not exist
Nov 24 20:09:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 83873902-6844-458d-a622-e6d406b3ca29 does not exist
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:09:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:09:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:09:43 compute-0 sudo[246946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:43 compute-0 sudo[246946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:43 compute-0 sudo[246946]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:43 compute-0 python3.9[246941]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:43 compute-0 sudo[246971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:09:43 compute-0 sudo[246932]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:43 compute-0 sudo[246971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:43 compute-0 sudo[246971]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:43 compute-0 sudo[246997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:43 compute-0 sudo[246997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:43 compute-0 sudo[246997]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:43 compute-0 sudo[247046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:09:43 compute-0 sudo[247046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:44.125+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:44 compute-0 podman[247196]: 2025-11-24 20:09:44.264144862 +0000 UTC m=+0.071344559 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
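[annotation] podman has just logged a periodic health_status=healthy event for ovn_metadata_agent; per the config_data in the same entry, the check runs the /openstack/healthcheck script mounted into the container. The same check can be triggered by hand (sketch):

    # Run the container's configured health check once; exit status 0
    # means healthy.
    podman healthcheck run ovn_metadata_agent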
Nov 24 20:09:44 compute-0 sudo[247258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlhurajeopdfdyfdeyrookzvgrnjnnqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014983.8815198-918-178119018818746/AnsiballZ_command.py'
Nov 24 20:09:44 compute-0 sudo[247258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.328286151 +0000 UTC m=+0.080180343 container create f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.28253501 +0000 UTC m=+0.034429292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:09:44 compute-0 systemd[1]: Started libpod-conmon-f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b.scope.
Nov 24 20:09:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.424235798 +0000 UTC m=+0.176130020 container init f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 20:09:44 compute-0 ceph-mon[75677]: pgmap v781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:09:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.436338511 +0000 UTC m=+0.188232733 container start f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lewin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.440651071 +0000 UTC m=+0.192545293 container attach f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:09:44 compute-0 wizardly_lewin[247269]: 167 167
Nov 24 20:09:44 compute-0 systemd[1]: libpod-f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b.scope: Deactivated successfully.
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.446218704 +0000 UTC m=+0.198112916 container died f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:09:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:44.455+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7846c3a74cab740f08830bf433dbf881e04a821d748c6e662a72336b635250dd-merged.mount: Deactivated successfully.
Nov 24 20:09:44 compute-0 podman[247236]: 2025-11-24 20:09:44.495194815 +0000 UTC m=+0.247089027 container remove f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lewin, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:09:44 compute-0 systemd[1]: libpod-conmon-f8b136dec443a1824deb986a2e24fa65c82a7152a96ac2f5273ec77af757ca2b.scope: Deactivated successfully.
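[annotation] The wizardly_lewin container above lived for well under a second and printed "167 167", which matches the uid/gid of the ceph user inside the image; cephadm spawns this kind of throwaway container to learn the ownership it should apply to data directories on the host. A hedged reproduction (the entrypoint and path are assumptions based on that output, not taken from the log):

    # Print the owner uid/gid of /var/lib/ceph from inside the image.
    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph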
Nov 24 20:09:44 compute-0 python3.9[247265]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:44 compute-0 sudo[247258]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:44 compute-0 podman[247319]: 2025-11-24 20:09:44.722732412 +0000 UTC m=+0.057686962 container create ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:09:44 compute-0 systemd[1]: Started libpod-conmon-ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10.scope.
Nov 24 20:09:44 compute-0 podman[247319]: 2025-11-24 20:09:44.695726287 +0000 UTC m=+0.030680917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:09:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf14b9075a9a141d9d9e8d5cdc73dfcd732e307edfdd520adfd964e377c887c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf14b9075a9a141d9d9e8d5cdc73dfcd732e307edfdd520adfd964e377c887c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf14b9075a9a141d9d9e8d5cdc73dfcd732e307edfdd520adfd964e377c887c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf14b9075a9a141d9d9e8d5cdc73dfcd732e307edfdd520adfd964e377c887c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf14b9075a9a141d9d9e8d5cdc73dfcd732e307edfdd520adfd964e377c887c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:44 compute-0 podman[247319]: 2025-11-24 20:09:44.839062491 +0000 UTC m=+0.174017081 container init ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cray, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:09:44 compute-0 podman[247319]: 2025-11-24 20:09:44.855308259 +0000 UTC m=+0.190262819 container start ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:09:44 compute-0 podman[247319]: 2025-11-24 20:09:44.859308689 +0000 UTC m=+0.194263239 container attach ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:09:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:45 compute-0 sudo[247466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycjiolkyylxxzupgmbevevxbmgjipgdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014984.7347398-918-173917202938446/AnsiballZ_command.py'
Nov 24 20:09:45 compute-0 sudo[247466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:45.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:45 compute-0 python3.9[247468]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:45 compute-0 sudo[247466]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:45.505+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:45 compute-0 sudo[247637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwexytqcnapfqkpifbjbeeylvilrliv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014985.4940395-918-7946946946710/AnsiballZ_command.py'
Nov 24 20:09:45 compute-0 sudo[247637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:45 compute-0 sleepy_cray[247367]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:09:45 compute-0 sleepy_cray[247367]: --> relative data size: 1.0
Nov 24 20:09:45 compute-0 sleepy_cray[247367]: --> All data devices are unavailable
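[annotation] sleepy_cray is the ceph-volume run launched at 20:09:43: lvm batch was handed three existing LVs (0 physical devices, 3 LVM) and rejected them all, most likely because they already carry the running OSDs on this host, so there is nothing new to deploy. ceph-volume has a report-only mode that shows the verdict without touching anything (sketch; image, fsid, and LV paths from the log):

    # Re-run the batch as a dry run to see why each device is rejected;
    # nothing is created or modified.
    cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
        lvm batch --no-auto --report \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2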
Nov 24 20:09:45 compute-0 systemd[1]: libpod-ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10.scope: Deactivated successfully.
Nov 24 20:09:45 compute-0 systemd[1]: libpod-ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10.scope: Consumed 1.045s CPU time.
Nov 24 20:09:45 compute-0 podman[247319]: 2025-11-24 20:09:45.984292152 +0000 UTC m=+1.319246732 container died ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cf14b9075a9a141d9d9e8d5cdc73dfcd732e307edfdd520adfd964e377c887c-merged.mount: Deactivated successfully.
Nov 24 20:09:46 compute-0 python3.9[247641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:46 compute-0 podman[247319]: 2025-11-24 20:09:46.101067393 +0000 UTC m=+1.436021943 container remove ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cray, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:09:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:46.099+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:46 compute-0 systemd[1]: libpod-conmon-ba4b41a66786d0514397025c9ec81cf4e48736e72a49d5d91d55a0f81ddcef10.scope: Deactivated successfully.
Nov 24 20:09:46 compute-0 sudo[247637]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:46 compute-0 sudo[247046]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:46 compute-0 sudo[247660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:46 compute-0 sudo[247660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:46 compute-0 sudo[247660]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:46 compute-0 sudo[247709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:09:46 compute-0 sudo[247709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:46 compute-0 sudo[247709]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:46 compute-0 sudo[247761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:46 compute-0 sudo[247761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:46 compute-0 sudo[247761]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:46 compute-0 ceph-mon[75677]: pgmap v782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:46 compute-0 sudo[247812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:09:46 compute-0 sudo[247812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
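[annotation] Immediately after the batch run, cephadm asks ceph-volume for an inventory of what the LVs already hold: lvm list --format json maps each OSD id to its backing LV along with the ceph.* lv_tags (fsid, osd_id, and so on), which is consistent with the batch call finding no free devices. The same call by hand (sketch; paths from the log):

    sudo /bin/python3 \
        /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
        lvm list --format json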
Nov 24 20:09:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:46.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:46 compute-0 sudo[247911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmuidknspomhfmneqlblmxtmtjnrdad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014986.28367-918-183717386922319/AnsiballZ_command.py'
Nov 24 20:09:46 compute-0 sudo[247911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1102 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:46 compute-0 python3.9[247918]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:46 compute-0 sudo[247911]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:46 compute-0 podman[247954]: 2025-11-24 20:09:46.928391684 +0000 UTC m=+0.062360241 container create 7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:09:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:46 compute-0 systemd[1]: Started libpod-conmon-7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868.scope.
Nov 24 20:09:46 compute-0 podman[247954]: 2025-11-24 20:09:46.904260018 +0000 UTC m=+0.038228655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:09:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:09:47 compute-0 podman[247954]: 2025-11-24 20:09:47.058454331 +0000 UTC m=+0.192422898 container init 7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:09:47 compute-0 podman[247954]: 2025-11-24 20:09:47.068872049 +0000 UTC m=+0.202840596 container start 7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:09:47 compute-0 zealous_gould[247994]: 167 167
Nov 24 20:09:47 compute-0 podman[247954]: 2025-11-24 20:09:47.072967983 +0000 UTC m=+0.206936530 container attach 7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:09:47 compute-0 systemd[1]: libpod-7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868.scope: Deactivated successfully.
Nov 24 20:09:47 compute-0 podman[247954]: 2025-11-24 20:09:47.074568906 +0000 UTC m=+0.208537473 container died 7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 20:09:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:47.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-86a6c24f6af165c46c3ae86a77de5eb6f16e60b64cff8814f151ca25a20c682e-merged.mount: Deactivated successfully.
Nov 24 20:09:47 compute-0 podman[247954]: 2025-11-24 20:09:47.113526251 +0000 UTC m=+0.247494788 container remove 7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:09:47 compute-0 systemd[1]: libpod-conmon-7ab0eac27f9f8c7a63c60a8a30df0de72443c1291ec294b578a93eefa6578868.scope: Deactivated successfully.
Nov 24 20:09:47 compute-0 podman[248095]: 2025-11-24 20:09:47.299511571 +0000 UTC m=+0.050563466 container create 76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:09:47 compute-0 systemd[1]: Started libpod-conmon-76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2.scope.
Nov 24 20:09:47 compute-0 podman[248095]: 2025-11-24 20:09:47.275142749 +0000 UTC m=+0.026194614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:09:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10257bb0a111635670b2390dfe22732247d0275b5176575abb80ffa9045a0c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10257bb0a111635670b2390dfe22732247d0275b5176575abb80ffa9045a0c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10257bb0a111635670b2390dfe22732247d0275b5176575abb80ffa9045a0c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10257bb0a111635670b2390dfe22732247d0275b5176575abb80ffa9045a0c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:47 compute-0 sudo[248164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlfzhkzpyqhbadpioivkykhlihaujvts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014987.0430925-918-239185013515323/AnsiballZ_command.py'
Nov 24 20:09:47 compute-0 podman[248095]: 2025-11-24 20:09:47.406214975 +0000 UTC m=+0.157266850 container init 76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:09:47 compute-0 sudo[248164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:47 compute-0 podman[248095]: 2025-11-24 20:09:47.422908576 +0000 UTC m=+0.173960431 container start 76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:09:47 compute-0 podman[248095]: 2025-11-24 20:09:47.430038132 +0000 UTC m=+0.181089997 container attach 76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:09:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:47.448+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1102 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:47 compute-0 python3.9[248167]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:47 compute-0 sudo[248164]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:48.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:48 compute-0 sudo[248323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grltolmmdovzpqljqvjqexekgpnvafwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014987.7878869-918-35173349279458/AnsiballZ_command.py'
Nov 24 20:09:48 compute-0 sudo[248323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:48 compute-0 lucid_lewin[248151]: {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:     "0": [
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:         {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "devices": [
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "/dev/loop3"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             ],
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_name": "ceph_lv0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_size": "21470642176",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "name": "ceph_lv0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "tags": {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cluster_name": "ceph",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.crush_device_class": "",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.encrypted": "0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osd_id": "0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.type": "block",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.vdo": "0"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             },
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "type": "block",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "vg_name": "ceph_vg0"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:         }
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:     ],
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:     "1": [
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:         {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "devices": [
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "/dev/loop4"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             ],
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_name": "ceph_lv1",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_size": "21470642176",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "name": "ceph_lv1",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "tags": {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cluster_name": "ceph",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.crush_device_class": "",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.encrypted": "0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osd_id": "1",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.type": "block",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.vdo": "0"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             },
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "type": "block",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "vg_name": "ceph_vg1"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:         }
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:     ],
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:     "2": [
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:         {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "devices": [
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "/dev/loop5"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             ],
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_name": "ceph_lv2",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_size": "21470642176",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "name": "ceph_lv2",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "tags": {
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.cluster_name": "ceph",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.crush_device_class": "",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.encrypted": "0",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osd_id": "2",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.type": "block",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:                 "ceph.vdo": "0"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             },
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "type": "block",
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:             "vg_name": "ceph_vg2"
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:         }
Nov 24 20:09:48 compute-0 lucid_lewin[248151]:     ]
Nov 24 20:09:48 compute-0 lucid_lewin[248151]: }
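The JSON emitted by the lucid_lewin container above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to a list of logical volumes carrying `ceph.*` LV tags. As an illustration only (the file name lvm_list.json and the helper below are assumptions for the sketch, not anything captured in this log), such a report could be reduced to an osd_id-to-device mapping like this:

    #!/usr/bin/env python3
    # Sketch: condense a saved copy of the JSON report above into
    # {osd_id: backing device info}. File name is hypothetical.
    import json

    def osd_devices(report: dict) -> dict:
        """Map each OSD id to the LV and physical devices backing its block store."""
        out = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    out[int(osd_id)] = {
                        "lv_path": lv["lv_path"],
                        "devices": lv["devices"],
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return out

    with open("lvm_list.json") as fh:
        print(json.dumps(osd_devices(json.load(fh)), indent=4))

Run against the output above, this would yield osd 0 on /dev/loop3 (ceph_vg0/ceph_lv0), osd 1 on /dev/loop4, and osd 2 on /dev/loop5.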
Nov 24 20:09:48 compute-0 systemd[1]: libpod-76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2.scope: Deactivated successfully.
Nov 24 20:09:48 compute-0 podman[248095]: 2025-11-24 20:09:48.193705188 +0000 UTC m=+0.944757043 container died 76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-10257bb0a111635670b2390dfe22732247d0275b5176575abb80ffa9045a0c86-merged.mount: Deactivated successfully.
Nov 24 20:09:48 compute-0 podman[248095]: 2025-11-24 20:09:48.287277638 +0000 UTC m=+1.038329483 container remove 76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:09:48 compute-0 systemd[1]: libpod-conmon-76beedbc36770e413555f641191202f5406a8101adbef3c9a23c84aad7742cd2.scope: Deactivated successfully.
Nov 24 20:09:48 compute-0 sudo[247812]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:48 compute-0 python3.9[248326]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 20:09:48 compute-0 sudo[248323]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:48 compute-0 sudo[248341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:48 compute-0 sudo[248341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:48 compute-0 sudo[248341]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:48 compute-0 sudo[248368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:09:48 compute-0 sudo[248368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:48 compute-0 sudo[248368]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:48.465+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:48 compute-0 ceph-mon[75677]: pgmap v783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:48 compute-0 sudo[248416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:48 compute-0 sudo[248416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:48 compute-0 sudo[248416]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:48 compute-0 sudo[248441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:09:48 compute-0 sudo[248441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.029750259 +0000 UTC m=+0.070351022 container create 7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:09:49 compute-0 systemd[1]: Started libpod-conmon-7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11.scope.
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.001769157 +0000 UTC m=+0.042369970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:09:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:09:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:49.125+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.138965242 +0000 UTC m=+0.179566085 container init 7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.151271302 +0000 UTC m=+0.191872085 container start 7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.155367874 +0000 UTC m=+0.195968647 container attach 7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 20:09:49 compute-0 jolly_khayyam[248521]: 167 167
Nov 24 20:09:49 compute-0 systemd[1]: libpod-7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11.scope: Deactivated successfully.
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.160818835 +0000 UTC m=+0.201419618 container died 7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:09:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-eea4e6996ad7d07164683f09f1dd651e36838ef58c7da2d5574dd95762af92a0-merged.mount: Deactivated successfully.
Nov 24 20:09:49 compute-0 podman[248505]: 2025-11-24 20:09:49.232814651 +0000 UTC m=+0.273415414 container remove 7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:09:49 compute-0 systemd[1]: libpod-conmon-7a7408f83161db6d348915f0c41f1ee9a63f08b19d377bbc615b899395fb0f11.scope: Deactivated successfully.
Nov 24 20:09:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:49.459+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:49 compute-0 podman[248547]: 2025-11-24 20:09:49.467960007 +0000 UTC m=+0.066922687 container create a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:09:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:49 compute-0 systemd[1]: Started libpod-conmon-a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71.scope.
Nov 24 20:09:49 compute-0 podman[248547]: 2025-11-24 20:09:49.441509687 +0000 UTC m=+0.040472467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:09:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e93ff3f3779a525d4e9543f5992355c380a6432669b8dd5dc36693ba6b0633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e93ff3f3779a525d4e9543f5992355c380a6432669b8dd5dc36693ba6b0633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e93ff3f3779a525d4e9543f5992355c380a6432669b8dd5dc36693ba6b0633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e93ff3f3779a525d4e9543f5992355c380a6432669b8dd5dc36693ba6b0633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:09:49 compute-0 podman[248547]: 2025-11-24 20:09:49.570969478 +0000 UTC m=+0.169932248 container init a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 24 20:09:49 compute-0 podman[248547]: 2025-11-24 20:09:49.585750127 +0000 UTC m=+0.184712847 container start a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:09:49 compute-0 podman[248547]: 2025-11-24 20:09:49.590358133 +0000 UTC m=+0.189320853 container attach a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:09:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:50.149+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:50.457+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:50 compute-0 ceph-mon[75677]: pgmap v784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]: {
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "osd_id": 2,
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "type": "bluestore"
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:     },
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "osd_id": 1,
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "type": "bluestore"
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:     },
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "osd_id": 0,
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:         "type": "bluestore"
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]:     }
Nov 24 20:09:50 compute-0 hungry_montalcini[248563]: }
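This hungry_montalcini output corresponds to the `ceph-volume ... raw list --format json` call recorded in the sudo line above: a map from OSD fsid to bluestore device metadata. A minimal cross-check of the two inventories, assuming both JSON documents have been saved locally (the file names are illustrative, not from the log), might look like:

    import json

    # Hypothetical saved copies of the two reports printed earlier in the log.
    raw = json.load(open("raw_list.json"))   # osd_uuid -> {device, osd_id, type, ...}
    lvm = json.load(open("lvm_list.json"))   # osd_id   -> [lv entries with ceph.* tags]

    for osd_uuid, entry in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        osd_id = entry["osd_id"]
        tags = lvm[str(osd_id)][0]["tags"]
        # The ceph.osd_fsid LV tag should agree with the raw-list key.
        assert tags["ceph.osd_fsid"] == osd_uuid, f"mismatch for osd.{osd_id}"
        print(f"osd.{osd_id}: {entry['device']} ({entry['type']})")

For the reports above, both views agree: osd.0/1/2 map to /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all bluestore, under cluster fsid 05e060a3-406b-57f0-89d2-ec35f5b09305.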
Nov 24 20:09:50 compute-0 systemd[1]: libpod-a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71.scope: Deactivated successfully.
Nov 24 20:09:50 compute-0 podman[248547]: 2025-11-24 20:09:50.624269823 +0000 UTC m=+1.223232513 container died a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:09:50 compute-0 systemd[1]: libpod-a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71.scope: Consumed 1.047s CPU time.
Nov 24 20:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-84e93ff3f3779a525d4e9543f5992355c380a6432669b8dd5dc36693ba6b0633-merged.mount: Deactivated successfully.
Nov 24 20:09:50 compute-0 podman[248547]: 2025-11-24 20:09:50.681104721 +0000 UTC m=+1.280067401 container remove a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:09:50 compute-0 systemd[1]: libpod-conmon-a00042de900dac10080b6c42b8659c8ad91092bae4f121734758f656d05e1a71.scope: Deactivated successfully.
Nov 24 20:09:50 compute-0 sudo[248441]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:09:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:09:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:09:50 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:09:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fb876382-fd11-4b34-803c-5425b8196dee does not exist
Nov 24 20:09:50 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 847292e7-4e97-42e9-87d6-4de8206dc82f does not exist
Nov 24 20:09:50 compute-0 sudo[248607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:09:50 compute-0 sudo[248607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:50 compute-0 sudo[248607]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:50 compute-0 sudo[248632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:09:50 compute-0 sudo[248632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:09:50 compute-0 sudo[248632]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:51.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:51.441+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:09:51 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:09:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1107 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:52.186+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:52.472+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:52 compute-0 ceph-mon[75677]: pgmap v785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1107 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:53.182+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:53.446+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:54 compute-0 sudo[248782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oawglrrseethykdtyvqdqniizprozwgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014993.7601173-997-215150249455624/AnsiballZ_file.py'
Nov 24 20:09:54 compute-0 sudo[248782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:54.225+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:54 compute-0 python3.9[248784]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:09:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:54.424+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:54 compute-0 sudo[248782]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:54 compute-0 podman[248785]: 2025-11-24 20:09:54.527436461 +0000 UTC m=+0.097037968 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:09:54 compute-0 ceph-mon[75677]: pgmap v786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:54 compute-0 sudo[248953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrwvlhxeupsppcvsixjxhgrdmuumckw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014994.6387413-997-273866907408904/AnsiballZ_file.py'
Nov 24 20:09:54 compute-0 sudo[248953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:55 compute-0 python3.9[248955]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:55 compute-0 sudo[248953]: pam_unix(sudo:session): session closed for user root
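[editor's note] Interleaved with the Ceph warnings, the zuul user's Ansible run is laying down EDPM configuration directories (/var/lib/openstack/config/nova, .../containers, and more below) via ansible-ansible.builtin.file with owner=zuul, group=zuul, mode=0755, setype=container_file_t. As a rough illustration only, the following Python sketch shows what one such invocation amounts to for a directory; make_config_dir is a hypothetical helper, and the SELinux setype part, which the real module applies through libselinux, is only noted in a comment:

```python
import os
import pwd
import grp

def make_config_dir(path, owner="zuul", group="zuul", mode=0o755):
    """Roughly what the ansible.builtin.file invocations above perform:
    ensure a directory exists with the given owner, group and mode.
    (The real module additionally applies setype=container_file_t via SELinux.)
    """
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)
    os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)

# Directories created in this part of the run, taken from the log lines above:
for d in ("/var/lib/openstack/config/nova",
          "/var/lib/openstack/config/containers"):
    make_config_dir(d)
```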
Nov 24 20:09:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:55.215+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:55.391+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:55 compute-0 sudo[249105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsntxqmuaomyuvopgbhppzqsaxqghnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014995.4285583-997-269298089047617/AnsiballZ_file.py'
Nov 24 20:09:55 compute-0 sudo[249105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:55 compute-0 python3.9[249107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:55 compute-0 sudo[249105]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:56.234+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:56.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:56 compute-0 sudo[249257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temcgxwtpsnfqmurmcefyurshwyaueof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014996.1969151-1019-276078441574868/AnsiballZ_file.py'
Nov 24 20:09:56 compute-0 sudo[249257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1112 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:09:56 compute-0 python3.9[249259]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:56 compute-0 ceph-mon[75677]: pgmap v787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1112 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:09:56 compute-0 sudo[249257]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:57.256+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:57 compute-0 sudo[249409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jimweyodempomioovkehloupwmjmeakn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014996.9822016-1019-118928932189008/AnsiballZ_file.py'
Nov 24 20:09:57 compute-0 sudo[249409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:57.464+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:57 compute-0 python3.9[249411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:57 compute-0 sudo[249409]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:58 compute-0 sudo[249561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hytaoanjobqjektyzxgnibflzboflofs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014997.7055771-1019-192443343793379/AnsiballZ_file.py'
Nov 24 20:09:58 compute-0 sudo[249561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:58 compute-0 python3.9[249563]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:58 compute-0 sudo[249561]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:58.301+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:58.445+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:58 compute-0 sudo[249713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paltkypakwemcvyvgpujlwcfbpxnzrhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014998.4631739-1019-55291207225269/AnsiballZ_file.py'
Nov 24 20:09:58 compute-0 sudo[249713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:58 compute-0 ceph-mon[75677]: pgmap v788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:09:59 compute-0 python3.9[249715]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:59 compute-0 sudo[249713]: pam_unix(sudo:session): session closed for user root
Nov 24 20:09:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:09:59.329+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:09:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:09:59.418+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:09:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:59 compute-0 sudo[249865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpsxnihqddkcpsxbktzntdbnrjdbapw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764014999.268508-1019-110133044729874/AnsiballZ_file.py'
Nov 24 20:09:59 compute-0 sudo[249865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:09:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:09:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:09:59 compute-0 python3.9[249867]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:09:59 compute-0 sudo[249865]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:00.355+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:00.437+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:00 compute-0 sudo[250027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iovlstxqrelthulzsdmcxtkumdcwrche ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015000.1256337-1019-90611042293383/AnsiballZ_file.py'
Nov 24 20:10:00 compute-0 sudo[250027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:00 compute-0 podman[249991]: 2025-11-24 20:10:00.573012126 +0000 UTC m=+0.090155368 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:10:00 compute-0 python3.9[250035]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:00 compute-0 sudo[250027]: pam_unix(sudo:session): session closed for user root
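[editor's note] The podman health_status events in this capture (multipathd earlier, ovn_controller just above) pack the useful fields as key=value pairs inside one long parenthesised attribute list. A small hedged sketch for pulling out just the container name and health verdict from event lines of this shape; container_health is an illustrative helper, not a podman API:

```python
import re

# Picks name=, health_status= and health_failing_streak= out of a podman event
# line like the multipathd/ovn_controller entries above. The lookbehind skips
# composite keys such as container_name= and org.label-schema.name=.
ATTR_RE = re.compile(r"(?<![\w.])(name|health_status|health_failing_streak)=([^,)]+)")

def container_health(event_line):
    attrs = dict(ATTR_RE.findall(event_line))
    return (attrs.get("name"),
            attrs.get("health_status"),
            attrs.get("health_failing_streak"))

line = ("... container health_status 8b2e0eff (image=quay.io/..., "
        "name=ovn_controller, health_status=healthy, health_failing_streak=0, ...)")
print(container_health(line))  # ('ovn_controller', 'healthy', '0')
```

Both events above report health_status=healthy with a failing streak of 0, so the containers themselves are passing their /openstack/healthcheck probes while Ceph stays degraded.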
Nov 24 20:10:00 compute-0 ceph-mon[75677]: pgmap v789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:01.311+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:01 compute-0 sudo[250195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrlasjewmtbslhcexgyicsbxrqectbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015000.959338-1019-269120213539148/AnsiballZ_file.py'
Nov 24 20:10:01 compute-0 sudo[250195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:01.479+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:01 compute-0 python3.9[250197]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:01 compute-0 sudo[250195]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1117 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1117 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:02.354+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:02.513+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:02 compute-0 ceph-mon[75677]: pgmap v790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:03.345+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:03.538+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:04 compute-0 ceph-mon[75677]: pgmap v791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:04.357+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:04.547+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:05.397+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:05.557+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:06 compute-0 ceph-mon[75677]: pgmap v792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:06.352+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:06.551+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:07.379+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:07.512+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:07 compute-0 sudo[250347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmmziirmysxelzlebwogtuacahanznu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015007.353789-1208-186698302815698/AnsiballZ_getent.py'
Nov 24 20:10:07 compute-0 sudo[250347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:08 compute-0 python3.9[250349]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 20:10:08 compute-0 sudo[250347]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:08 compute-0 ceph-mon[75677]: pgmap v793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:08.422+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:08.494+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:08 compute-0 sudo[250500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvpufbmqllekroyfhdfnwpgahrzytrff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015008.289809-1216-141926203157832/AnsiballZ_group.py'
Nov 24 20:10:08 compute-0 sudo[250500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:09 compute-0 python3.9[250502]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 20:10:09 compute-0 groupadd[250503]: group added to /etc/group: name=nova, GID=42436
Nov 24 20:10:09 compute-0 groupadd[250503]: group added to /etc/gshadow: name=nova
Nov 24 20:10:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:09 compute-0 groupadd[250503]: new group: name=nova, GID=42436
Nov 24 20:10:09 compute-0 sudo[250500]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:10:09.362 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:10:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:10:09.363 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:10:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:10:09.363 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
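[editor's note] The three ovn_metadata_agent DEBUG lines above are the acquire/acquired/released trace that oslo.concurrency's lockutils emits from its "inner" wrapper whenever a method decorated with a named lock runs. A minimal sketch of that pattern, assuming oslo.concurrency is installed; ProcessMonitorSketch is an illustrative stand-in for neutron's ProcessMonitor, not its real implementation:

```python
# pip install oslo.concurrency  (assumption: mirrors the DEBUG lines above)
from oslo_concurrency import lockutils

class ProcessMonitorSketch:
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes(self):
        # Body runs with the named lock held; lockutils logs the
        # "Acquiring lock" / "acquired" / "released" messages seen above.
        pass

ProcessMonitorSketch()._check_child_processes()
```

The held time of 0.000s in the log shows the monitor found nothing to do on this pass.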
Nov 24 20:10:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:09.470+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:09.532+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:10 compute-0 sudo[250658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skulyfkoiwqlromgortymkimlvvyhilh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015009.6082408-1224-85892199026555/AnsiballZ_user.py'
Nov 24 20:10:10 compute-0 sudo[250658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:10 compute-0 ceph-mon[75677]: pgmap v794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:10.422+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:10 compute-0 python3.9[250660]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 20:10:10 compute-0 useradd[250662]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 24 20:10:10 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:10:10 compute-0 useradd[250662]: add 'nova' to group 'libvirt'
Nov 24 20:10:10 compute-0 useradd[250662]: add 'nova' to shadow group 'libvirt'
Nov 24 20:10:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:10.508+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:10 compute-0 sudo[250658]: pam_unix(sudo:session): session closed for user root
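[editor's note] At this point the run has created the nova group (GID 42436) and the nova user (UID 42436, supplementary group libvirt, shell /bin/sh), as the groupadd/useradd lines record. A quick sketch for cross-checking that outcome with only the Python standard library, using the exact values logged above; check_nova_account is an illustrative helper:

```python
import grp
import pwd

def check_nova_account():
    """Cross-check the account state recorded by groupadd/useradd above."""
    user = pwd.getpwnam("nova")
    group = grp.getgrnam("nova")
    libvirt = grp.getgrnam("libvirt")
    assert user.pw_uid == 42436 and user.pw_gid == 42436
    assert group.gr_gid == 42436
    assert "nova" in libvirt.gr_mem   # supplementary membership added by useradd
    assert user.pw_shell == "/bin/sh"
    print("nova account matches the logged groupadd/useradd values")

check_nova_account()
```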
Nov 24 20:10:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:11.405+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:11 compute-0 sshd-session[250694]: Accepted publickey for zuul from 192.168.122.30 port 40272 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 20:10:11 compute-0 systemd-logind[795]: New session 51 of user zuul.
Nov 24 20:10:11 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 24 20:10:11 compute-0 sshd-session[250694]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 20:10:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:11.555+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:11 compute-0 sshd-session[250698]: Received disconnect from 192.168.122.30 port 40272:11: disconnected by user
Nov 24 20:10:11 compute-0 sshd-session[250698]: Disconnected from user zuul 192.168.122.30 port 40272
Nov 24 20:10:11 compute-0 sshd-session[250694]: pam_unix(sshd:session): session closed for user zuul
Nov 24 20:10:11 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 24 20:10:11 compute-0 systemd-logind[795]: Session 51 logged out. Waiting for processes to exit.
Nov 24 20:10:11 compute-0 systemd-logind[795]: Removed session 51.
Nov 24 20:10:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:12 compute-0 ceph-mon[75677]: pgmap v795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
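The SLOW_OPS health check updates repeated throughout this window come from the Ceph monitor's health subsystem. A hedged sketch that polls the same check from the CLI; the JSON shape (a "checks" map keyed by check name) matches recent Ceph releases but is an assumption here:

    #!/usr/bin/env python3
    # Report the SLOW_OPS health check seen in the ceph-mon lines above.
    # Assumption: `ceph health detail --format json` yields {"status": ..., "checks": {...}}.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(raw)

    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        print("SLOW_OPS:", slow["summary"]["message"])
    else:
        print("no slow ops; overall status:", health.get("status"))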
Nov 24 20:10:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:12.417+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:12 compute-0 python3.9[250849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:12.535+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:13 compute-0 python3.9[250970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015011.8987675-1249-51626050277152/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
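Each configuration file in this run is deployed as a stat/copy pair: the stat call computes a SHA-1 checksum, and the copy rewrites the destination only when content differs (the checksum=... field above). A minimal sketch of that idempotent-copy logic; the paths are placeholders, not taken from the log:

    #!/usr/bin/env python3
    # Copy-if-changed, in the style of the ansible stat/copy pair above.
    import hashlib
    import shutil
    from pathlib import Path

    def sha1(path: Path) -> str:
        return hashlib.sha1(path.read_bytes()).hexdigest()

    src = Path("source.json")   # placeholder source
    dest = Path("dest.json")    # placeholder destination

    if not dest.exists() or sha1(dest) != sha1(src):
        shutil.copy2(src, dest)  # differs -> the task would report "changed"
    else:
        print("checksums match; nothing to do")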
Nov 24 20:10:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:13.449+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:13.514+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:14 compute-0 python3.9[251120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:14 compute-0 ceph-mon[75677]: pgmap v796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:14.404+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:14 compute-0 podman[251170]: 2025-11-24 20:10:14.448904216 +0000 UTC m=+0.099617609 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
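The podman health_status events above are produced by the healthcheck configured in config_data ('test': '/openstack/healthcheck'). The same check can be exercised by hand; this sketch assumes `podman healthcheck run NAME` exits 0 when the check passes, which holds for current podman releases:

    #!/usr/bin/env python3
    # Re-run the healthcheck behind the health_status=healthy event above.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"ovn_metadata_agent: {status}")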
Nov 24 20:10:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:14.520+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:14 compute-0 python3.9[251205]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:15.405+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:15 compute-0 python3.9[251364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:15.555+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:16 compute-0 python3.9[251485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015014.8169901-1249-144768363237290/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:16 compute-0 ceph-mon[75677]: pgmap v797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:16.371+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:16.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:16 compute-0 python3.9[251635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:17.329+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:17.539+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:17 compute-0 python3.9[251756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015016.3341348-1249-11968733538331/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:18.288+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:18 compute-0 ceph-mon[75677]: pgmap v798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:18 compute-0 python3.9[251906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:18.583+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:19 compute-0 python3.9[252027]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015017.8530295-1249-41883210625715/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:19.279+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:19.612+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:19 compute-0 python3.9[252177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:20.250+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:20 compute-0 ceph-mon[75677]: pgmap v799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:20.652+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:20 compute-0 python3.9[252298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015019.4627109-1249-169800858027920/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:21.212+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:21 compute-0 sudo[252448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aakywbfxlxbvieersgmfyljmqtuaqxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015020.9655423-1332-31217347715740/AnsiballZ_file.py'
Nov 24 20:10:21 compute-0 sudo[252448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:21 compute-0 python3.9[252450]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:10:21 compute-0 sudo[252448]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:21.697+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:22.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:22 compute-0 sudo[252600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwoojvrggsjfhmbpalgyrvzcqxrsxqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015021.8179364-1340-169314292002737/AnsiballZ_copy.py'
Nov 24 20:10:22 compute-0 sudo[252600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:22 compute-0 python3.9[252602]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:10:22 compute-0 sudo[252600]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:22 compute-0 ceph-mon[75677]: pgmap v800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:22.704+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:22 compute-0 sudo[252752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnxmvdtqqjovvdduooedrrcslgekaout ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015022.6077015-1348-46035927501827/AnsiballZ_stat.py'
Nov 24 20:10:22 compute-0 sudo[252752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:23.116+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:23 compute-0 python3.9[252754]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:23 compute-0 sudo[252752]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:23.749+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:23 compute-0 sudo[252904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepyabjtpalpdcacipkukipzdbgaksrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015023.4419787-1356-215300896493208/AnsiballZ_stat.py'
Nov 24 20:10:23 compute-0 sudo[252904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:24 compute-0 python3.9[252906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:24 compute-0 sudo[252904]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:24.072+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:10:24
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['volumes', 'images', 'vms', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
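The balancer lines above show an upmap optimization pass that prepared no changes (0/10). The module's state can be read back from the CLI; the JSON field names here are an assumption based on recent Ceph releases:

    #!/usr/bin/env python3
    # Inspect the mgr balancer whose optimization pass is logged above.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(raw)
    print("active:", status.get("active"), "mode:", status.get("mode"))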
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:10:24 compute-0 sudo[253027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfnfugicslreffdekocxlqfktvcvlayr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015023.4419787-1356-215300896493208/AnsiballZ_copy.py'
Nov 24 20:10:24 compute-0 sudo[253027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:24 compute-0 ceph-mon[75677]: pgmap v801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:24 compute-0 python3.9[253029]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764015023.4419787-1356-215300896493208/.source _original_basename=.imrb_nk5 follow=False checksum=e0cdf5f5c8ffe587db19d6a6d5e1352a9cd54635 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
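The compute_id deployment above is unusual in passing attributes=+i, which sets the immutable flag on the file in addition to mode 0400. A small sketch for confirming the flag; it assumes the usual lsattr output layout (flag field first):

    #!/usr/bin/env python3
    # Confirm the +i (immutable) attribute set on the compute_id file above.
    import subprocess

    out = subprocess.run(
        ["lsattr", "/var/lib/nova/compute_id"],
        capture_output=True, text=True, check=True,
    ).stdout
    flags = out.split()[0]   # e.g. "----i---------e-------"
    print("immutable" if "i" in flags else "not immutable", "->", flags)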
Nov 24 20:10:24 compute-0 sudo[253027]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:24.717+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:24 compute-0 podman[253030]: 2025-11-24 20:10:24.732020482 +0000 UTC m=+0.065803057 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:10:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:25.070+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:25 compute-0 python3.9[253203]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:25.718+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:26.080+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:26 compute-0 python3.9[253355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:26 compute-0 ceph-mon[75677]: pgmap v802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:26.756+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:26 compute-0 python3.9[253476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015025.7582347-1382-167702721760977/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:27.046+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:27 compute-0 python3.9[253626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 20:10:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:27.802+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:27.997+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:28 compute-0 python3.9[253747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764015027.171998-1397-202801010554878/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 20:10:28 compute-0 sshd-session[250696]: Invalid user btf from 27.79.44.141 port 54806
Nov 24 20:10:28 compute-0 ceph-mon[75677]: pgmap v803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:28.836+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:29.022+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:29 compute-0 sudo[253897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjgvkssjclnnzuytplukedgkqlkigpum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015028.6533453-1414-20980277070263/AnsiballZ_container_config_data.py'
Nov 24 20:10:29 compute-0 sudo[253897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:29 compute-0 python3.9[253899]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
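The container_config_data invocation above reads every container definition matching config_pattern under config_path. A minimal sketch of that gather step, using the path and pattern from the log line (the module's actual return format is not reproduced here):

    #!/usr/bin/env python3
    # Gather container definitions the way the container_config_data task describes.
    import json
    from pathlib import Path

    config_path = Path("/var/lib/openstack/config/containers")
    config_pattern = "nova_compute_init.json"

    configs = {
        p.name: json.loads(p.read_text())
        for p in sorted(config_path.glob(config_pattern))
    }
    print(f"loaded {len(configs)} container definition(s):", sorted(configs))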
Nov 24 20:10:29 compute-0 sshd-session[250696]: Connection closed by invalid user btf 27.79.44.141 port 54806 [preauth]
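The sshd-session lines above (Invalid user btf ... Connection closed ... [preauth]) record a failed login probe arriving mid-deployment. A self-contained parser for that message shape, suitable for piping journal output through; fed this host's journal on stdin, it would flag the probe from 27.79.44.141:

    #!/usr/bin/env python3
    # Extract failed-login probes like the "Invalid user btf" entries above.
    import re
    import sys

    PATTERN = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

    for line in sys.stdin:
        m = PATTERN.search(line)
        if m:
            user, addr, port = m.groups()
            print(f"failed login attempt: user={user} addr={addr} port={port}")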
Nov 24 20:10:29 compute-0 sudo[253897]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:29 compute-0 sudo[254049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbwswdmvduzexatmzsqmhtqkbdfxslva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015029.4715247-1423-200726419113177/AnsiballZ_container_config_hash.py'
Nov 24 20:10:29 compute-0 sudo[254049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:29.879+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:29.980+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:30 compute-0 python3.9[254051]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 20:10:30 compute-0 sudo[254049]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:30 compute-0 ceph-mon[75677]: pgmap v804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:30 compute-0 sudo[254212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oozhljycadgvfmolthcffplerotkneam ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764015030.4297228-1433-50203116146244/AnsiballZ_edpm_container_manage.py'
Nov 24 20:10:30 compute-0 sudo[254212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:30.885+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:30 compute-0 podman[254175]: 2025-11-24 20:10:30.894803731 +0000 UTC m=+0.134227604 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 20:10:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:30.979+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:31 compute-0 python3[254221]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 20:10:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1152 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:31.851+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:31.998+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:32 compute-0 ceph-mon[75677]: pgmap v805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1152 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:32.850+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:33.047+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:33.853+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:34.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:10:34 compute-0 ceph-mon[75677]: pgmap v806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:34.874+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:35.090+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:35.851+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:36.073+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:36.869+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:37.056+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:37 compute-0 ceph-mon[75677]: pgmap v807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:37.830+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:38.063+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:38 compute-0 ceph-mon[75677]: pgmap v808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:38.853+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:39.096+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:39.821+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:40.121+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:10:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:40.844+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:41.078+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:41 compute-0 ceph-mon[75677]: pgmap v809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:41 compute-0 podman[254242]: 2025-11-24 20:10:41.63151007 +0000 UTC m=+10.394738987 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 24 20:10:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:41.820+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:41 compute-0 podman[254325]: 2025-11-24 20:10:41.823008993 +0000 UTC m=+0.069728611 container create 616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:10:41 compute-0 podman[254325]: 2025-11-24 20:10:41.781846767 +0000 UTC m=+0.028566445 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 24 20:10:41 compute-0 python3[254221]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 24 20:10:41 compute-0 sudo[254212]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:42.064+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:42 compute-0 ceph-mon[75677]: pgmap v810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:42 compute-0 sudo[254513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwyynzkxahgldoptioayevoflbuegmrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015042.300582-1441-94587109589967/AnsiballZ_stat.py'
Nov 24 20:10:42 compute-0 sudo[254513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:42.784+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:42 compute-0 python3.9[254515]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:42 compute-0 sudo[254513]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:43.074+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:43 compute-0 sudo[254667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbvjkmerdljzfwxdzqfrxmzmjzjpbkmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015043.3720944-1453-74361349211396/AnsiballZ_container_config_data.py'
Nov 24 20:10:43 compute-0 sudo[254667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:43.753+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:43 compute-0 python3.9[254669]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 20:10:44 compute-0 sudo[254667]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:44.101+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:44 compute-0 ceph-mon[75677]: pgmap v811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:44 compute-0 sudo[254830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtffdslmoilboigdbycmwgpthbifvtle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015044.253954-1462-15205073726952/AnsiballZ_container_config_hash.py'
Nov 24 20:10:44 compute-0 sudo[254830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:44 compute-0 podman[254793]: 2025-11-24 20:10:44.688003224 +0000 UTC m=+0.106475350 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Nov 24 20:10:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:44.751+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:44 compute-0 python3.9[254837]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 20:10:44 compute-0 sudo[254830]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:45.146+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:45 compute-0 sudo[254990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzlhhngksbfsaavmumybpckgftlljqsd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764015045.302606-1472-119536641000616/AnsiballZ_edpm_container_manage.py'
Nov 24 20:10:45 compute-0 sudo[254990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:45.760+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:46 compute-0 python3[254992]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 20:10:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:46.184+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:46 compute-0 podman[255029]: 2025-11-24 20:10:46.25770504 +0000 UTC m=+0.029624853 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 24 20:10:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:46 compute-0 ceph-mon[75677]: pgmap v812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:46 compute-0 podman[255029]: 2025-11-24 20:10:46.504474541 +0000 UTC m=+0.276394264 container create 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Nov 24 20:10:46 compute-0 python3[254992]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 24 20:10:46 compute-0 sudo[254990]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1162 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:46.794+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:47.152+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
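The SLOW_OPS noise threaded through this window is one stuck omap read on osd.0 (pool 'vms') and a group of watch pings on osd.1 (pool 'default.rgw.log'), re-reported roughly once per second. A minimal triage sketch, assuming a cephadm shell for this cluster's fsid:

  cephadm shell --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- ceph health detail
  # then, against each implicated daemon's admin socket inside that shell:
  ceph daemon osd.0 dump_ops_in_flight
  ceph daemon osd.1 dump_ops_in_flight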
Nov 24 20:10:47 compute-0 sudo[255217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpjnbyxqzdodcplwlvetzshazgjqbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015046.964811-1480-191681166119984/AnsiballZ_stat.py'
Nov 24 20:10:47 compute-0 sudo[255217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:47 compute-0 python3.9[255219]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:47 compute-0 sudo[255217]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1162 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:47.765+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:48.158+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:48 compute-0 sudo[255373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjisyhtovitjmaprqrkwomkdzimqtuzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015047.8558087-1489-206287585767526/AnsiballZ_file.py'
Nov 24 20:10:48 compute-0 sudo[255373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:48 compute-0 python3.9[255375]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 20:10:48 compute-0 sudo[255373]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:48 compute-0 ceph-mon[75677]: pgmap v813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:48.724+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:49 compute-0 sudo[255524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtnyzxcijcbrrevwggmcncihaobgvjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015048.5015426-1489-82028987442596/AnsiballZ_copy.py'
Nov 24 20:10:49 compute-0 sudo[255524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:49.134+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:49 compute-0 python3.9[255526]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764015048.5015426-1489-82028987442596/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
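ansible-copy installs the unit file itself; its contents are not logged here. A representative edpm/podman wrapper unit (an assumed layout for illustration, not read from this host) would be little more than:

  [Unit]
  Description=nova_compute container
  Wants=network-online.target
  After=network-online.target

  [Service]
  Restart=always
  ExecStart=/usr/bin/podman start -a nova_compute
  ExecStop=/usr/bin/podman stop -t 10 nova_compute

  [Install]
  WantedBy=multi-user.target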
Nov 24 20:10:49 compute-0 sudo[255524]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:49 compute-0 sudo[255600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfdbsnycxuhazetkjlkidvvnkbnzyvio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015048.5015426-1489-82028987442596/AnsiballZ_systemd.py'
Nov 24 20:10:49 compute-0 sudo[255600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:49.767+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:49 compute-0 python3.9[255602]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 20:10:49 compute-0 systemd[1]: Reloading.
Nov 24 20:10:49 compute-0 systemd-rc-local-generator[255630]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:10:50 compute-0 systemd-sysv-generator[255635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
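Both generator messages during the reload are benign: rc.local is skipped only because the file is not executable, and the legacy 'network' init script gets an auto-generated compatibility unit. If the rc.local skip matters, making the file executable before the next reload is sufficient:

  chmod +x /etc/rc.d/rc.local
  systemctl daemon-reload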
Nov 24 20:10:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:50.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:50 compute-0 sudo[255600]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:50 compute-0 ceph-mon[75677]: pgmap v814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:50 compute-0 sudo[255711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsvcnzqoqjasgzjwsycdbzaqqtlqmytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015048.5015426-1489-82028987442596/AnsiballZ_systemd.py'
Nov 24 20:10:50 compute-0 sudo[255711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:50.748+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:50 compute-0 sudo[255714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:50 compute-0 sudo[255714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:50 compute-0 sudo[255714]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:50 compute-0 python3.9[255713]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
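The ansible-systemd invocation above (state=restarted, enabled=True) is the module form of:

  systemctl enable edpm_nova_compute.service
  systemctl restart edpm_nova_compute.service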
Nov 24 20:10:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:51 compute-0 sudo[255740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:10:51 compute-0 systemd[1]: Reloading.
Nov 24 20:10:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:51.078+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:51 compute-0 systemd-rc-local-generator[255786]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 20:10:51 compute-0 systemd-sysv-generator[255791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 20:10:51 compute-0 sudo[255740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:51 compute-0 sudo[255740]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:51 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 20:10:51 compute-0 sudo[255803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:51 compute-0 sudo[255803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:51 compute-0 sudo[255803]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:10:51 compute-0 sudo[255839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:51 compute-0 sudo[255839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:51 compute-0 podman[255802]: 2025-11-24 20:10:51.6108625 +0000 UTC m=+0.159176290 container init 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, managed_by=edpm_ansible)
Nov 24 20:10:51 compute-0 podman[255802]: 2025-11-24 20:10:51.62220148 +0000 UTC m=+0.170515250 container start 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Nov 24 20:10:51 compute-0 podman[255802]: nova_compute
Nov 24 20:10:51 compute-0 nova_compute[255851]: + sudo -E kolla_set_configs
Nov 24 20:10:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:51 compute-0 systemd[1]: Started nova_compute container.
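With the unit reported as started, the container can be checked from the host; a quick verification sketch:

  systemctl status edpm_nova_compute.service
  podman ps --filter name=nova_compute
  podman logs --since 5m nova_compute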
Nov 24 20:10:51 compute-0 sudo[255711]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Validating config file
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying service configuration files
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Deleting /etc/ceph
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Creating directory /etc/ceph
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Writing out command to execute
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:10:51 compute-0 nova_compute[255851]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 20:10:51 compute-0 nova_compute[255851]: ++ cat /run_command
Nov 24 20:10:51 compute-0 nova_compute[255851]: + CMD=nova-compute
Nov 24 20:10:51 compute-0 nova_compute[255851]: + ARGS=
Nov 24 20:10:51 compute-0 nova_compute[255851]: + sudo kolla_copy_cacerts
Nov 24 20:10:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1172 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:51 compute-0 nova_compute[255851]: + [[ ! -n '' ]]
Nov 24 20:10:51 compute-0 nova_compute[255851]: + . kolla_extend_start
Nov 24 20:10:51 compute-0 nova_compute[255851]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 20:10:51 compute-0 nova_compute[255851]: Running command: 'nova-compute'
Nov 24 20:10:51 compute-0 nova_compute[255851]: + umask 0022
Nov 24 20:10:51 compute-0 nova_compute[255851]: + exec nova-compute
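The trace above is the stock kolla_start sequence: kolla_set_configs copies files as directed by /var/lib/kolla/config_files/config.json, the command to run is written to /run_command, and the wrapper execs it. A minimal config.json of the shape this log implies (source/dest paths taken from the copy messages above; owner and perm are assumed values):

  {
    "command": "nova-compute",
    "config_files": [
      {
        "source": "/var/lib/kolla/config_files/01-nova.conf",
        "dest": "/etc/nova/nova.conf.d/01-nova.conf",
        "owner": "nova",
        "perm": "0600"
      }
    ]
  }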
Nov 24 20:10:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:51.793+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:52.087+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:52 compute-0 sudo[255839]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:10:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:10:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:10:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:10:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev db094fb3-d420-4e74-899d-326c4169d9fd does not exist
Nov 24 20:10:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f3e3f335-0ce3-4595-882c-5855e728525e does not exist
Nov 24 20:10:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a70d646e-9332-4ddc-ba95-5a3dadac478a does not exist
Nov 24 20:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:10:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:10:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:10:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:10:52 compute-0 sudo[255939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:52 compute-0 sudo[255939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:52 compute-0 sudo[255939]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:52 compute-0 sudo[255986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:10:52 compute-0 sudo[255986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:52 compute-0 sudo[255986]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:52 compute-0 sudo[256036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:52 compute-0 sudo[256036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:52 compute-0 sudo[256036]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:52 compute-0 sudo[256084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:10:52 compute-0 sudo[256084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
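Here cephadm drives ceph-volume in batch mode against three pre-built LVs. The same invocation can be rehearsed non-destructively by appending --report, which prints the plan instead of creating OSDs (a sketch; flags as in upstream ceph-volume):

  cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
    lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
    --report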
Nov 24 20:10:52 compute-0 ceph-mon[75677]: pgmap v815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1172 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:10:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:10:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:10:52 compute-0 python3.9[256159]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:52.761+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:52 compute-0 podman[256217]: 2025-11-24 20:10:52.826509224 +0000 UTC m=+0.064195184 container create bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_zhukovsky, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 20:10:52 compute-0 systemd[1]: Started libpod-conmon-bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7.scope.
Nov 24 20:10:52 compute-0 podman[256217]: 2025-11-24 20:10:52.80134808 +0000 UTC m=+0.039034040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:10:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:10:52 compute-0 podman[256217]: 2025-11-24 20:10:52.955636692 +0000 UTC m=+0.193322712 container init bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:10:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:52 compute-0 podman[256217]: 2025-11-24 20:10:52.96995012 +0000 UTC m=+0.207636070 container start bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:10:52 compute-0 awesome_zhukovsky[256239]: 167 167
Nov 24 20:10:52 compute-0 systemd[1]: libpod-bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7.scope: Deactivated successfully.
Nov 24 20:10:52 compute-0 podman[256217]: 2025-11-24 20:10:52.982437139 +0000 UTC m=+0.220123099 container attach bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 20:10:52 compute-0 podman[256217]: 2025-11-24 20:10:52.985044777 +0000 UTC m=+0.222730727 container died bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b792dcd5ba91fa544e61e3ceefc35e889b1e81dea70700cadb5dc4640c4bb196-merged.mount: Deactivated successfully.
Nov 24 20:10:53 compute-0 podman[256217]: 2025-11-24 20:10:53.060150249 +0000 UTC m=+0.297836169 container remove bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_zhukovsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:10:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:53.057+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:53 compute-0 systemd[1]: libpod-conmon-bedeec77b163bd35a85c8a83c870ef17d9b85cd1bf4fcabb09403b999f4f84d7.scope: Deactivated successfully.
Nov 24 20:10:53 compute-0 podman[256338]: 2025-11-24 20:10:53.296881185 +0000 UTC m=+0.049441345 container create 99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:10:53 compute-0 systemd[1]: Started libpod-conmon-99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8.scope.
Nov 24 20:10:53 compute-0 podman[256338]: 2025-11-24 20:10:53.273333704 +0000 UTC m=+0.025893884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:10:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d87c00e3c198835cf2ccc59969a8ade815942ac626692146e44a6efc132e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d87c00e3c198835cf2ccc59969a8ade815942ac626692146e44a6efc132e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d87c00e3c198835cf2ccc59969a8ade815942ac626692146e44a6efc132e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d87c00e3c198835cf2ccc59969a8ade815942ac626692146e44a6efc132e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d87c00e3c198835cf2ccc59969a8ade815942ac626692146e44a6efc132e0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:53 compute-0 podman[256338]: 2025-11-24 20:10:53.391155613 +0000 UTC m=+0.143715813 container init 99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nobel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:10:53 compute-0 podman[256338]: 2025-11-24 20:10:53.411273433 +0000 UTC m=+0.163833583 container start 99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:10:53 compute-0 podman[256338]: 2025-11-24 20:10:53.43577096 +0000 UTC m=+0.188331110 container attach 99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:10:53 compute-0 python3.9[256409]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:53.766+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.791 255871 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.791 255871 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.791 255871 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.792 255871 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.928 255871 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.964 255871 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:10:53 compute-0 nova_compute[255851]: 2025-11-24 20:10:53.964 255871 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
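The failing grep is expected: this is the startup probe (run via oslo.concurrency, originating in the os-brick iSCSI connector) that looks for the node.session.scan string inside the iscsiadm binary to decide whether manual iSCSI scans are supported. Exit status 1 only means the string was absent, which follows from /usr/sbin/iscsiadm having been replaced by the run-on-host shim during kolla_set_configs above:

  grep -F node.session.scan /sbin/iscsiadm; echo "exit=$?"   # 1 = marker absent, manual scan treated as unsupported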
Nov 24 20:10:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:54.104+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:10:54 compute-0 keen_nobel[256378]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:10:54 compute-0 keen_nobel[256378]: --> relative data size: 1.0
Nov 24 20:10:54 compute-0 keen_nobel[256378]: --> All data devices are unavailable
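"All data devices are unavailable" means ceph-volume rejected every candidate LV, typically because each already carries an OSD or fails an acceptance filter; on a re-run against existing OSDs this is the expected outcome rather than an error. ceph-volume can be asked for its view of the devices:

  cephadm ceph-volume -- inventory
  cephadm ceph-volume -- lvm list   # shows LVs already tagged for existing OSDs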
Nov 24 20:10:54 compute-0 systemd[1]: libpod-99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8.scope: Deactivated successfully.
Nov 24 20:10:54 compute-0 systemd[1]: libpod-99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8.scope: Consumed 1.026s CPU time.
Nov 24 20:10:54 compute-0 podman[256338]: 2025-11-24 20:10:54.54327569 +0000 UTC m=+1.295835880 container died 99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.551 255871 INFO nova.virt.driver [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 20:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d87c00e3c198835cf2ccc59969a8ade815942ac626692146e44a6efc132e0a-merged.mount: Deactivated successfully.
Nov 24 20:10:54 compute-0 python3.9[256580]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 20:10:54 compute-0 podman[256338]: 2025-11-24 20:10:54.611795949 +0000 UTC m=+1.364356109 container remove 99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_nobel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:10:54 compute-0 systemd[1]: libpod-conmon-99f9ddfba73cf05385860bbccef3b556ce65a5b67ec78aacf6a2bd5056f2e7c8.scope: Deactivated successfully.
Nov 24 20:10:54 compute-0 sudo[256084]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:54 compute-0 ceph-mon[75677]: pgmap v816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:54 compute-0 sudo[256605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:54 compute-0 sudo[256605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:54 compute-0 sudo[256605]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.743 255871 INFO nova.compute.provider_config [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.757 255871 DEBUG oslo_concurrency.lockutils [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.757 255871 DEBUG oslo_concurrency.lockutils [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.757 255871 DEBUG oslo_concurrency.lockutils [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
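The Acquiring/Acquired/Releasing triplet above is the standard DEBUG trace from oslo.concurrency's lock helper; the service takes "singleton_lock" briefly while starting up. A minimal sketch that reproduces the same three log lines, assuming oslo.concurrency is installed and DEBUG logging is enabled:

    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    # lockutils.lock() is a context manager; entering and leaving it emits
    # the Acquiring/Acquired/Releasing DEBUG lines seen in the journal.
    with lockutils.lock("singleton_lock"):
        pass  # critical section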
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.758 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.758 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.758 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.758 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.759 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.759 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
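Everything from the banner above through the end of the dump comes from a single oslo.config call: each registered option is printed as "name = value" with the emitting function and source file appended. A minimal sketch of that mechanism, assuming oslo.config is installed (the two options registered here are illustrative, not nova's real option set):

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.BoolOpt("debug", default=True),                 # illustrative
        cfg.StrOpt("state_path", default="/var/lib/nova"),  # illustrative
    ])
    CONF([])  # parse an empty command line

    # log_opt_values() writes the separator banner, the "gathered from"
    # preamble and one "name = value" line per option, as seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)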
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.759 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.759 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.760 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.760 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.760 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.760 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.760 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.761 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.761 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.761 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.761 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.761 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.762 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.762 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.762 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.762 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.763 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.763 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.763 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.763 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.763 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.764 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.764 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.764 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.764 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.765 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.765 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.765 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.765 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.765 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.766 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.766 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.766 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.766 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.767 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.767 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.767 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.767 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.767 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.768 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.768 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.768 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.768 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.768 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.769 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.769 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.769 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.769 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.769 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.770 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.770 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.770 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.770 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.770 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.771 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.771 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.771 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.771 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.771 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.772 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.772 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.772 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.772 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.772 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.773 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.773 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.773 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.773 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.773 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.774 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.774 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.774 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.774 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.774 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.775 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.775 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.775 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.775 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.775 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.776 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.776 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.776 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.776 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.776 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.776 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.777 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.777 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.777 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.777 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.777 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.778 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.778 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.778 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.778 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.778 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.779 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.779 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.779 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.779 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.780 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.780 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.780 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.780 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.780 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.781 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.781 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.781 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.781 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.781 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.782 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.782 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.782 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.782 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.782 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.783 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.784 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.785 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.786 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.787 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.788 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.789 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.790 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.790 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.790 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.790 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.790 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.790 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.791 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.791 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.791 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.791 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.791 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.791 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.792 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.793 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.793 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.793 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.793 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.793 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.793 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.794 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.795 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.796 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.797 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.798 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.799 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.799 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.799 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.799 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.799 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 sudo[256648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.799 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.800 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.801 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.801 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.801 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.801 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.801 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.801 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.802 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 sudo[256648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.803 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.804 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.804 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.804 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.804 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.804 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.804 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.805 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.805 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.805 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.805 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.805 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.806 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.806 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.806 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.806 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.806 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 sudo[256648]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.806 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.807 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.807 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.807 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.807 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.807 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.808 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.808 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.808 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.808 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.808 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.808 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.809 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.809 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.809 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.809 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.809 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.809 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.810 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.810 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.810 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.810 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.810 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.810 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.811 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.812 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.813 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.813 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.813 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:54.811+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.813 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.813 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.814 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.814 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.814 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.814 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.814 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.814 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.815 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.815 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.815 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.815 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.815 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.816 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.817 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.818 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.818 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.818 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.818 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.818 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.818 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.819 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.819 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.819 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.819 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.819 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.820 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.821 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.821 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.821 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.821 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.821 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.821 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.822 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.823 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.824 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.825 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.826 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.826 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.826 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.826 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.826 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.827 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.828 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.828 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.828 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.828 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.828 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.828 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.829 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.829 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.829 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.829 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.829 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.830 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.830 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.830 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.830 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.830 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.830 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.831 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.832 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.832 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.832 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.832 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.832 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.832 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.833 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.833 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.833 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.833 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.833 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.833 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.834 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.834 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.834 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.834 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.834 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.834 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.835 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.836 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.837 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.838 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.839 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.839 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.839 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.839 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.839 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.839 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.840 255871 WARNING oslo_config.cfg [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 20:10:54 compute-0 nova_compute[255851]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 20:10:54 compute-0 nova_compute[255851]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 20:10:54 compute-0 nova_compute[255851]: and ``live_migration_inbound_addr`` respectively.
Nov 24 20:10:54 compute-0 nova_compute[255851]: ).  Its value may be silently ignored in the future.
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.840 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
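[editor's note] The deprecation warning above names the two options that supersede live_migration_uri. A minimal sketch of the equivalent nova.conf stanza, assuming a TLS scheme to match the qemu+tls URI printed in this dump (and consistent with live_migration_with_native_tls = True below); the inbound address value is a hypothetical placeholder, not taken from this log:

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system (deprecated for removal)
    live_migration_scheme = tls
    # hypothetical placeholder; set to the address this host should receive
    # incoming live migrations on (fills the role of %s in the old URI)
    live_migration_inbound_addr = <migration-hostname-or-ip>

With both options set, nova composes the target URI itself, so the printf-style %s template is no longer needed.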
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.840 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.840 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.840 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.840 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.841 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.842 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.842 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.842 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.842 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.842 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.842 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.843 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rbd_secret_uuid        = 05e060a3-406b-57f0-89d2-ec35f5b09305 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.843 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.843 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.843 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.843 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.844 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.844 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.844 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.844 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.844 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.844 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.845 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.845 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.845 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.845 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.846 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.846 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.846 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.846 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.846 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.847 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.847 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.847 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.847 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.847 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.848 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.848 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.848 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.848 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.848 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.848 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.849 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.849 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.849 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.849 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.849 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.850 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.850 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.850 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.850 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.850 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.850 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.851 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.852 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.853 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.853 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.853 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.853 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.853 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.853 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.854 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.854 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.854 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.854 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.854 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.854 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.855 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.855 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.855 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.855 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.855 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.855 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.856 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.857 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.858 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.858 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.858 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.858 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.858 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.858 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.859 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.860 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.861 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.861 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.861 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.861 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.861 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.861 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.862 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.862 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.862 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.862 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.862 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.862 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.863 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.863 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.863 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.863 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.863 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.863 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.864 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.865 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.866 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.867 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.868 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.869 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.869 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.869 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.869 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.869 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.869 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.870 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 podman[256653]: 2025-11-24 20:10:54.870840273 +0000 UTC m=+0.103854461 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.871 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.872 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.872 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.872 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.872 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.872 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.872 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.873 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.874 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.875 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.876 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.877 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.878 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.879 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.879 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.879 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.879 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.879 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.880 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.881 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.881 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.881 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.881 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.882 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.883 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.883 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.883 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.883 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.883 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.883 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.884 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.885 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.886 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.886 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.886 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.886 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.886 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.886 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.887 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.888 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.889 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.890 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.891 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.892 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 sudo[256689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.893 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.894 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.895 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 sudo[256689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.896 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.897 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.898 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.898 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.898 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.898 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.898 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.898 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.899 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 sudo[256689]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.900 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.901 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.902 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.903 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.904 255871 DEBUG oslo_service.service [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.906 255871 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.917 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.917 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.918 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 20:10:54 compute-0 nova_compute[255851]: 2025-11-24 20:10:54.918 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 20:10:54 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 20:10:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:54 compute-0 sudo[256756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:10:54 compute-0 sudo[256756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:54 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 20:10:55 compute-0 nova_compute[255851]: 2025-11-24 20:10:55.011 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f48ad9e6460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 20:10:55 compute-0 nova_compute[255851]: 2025-11-24 20:10:55.013 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f48ad9e6460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 20:10:55 compute-0 nova_compute[255851]: 2025-11-24 20:10:55.015 255871 INFO nova.virt.libvirt.driver [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Connection event '1' reason 'None'
Nov 24 20:10:55 compute-0 nova_compute[255851]: 2025-11-24 20:10:55.039 255871 WARNING nova.virt.libvirt.driver [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 20:10:55 compute-0 nova_compute[255851]: 2025-11-24 20:10:55.040 255871 DEBUG nova.virt.libvirt.volume.mount [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 20:10:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:55.103+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.443570665 +0000 UTC m=+0.065373497 container create b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.40473861 +0000 UTC m=+0.026541512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:10:55 compute-0 sudo[256970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoinkdcrhspllvtdgfubnmnurjjmvtrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015054.8627048-1549-39290169310748/AnsiballZ_podman_container.py'
Nov 24 20:10:55 compute-0 systemd[1]: Started libpod-conmon-b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721.scope.
Nov 24 20:10:55 compute-0 sudo[256970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.612221964 +0000 UTC m=+0.234024826 container init b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.627268851 +0000 UTC m=+0.249071713 container start b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:10:55 compute-0 objective_neumann[256975]: 167 167
Nov 24 20:10:55 compute-0 systemd[1]: libpod-b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721.scope: Deactivated successfully.
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.648317667 +0000 UTC m=+0.270120579 container attach b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.649279652 +0000 UTC m=+0.271082544 container died b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 20:10:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:55 compute-0 python3.9[256977]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 20:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-df5b262c4cf8ad9b54d92207e1e84d75e033ff9eb2b2cc9094a3e1d0463f95f4-merged.mount: Deactivated successfully.
Nov 24 20:10:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:55.851+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:55 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:10:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:55 compute-0 podman[256918]: 2025-11-24 20:10:55.958775398 +0000 UTC m=+0.580578250 container remove b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:10:56 compute-0 systemd[1]: libpod-conmon-b6cbf7445c9f14f918f7ef428c3842909971d3e47900b05306aacf33c58c9721.scope: Deactivated successfully.
Nov 24 20:10:56 compute-0 sudo[256970]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.122 255871 INFO nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]: 
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <host>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <uuid>e19f0d46-fa86-4b57-a68a-08490f1ee667</uuid>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <cpu>
Nov 24 20:10:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:56.120+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <arch>x86_64</arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model>EPYC-Rome-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <vendor>AMD</vendor>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <microcode version='16777317'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <signature family='23' model='49' stepping='0'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='x2apic'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='tsc-deadline'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='osxsave'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='hypervisor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='tsc_adjust'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='spec-ctrl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='stibp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='arch-capabilities'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='cmp_legacy'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='topoext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='virt-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='lbrv'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='tsc-scale'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='vmcb-clean'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='pause-filter'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='pfthreshold'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='svme-addr-chk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='rdctl-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='mds-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature name='pschange-mc-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <pages unit='KiB' size='4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <pages unit='KiB' size='2048'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <pages unit='KiB' size='1048576'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <power_management>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <suspend_mem/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </power_management>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <iommu support='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <migration_features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <live/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <uri_transports>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <uri_transport>tcp</uri_transport>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <uri_transport>rdma</uri_transport>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </uri_transports>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </migration_features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <topology>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <cells num='1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <cell id='0'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           <memory unit='KiB'>7864308</memory>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           <pages unit='KiB' size='4'>1966077</pages>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           <distances>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <sibling id='0' value='10'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           </distances>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           <cpus num='8'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:           </cpus>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         </cell>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </cells>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </topology>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <cache>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </cache>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <secmodel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model>selinux</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <doi>0</doi>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </secmodel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <secmodel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model>dac</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <doi>0</doi>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </secmodel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </host>
Nov 24 20:10:56 compute-0 nova_compute[255851]: 
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <guest>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <os_type>hvm</os_type>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <arch name='i686'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <wordsize>32</wordsize>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <domain type='qemu'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <domain type='kvm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <pae/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <nonpae/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <acpi default='on' toggle='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <apic default='on' toggle='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <cpuselection/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <deviceboot/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <disksnapshot default='on' toggle='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <externalSnapshot/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </guest>
Nov 24 20:10:56 compute-0 nova_compute[255851]: 
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <guest>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <os_type>hvm</os_type>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <arch name='x86_64'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <wordsize>64</wordsize>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <domain type='qemu'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <domain type='kvm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <acpi default='on' toggle='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <apic default='on' toggle='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <cpuselection/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <deviceboot/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <disksnapshot default='on' toggle='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <externalSnapshot/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </guest>
Nov 24 20:10:56 compute-0 nova_compute[255851]: 
Nov 24 20:10:56 compute-0 nova_compute[255851]: </capabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]: 
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.135 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.169 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 20:10:56 compute-0 nova_compute[255851]: <domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <domain>kvm</domain>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <arch>i686</arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <vcpu max='240'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <iothreads supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <os supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='firmware'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <loader supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>rom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pflash</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='readonly'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>yes</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='secure'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </loader>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </os>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='maximum' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='maximumMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-model' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <vendor>AMD</vendor>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='x2apic'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='stibp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='succor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lbrv'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='custom' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Dhyana-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-128'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-256'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-512'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <memoryBacking supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='sourceType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>anonymous</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>memfd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </memoryBacking>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <disk supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='diskDevice'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>disk</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cdrom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>floppy</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>lun</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ide</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>fdc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>sata</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </disk>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <graphics supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vnc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egl-headless</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </graphics>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <video supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='modelType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vga</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cirrus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>none</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>bochs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ramfb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </video>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hostdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='mode'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>subsystem</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='startupPolicy'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>mandatory</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>requisite</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>optional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='subsysType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pci</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='capsType'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='pciBackend'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hostdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <rng supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>random</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </rng>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <filesystem supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='driverType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>path</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>handle</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtiofs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </filesystem>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <tpm supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-tis</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-crb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emulator</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>external</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendVersion'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>2.0</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </tpm>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <redirdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </redirdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <channel supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </channel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <crypto supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </crypto>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <interface supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>passt</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </interface>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <panic supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>isa</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>hyperv</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </panic>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <console supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>null</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dev</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pipe</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stdio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>udp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tcp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu-vdagent</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </console>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <gic supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <vmcoreinfo supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <genid supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backingStoreInput supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backup supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <async-teardown supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <ps2 supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sev supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sgx supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hyperv supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='features'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>relaxed</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vapic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>spinlocks</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vpindex</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>runtime</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>synic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stimer</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reset</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vendor_id</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>frequencies</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reenlightenment</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tlbflush</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ipi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>avic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emsr_bitmap</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>xmm_input</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <spinlocks>4095</spinlocks>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <stimer_direct>on</stimer_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hyperv>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <launchSecurity supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='sectype'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tdx</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </launchSecurity>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </features>
Nov 24 20:10:56 compute-0 nova_compute[255851]: </domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.182 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 20:10:56 compute-0 nova_compute[255851]: <domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <domain>kvm</domain>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <arch>i686</arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <vcpu max='4096'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <iothreads supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <os supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='firmware'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <loader supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>rom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pflash</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='readonly'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>yes</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='secure'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </loader>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </os>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='maximum' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='maximumMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-model' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <vendor>AMD</vendor>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='x2apic'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='stibp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='succor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lbrv'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='custom' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 podman[257052]: 2025-11-24 20:10:56.23891785 +0000 UTC m=+0.061726270 container create e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Dhyana-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-128'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-256'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-512'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1b1d579a703c392cd7da509bb455fe5d9263e92e7aa7f3514f3a822b1727fe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1b1d579a703c392cd7da509bb455fe5d9263e92e7aa7f3514f3a822b1727fe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1b1d579a703c392cd7da509bb455fe5d9263e92e7aa7f3514f3a822b1727fe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1b1d579a703c392cd7da509bb455fe5d9263e92e7aa7f3514f3a822b1727fe1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 systemd[1]: Started libpod-conmon-e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d.scope.
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom'>
Nov 24 20:10:56 compute-0 podman[257052]: 2025-11-24 20:10:56.213420126 +0000 UTC m=+0.036228616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <memoryBacking supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='sourceType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>anonymous</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>memfd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </memoryBacking>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <disk supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='diskDevice'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>disk</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cdrom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>floppy</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>lun</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>fdc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>sata</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </disk>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <graphics supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vnc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egl-headless</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </graphics>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <video supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='modelType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vga</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cirrus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>none</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>bochs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ramfb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </video>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hostdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='mode'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>subsystem</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='startupPolicy'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>mandatory</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>requisite</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>optional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='subsysType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pci</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='capsType'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='pciBackend'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hostdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <rng supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>random</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </rng>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <filesystem supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='driverType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>path</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>handle</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtiofs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </filesystem>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <tpm supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-tis</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-crb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emulator</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>external</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendVersion'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>2.0</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </tpm>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <redirdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </redirdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <channel supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </channel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <crypto supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </crypto>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <interface supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>passt</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </interface>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <panic supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>isa</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>hyperv</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </panic>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <console supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>null</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dev</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pipe</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stdio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>udp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tcp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu-vdagent</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </console>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <gic supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <vmcoreinfo supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <genid supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backingStoreInput supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backup supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <async-teardown supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <ps2 supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sev supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sgx supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hyperv supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='features'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>relaxed</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vapic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>spinlocks</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vpindex</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>runtime</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>synic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stimer</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reset</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vendor_id</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>frequencies</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reenlightenment</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tlbflush</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ipi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>avic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emsr_bitmap</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>xmm_input</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <spinlocks>4095</spinlocks>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <stimer_direct>on</stimer_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hyperv>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <launchSecurity supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='sectype'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tdx</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </launchSecurity>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </features>
Nov 24 20:10:56 compute-0 nova_compute[255851]: </domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.223 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
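The <domainCapabilities> document above is the raw return value of libvirt's virConnectGetDomainCapabilities API, which nova's _get_domain_capabilities fetches per (arch, machine type) pair, as the debug line above shows. A minimal libvirt-python sketch that retrieves the same XML and summarizes the custom CPU-model blockers follows; the qemu:///system URI and the q35 machine type are assumptions for illustration, and the emulator path is the one reported in <path> above.

    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings, assumed installed

    # Assumed URI: the system libvirtd on the compute host.
    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator path from <path> above
        'x86_64',                 # arch
        'q35',                    # machine type (assumption; 'pc' also appears above)
        'kvm',                    # virt type
        0)                        # flags
    conn.close()

    root = ET.fromstring(caps_xml)
    # Walk the custom CPU mode: print each model's usability and, where
    # usable='no', the host-missing features listed in its <blockers> element.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(f"{model.text}: usable={model.get('usable')}")
    for blockers in root.findall("./cpu/mode[@name='custom']/blockers"):
        feats = ', '.join(f.get('name') for f in blockers.findall('feature'))
        print(f"{blockers.get('model')} blocked by: {feats}")

On this host the sketch would report, for example, EPYC-Rome as blocked by xsaves, matching the <blockers model='EPYC-Rome'> entry in the dump.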
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.228 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 20:10:56 compute-0 nova_compute[255851]: <domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <domain>kvm</domain>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <arch>x86_64</arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <vcpu max='240'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <iothreads supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <os supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='firmware'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <loader supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>rom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pflash</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='readonly'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>yes</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='secure'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </loader>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </os>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='maximum' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='maximumMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-model' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <vendor>AMD</vendor>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='x2apic'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:10:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='stibp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='succor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lbrv'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='custom' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Dhyana-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-128'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-256'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-512'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 podman[257052]: 2025-11-24 20:10:56.36327468 +0000 UTC m=+0.186083090 container init e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 podman[257052]: 2025-11-24 20:10:56.370171212 +0000 UTC m=+0.192979602 container start e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <memoryBacking supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='sourceType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>anonymous</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>memfd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </memoryBacking>
Nov 24 20:10:56 compute-0 podman[257052]: 2025-11-24 20:10:56.383708899 +0000 UTC m=+0.206517309 container attach e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <disk supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='diskDevice'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>disk</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cdrom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>floppy</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>lun</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ide</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>fdc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>sata</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </disk>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <graphics supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vnc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egl-headless</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </graphics>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <video supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='modelType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vga</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cirrus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>none</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>bochs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ramfb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </video>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hostdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='mode'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>subsystem</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='startupPolicy'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>mandatory</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>requisite</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>optional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='subsysType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pci</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='capsType'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='pciBackend'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hostdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <rng supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>random</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </rng>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <filesystem supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='driverType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>path</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>handle</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtiofs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </filesystem>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <tpm supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-tis</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-crb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emulator</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>external</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendVersion'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>2.0</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </tpm>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <redirdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </redirdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <channel supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </channel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <crypto supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </crypto>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <interface supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>passt</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </interface>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <panic supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>isa</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>hyperv</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </panic>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <console supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>null</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dev</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pipe</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stdio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>udp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tcp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu-vdagent</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </console>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <gic supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <vmcoreinfo supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <genid supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backingStoreInput supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backup supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <async-teardown supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <ps2 supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sev supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sgx supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hyperv supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='features'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>relaxed</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vapic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>spinlocks</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vpindex</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>runtime</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>synic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stimer</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reset</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vendor_id</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>frequencies</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reenlightenment</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tlbflush</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ipi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>avic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emsr_bitmap</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>xmm_input</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <spinlocks>4095</spinlocks>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <stimer_direct>on</stimer_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hyperv>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <launchSecurity supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='sectype'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tdx</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </launchSecurity>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </features>
Nov 24 20:10:56 compute-0 nova_compute[255851]: </domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.298 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 20:10:56 compute-0 nova_compute[255851]: <domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <domain>kvm</domain>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <arch>x86_64</arch>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <vcpu max='4096'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <iothreads supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <os supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='firmware'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>efi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <loader supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>rom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pflash</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='readonly'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>yes</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='secure'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>yes</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>no</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </loader>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </os>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='maximum' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='maximumMigratable'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>on</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>off</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='host-model' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <vendor>AMD</vendor>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='x2apic'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='stibp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='succor'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lbrv'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <mode name='custom' supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Broadwell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Cooperlake-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Denverton-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Dhyana-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='auto-ibrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amd-psfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='no-nested-data-bp'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='null-sel-clr-base'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='stibp-always-on'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='EPYC-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-128'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-256'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx10-512'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='prefetchiti'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Haswell-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='IvyBridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='KnightsMill-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4fmaps'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-4vnniw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512er'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512pf'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fma4'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tbm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xop'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='amx-tile'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-bf16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-fp16'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bitalg'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vbmi2'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrc'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fzrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='la57'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='taa-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='tsx-ldtrk'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xfd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='SierraForest-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ifma'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-ne-convert'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx-vnni-int8'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='bus-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cmpccxadd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fbsdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='fsrs'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ibrs-all'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mcdt-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pbrsb-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='psdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='serialize'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vaes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='vpclmulqdq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='hle'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='rtm'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512bw'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512cd'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512dq'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512f'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='avx512vl'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='invpcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pcid'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='pku'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='mpx'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v2'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v3'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='core-capability'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='split-lock-detect'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='Snowridge-v4'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='cldemote'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='erms'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='gfni'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdir64b'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='movdiri'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='xsaves'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='athlon-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='core2duo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='coreduo-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='n270-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='ss'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <blockers model='phenom-v1'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnow'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <feature name='3dnowext'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </blockers>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </mode>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </cpu>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <memoryBacking supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <enum name='sourceType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>anonymous</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <value>memfd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </memoryBacking>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <disk supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='diskDevice'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>disk</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cdrom</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>floppy</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>lun</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>fdc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>sata</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </disk>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <graphics supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vnc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egl-headless</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </graphics>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <video supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='modelType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vga</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>cirrus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>none</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>bochs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ramfb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </video>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hostdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='mode'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>subsystem</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='startupPolicy'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>mandatory</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>requisite</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>optional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='subsysType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pci</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>scsi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='capsType'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='pciBackend'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hostdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <rng supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtio-non-transitional</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>random</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>egd</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </rng>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <filesystem supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='driverType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>path</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>handle</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>virtiofs</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </filesystem>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <tpm supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-tis</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tpm-crb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emulator</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>external</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendVersion'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>2.0</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </tpm>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <redirdev supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='bus'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>usb</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </redirdev>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <channel supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </channel>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <crypto supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendModel'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>builtin</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </crypto>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <interface supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='backendType'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>default</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>passt</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </interface>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <panic supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='model'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>isa</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>hyperv</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </panic>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <console supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='type'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>null</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vc</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pty</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dev</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>file</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>pipe</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stdio</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>udp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tcp</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>unix</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>qemu-vdagent</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>dbus</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </console>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </devices>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   <features>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <gic supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <vmcoreinfo supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <genid supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backingStoreInput supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <backup supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <async-teardown supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <ps2 supported='yes'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sev supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <sgx supported='no'/>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <hyperv supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='features'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>relaxed</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vapic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>spinlocks</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vpindex</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>runtime</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>synic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>stimer</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reset</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>vendor_id</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>frequencies</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>reenlightenment</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tlbflush</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>ipi</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>avic</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>emsr_bitmap</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>xmm_input</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <spinlocks>4095</spinlocks>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <stimer_direct>on</stimer_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </defaults>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </hyperv>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     <launchSecurity supported='yes'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       <enum name='sectype'>
Nov 24 20:10:56 compute-0 nova_compute[255851]:         <value>tdx</value>
Nov 24 20:10:56 compute-0 nova_compute[255851]:       </enum>
Nov 24 20:10:56 compute-0 nova_compute[255851]:     </launchSecurity>
Nov 24 20:10:56 compute-0 nova_compute[255851]:   </features>
Nov 24 20:10:56 compute-0 nova_compute[255851]: </domainCapabilities>
Nov 24 20:10:56 compute-0 nova_compute[255851]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
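The <domainCapabilities> document above is the raw result of libvirt's domain-capabilities query; the usable='yes'/'no' attributes and the <blockers> lists are what nova consults when validating guest CPU models. A minimal sketch of reproducing the query outside nova, assuming the libvirt-python bindings and a reachable qemu:///system socket (the URI and the usable-model filter are illustrative, not nova's exact code):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    # The same libvirt API that host.py wraps in _get_domain_capabilities().
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    root = ET.fromstring(caps_xml)
    # Keep only custom-mode CPU models the host can run without blockers,
    # i.e. the usable='yes' entries such as the Westmere variants above.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print(model.text)

The shell equivalent is 'virsh domcapabilities --virttype kvm --arch x86_64', which prints the same XML that nova logged here.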
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.384 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.385 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.385 255871 DEBUG nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.385 255871 INFO nova.virt.libvirt.host [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Secure Boot support detected
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.388 255871 INFO nova.virt.libvirt.driver [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.388 255871 INFO nova.virt.libvirt.driver [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.403 255871 DEBUG nova.virt.libvirt.driver [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.446 255871 INFO nova.virt.node [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Determined node identity 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from /var/lib/nova/compute_id
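The node identity logged above is persisted as a plain-text UUID file; a minimal sketch of the same lookup (the path is taken from the log line; parsing through the uuid module is illustrative, not nova's exact code):

    import uuid
    from pathlib import Path

    # Stable compute node identity, written once and re-read on startup.
    node_id = uuid.UUID(Path('/var/lib/nova/compute_id').read_text().strip())
    print(node_id)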
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.466 255871 WARNING nova.compute.manager [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Compute nodes ['36172ea5-11d9-49c4-91b9-fe09a4a54b66'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.508 255871 INFO nova.compute.manager [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.549 255871 WARNING nova.compute.manager [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.549 255871 DEBUG oslo_concurrency.lockutils [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.550 255871 DEBUG oslo_concurrency.lockutils [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.550 255871 DEBUG oslo_concurrency.lockutils [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
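The three lockutils lines above are the standard oslo.concurrency trace for a named in-process lock: the acquire request, the acquisition with the wait time, and the release with the hold time. A minimal sketch of the decorator pattern that emits them, reusing the 'compute_resources' lock name from the log (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # While this body runs the lock is held; lockutils emits the
        # "Acquiring lock", "acquired :: waited", and "released :: held"
        # DEBUG lines seen above from lockutils.py.
        pass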
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.550 255871 DEBUG nova.compute.resource_tracker [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.551 255871 DEBUG oslo_concurrency.processutils [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:10:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:10:56 compute-0 sudo[257226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgwjwwudjxmskkrlobtjdgdpoyrirsgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015056.3416164-1557-155015546689000/AnsiballZ_systemd.py'
Nov 24 20:10:56 compute-0 sudo[257226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:10:56 compute-0 ceph-mon[75677]: pgmap v817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:56.889+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:10:56 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/226088652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:10:56 compute-0 nova_compute[255851]: 2025-11-24 20:10:56.995 255871 DEBUG oslo_concurrency.processutils [None req-792fc743-062f-4e80-b120-834867de5268 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
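
The two nova_compute entries above bracket the storage audit: the resource tracker shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" through oslo_concurrency.processutils and gets the cluster totals back in 0.445s. A minimal sketch of the same query follows; the command line is taken verbatim from the log, while the JSON field names ("stats", "pools") assume the usual "ceph df --format=json" layout and should be treated as assumptions.

    # Hedged sketch: issue the same "ceph df" query nova's resource tracker
    # runs above and print the totals. Field names are assumed, not confirmed
    # by the log itself.
    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf]
        )
        return json.loads(out)

    if __name__ == "__main__":
        df = ceph_df()
        stats = df.get("stats", {})
        print("total:", stats.get("total_bytes"),
              "avail:", stats.get("total_avail_bytes"))
        for pool in df.get("pools", []):
            print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))
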
Nov 24 20:10:57 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 20:10:57 compute-0 brave_shtern[257083]: {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:     "0": [
Nov 24 20:10:57 compute-0 brave_shtern[257083]:         {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "devices": [
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "/dev/loop3"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             ],
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_name": "ceph_lv0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_size": "21470642176",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "name": "ceph_lv0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "tags": {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cluster_name": "ceph",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.crush_device_class": "",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.encrypted": "0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osd_id": "0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.type": "block",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.vdo": "0"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             },
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "type": "block",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "vg_name": "ceph_vg0"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:         }
Nov 24 20:10:57 compute-0 brave_shtern[257083]:     ],
Nov 24 20:10:57 compute-0 brave_shtern[257083]:     "1": [
Nov 24 20:10:57 compute-0 brave_shtern[257083]:         {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "devices": [
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "/dev/loop4"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             ],
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_name": "ceph_lv1",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_size": "21470642176",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "name": "ceph_lv1",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "tags": {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cluster_name": "ceph",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.crush_device_class": "",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.encrypted": "0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osd_id": "1",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.type": "block",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.vdo": "0"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             },
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "type": "block",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "vg_name": "ceph_vg1"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:         }
Nov 24 20:10:57 compute-0 brave_shtern[257083]:     ],
Nov 24 20:10:57 compute-0 brave_shtern[257083]:     "2": [
Nov 24 20:10:57 compute-0 brave_shtern[257083]:         {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "devices": [
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "/dev/loop5"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             ],
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_name": "ceph_lv2",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_size": "21470642176",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "name": "ceph_lv2",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "tags": {
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.cluster_name": "ceph",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.crush_device_class": "",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.encrypted": "0",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osd_id": "2",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.type": "block",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:                 "ceph.vdo": "0"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             },
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "type": "block",
Nov 24 20:10:57 compute-0 brave_shtern[257083]:             "vg_name": "ceph_vg2"
Nov 24 20:10:57 compute-0 brave_shtern[257083]:         }
Nov 24 20:10:57 compute-0 brave_shtern[257083]:     ]
Nov 24 20:10:57 compute-0 brave_shtern[257083]: }
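
The brave_shtern container that prints the JSON above is a cephadm helper emitting a "ceph-volume lvm list --format json"-style payload: the top-level keys are OSD ids and each entry describes the backing logical volume. A hedged sketch that folds that payload into a small inventory; every field name used here ("devices", "lv_path", "tags", "ceph.osd_fsid") appears in the logged JSON.

    # Hedged sketch: turn the OSD-id-keyed JSON above into rows of
    # (osd_id, device, lv_path, osd_fsid).
    import json

    def osd_inventory(payload: str):
        data = json.loads(payload)
        rows = []
        for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                rows.append({
                    "osd_id": osd_id,
                    "device": lv["devices"][0],         # e.g. /dev/loop3
                    "lv_path": lv["lv_path"],           # e.g. /dev/ceph_vg0/ceph_lv0
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                })
        return rows

Applied to the output above this yields three rows: osd 0 on /dev/loop3, osd 1 on /dev/loop4, and osd 2 on /dev/loop5, each on its own ceph_vg*/ceph_lv* volume.
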
Nov 24 20:10:57 compute-0 python3.9[257228]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
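
The python3.9 entry above is Ansible's systemd module restarting edpm_nova_compute.service with daemon_reload=False; the "Stopping nova_compute container" / "Stopped" / "Starting" / "Started" systemd messages that follow are that unit cycling. Outside Ansible the same restart is a single call, sketched here with the unit name taken from the log:

    # Hedged equivalent of the ansible systemd restart logged above.
    import subprocess

    subprocess.run(
        ["systemctl", "restart", "edpm_nova_compute.service"], check=True
    )
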
Nov 24 20:10:57 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 20:10:57 compute-0 systemd[1]: libpod-e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d.scope: Deactivated successfully.
Nov 24 20:10:57 compute-0 podman[257052]: 2025-11-24 20:10:57.102002452 +0000 UTC m=+0.924810892 container died e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:10:57 compute-0 systemd[1]: Stopping nova_compute container...
Nov 24 20:10:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:57.141+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1b1d579a703c392cd7da509bb455fe5d9263e92e7aa7f3514f3a822b1727fe1-merged.mount: Deactivated successfully.
Nov 24 20:10:57 compute-0 podman[257052]: 2025-11-24 20:10:57.193038123 +0000 UTC m=+1.015846523 container remove e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:10:57 compute-0 systemd[1]: libpod-conmon-e758b83647d07de5a3e17e99f0de3f497c895fd7443f1f1352c5cc8f8bb1fc4d.scope: Deactivated successfully.
Nov 24 20:10:57 compute-0 nova_compute[255851]: 2025-11-24 20:10:57.208 255871 DEBUG oslo_concurrency.lockutils [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:10:57 compute-0 nova_compute[255851]: 2025-11-24 20:10:57.209 255871 DEBUG oslo_concurrency.lockutils [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:10:57 compute-0 nova_compute[255851]: 2025-11-24 20:10:57.209 255871 DEBUG oslo_concurrency.lockutils [None req-437812d3-446c-4b3f-9e83-a73ce942259b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:10:57 compute-0 sudo[256756]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:57 compute-0 sudo[257288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:57 compute-0 sudo[257288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:57 compute-0 sudo[257288]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:57 compute-0 sudo[257313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:10:57 compute-0 sudo[257313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:57 compute-0 sudo[257313]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:57 compute-0 sudo[257338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:10:57 compute-0 sudo[257338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:57 compute-0 sudo[257338]: pam_unix(sudo:session): session closed for user root
Nov 24 20:10:57 compute-0 virtqemud[256794]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 20:10:57 compute-0 virtqemud[256794]: hostname: compute-0
Nov 24 20:10:57 compute-0 virtqemud[256794]: End of file while reading data: Input/output error
Nov 24 20:10:57 compute-0 systemd[1]: libpod-1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e.scope: Deactivated successfully.
Nov 24 20:10:57 compute-0 systemd[1]: libpod-1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e.scope: Consumed 3.561s CPU time.
Nov 24 20:10:57 compute-0 podman[257260]: 2025-11-24 20:10:57.641788683 +0000 UTC m=+0.495655619 container died 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 20:10:57 compute-0 sudo[257363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:10:57 compute-0 sudo[257363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e-userdata-shm.mount: Deactivated successfully.
Nov 24 20:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b-merged.mount: Deactivated successfully.
Nov 24 20:10:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:57.858+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:58 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
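
The health check update above says the oldest slow op has been blocked for 1177 seconds, which places its start near 19:51:21 — well before the nova_compute restart in progress in this window, so the SLOW_OPS condition predates it. The subtraction, for reference:

    # 1177 s before the 20:10:58 health-check timestamp.
    import datetime

    print(datetime.datetime(2025, 11, 24, 20, 10, 58)
          - datetime.timedelta(seconds=1177))   # -> 2025-11-24 19:51:21
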
Nov 24 20:10:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:58.129+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:58 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/226088652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:10:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:58.834+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:10:59.089+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:10:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:59 compute-0 ceph-mon[75677]: pgmap v818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:10:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:59 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:10:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:10:59 compute-0 podman[257260]: 2025-11-24 20:10:59.641390371 +0000 UTC m=+2.495257277 container cleanup 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 24 20:10:59 compute-0 podman[257260]: nova_compute
Nov 24 20:10:59 compute-0 podman[257421]: nova_compute
Nov 24 20:10:59 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 20:10:59 compute-0 systemd[1]: Stopped nova_compute container.
Nov 24 20:10:59 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 20:10:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:10:59.838+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:10:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:10:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d482882191da7ca5d54d2aab1d4f69f56d92544d0d3c670c6b30be033fab0b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 20:10:59 compute-0 podman[257469]: 2025-11-24 20:10:59.929054991 +0000 UTC m=+0.065061757 container create b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:10:59 compute-0 podman[257443]: 2025-11-24 20:10:59.94265881 +0000 UTC m=+0.162486438 container init 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:10:59 compute-0 podman[257443]: 2025-11-24 20:10:59.957749158 +0000 UTC m=+0.177576726 container start 1de384cbb450998ff223a57a3613b23a1d0cc873023971a11d9c28cb830c031e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 24 20:10:59 compute-0 podman[257443]: nova_compute
Nov 24 20:10:59 compute-0 nova_compute[257476]: + sudo -E kolla_set_configs
Nov 24 20:10:59 compute-0 systemd[1]: Started libpod-conmon-b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5.scope.
Nov 24 20:10:59 compute-0 systemd[1]: Started nova_compute container.
Nov 24 20:10:59 compute-0 podman[257469]: 2025-11-24 20:10:59.89908808 +0000 UTC m=+0.035094916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:11:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:11:00 compute-0 sudo[257226]: pam_unix(sudo:session): session closed for user root
Nov 24 20:11:00 compute-0 podman[257469]: 2025-11-24 20:11:00.040282775 +0000 UTC m=+0.176289561 container init b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:11:00 compute-0 podman[257469]: 2025-11-24 20:11:00.057197083 +0000 UTC m=+0.193203849 container start b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:11:00 compute-0 podman[257469]: 2025-11-24 20:11:00.060928331 +0000 UTC m=+0.196935097 container attach b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:11:00 compute-0 strange_bouman[257493]: 167 167
Nov 24 20:11:00 compute-0 systemd[1]: libpod-b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5.scope: Deactivated successfully.
Nov 24 20:11:00 compute-0 podman[257469]: 2025-11-24 20:11:00.067617237 +0000 UTC m=+0.203624023 container died b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Validating config file
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying service configuration files
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 20:11:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:00.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /etc/ceph
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Creating directory /etc/ceph
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Writing out command to execute
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:11:00 compute-0 nova_compute[257476]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 20:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ce034ff9a425929fd3f10d79bc73fe645fe1b1e199c9e9af3a33227b2603a15-merged.mount: Deactivated successfully.
Nov 24 20:11:00 compute-0 nova_compute[257476]: ++ cat /run_command
Nov 24 20:11:00 compute-0 nova_compute[257476]: + CMD=nova-compute
Nov 24 20:11:00 compute-0 nova_compute[257476]: + ARGS=
Nov 24 20:11:00 compute-0 nova_compute[257476]: + sudo kolla_copy_cacerts
Nov 24 20:11:00 compute-0 podman[257469]: 2025-11-24 20:11:00.129194422 +0000 UTC m=+0.265201218 container remove b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:11:00 compute-0 systemd[1]: libpod-conmon-b192d437af7e3278d96cd643324abae0e88469d95d5d2b030c4c9fdc405143e5.scope: Deactivated successfully.
Nov 24 20:11:00 compute-0 nova_compute[257476]: Running command: 'nova-compute'
Nov 24 20:11:00 compute-0 nova_compute[257476]: + [[ ! -n '' ]]
Nov 24 20:11:00 compute-0 nova_compute[257476]: + . kolla_extend_start
Nov 24 20:11:00 compute-0 nova_compute[257476]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 20:11:00 compute-0 nova_compute[257476]: + umask 0022
Nov 24 20:11:00 compute-0 nova_compute[257476]: + exec nova-compute
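
The nova_compute[257476] trace from "sudo -E kolla_set_configs" down to "exec nova-compute" is kolla's standard start sequence: read /var/lib/kolla/config_files/config.json, delete-and-copy each source into place under the COPY_ALWAYS strategy, set permissions, write the service command to /run_command, then exec it. A compressed sketch of that loop, assuming the usual kolla config.json shape (a "command" string plus "config_files" entries with "source" and "dest"); permission handling and directory entries like /etc/ceph are elided.

    # Hedged sketch of the kolla_set_configs behaviour traced above.
    import json
    import pathlib
    import shutil

    def set_configs(path="/var/lib/kolla/config_files/config.json"):
        cfg = json.loads(pathlib.Path(path).read_text())
        for entry in cfg.get("config_files", []):
            dest = pathlib.Path(entry["dest"])
            dest.parent.mkdir(parents=True, exist_ok=True)
            if dest.exists():
                dest.unlink()                     # the "Deleting ..." lines
            shutil.copy2(entry["source"], dest)   # the "Copying ..." lines
        # "Writing out command to execute"; kolla_start later cats this file.
        pathlib.Path("/run_command").write_text(cfg["command"])
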
Nov 24 20:11:00 compute-0 podman[257570]: 2025-11-24 20:11:00.450367066 +0000 UTC m=+0.136283447 container create 83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:11:00 compute-0 podman[257570]: 2025-11-24 20:11:00.367046747 +0000 UTC m=+0.052963198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:11:00 compute-0 systemd[1]: Started libpod-conmon-83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307.scope.
Nov 24 20:11:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6819c93e1c78210886a906ac2bf4588cc65b00b2f60208254eef1d1755b09a77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6819c93e1c78210886a906ac2bf4588cc65b00b2f60208254eef1d1755b09a77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6819c93e1c78210886a906ac2bf4588cc65b00b2f60208254eef1d1755b09a77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6819c93e1c78210886a906ac2bf4588cc65b00b2f60208254eef1d1755b09a77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:00 compute-0 podman[257570]: 2025-11-24 20:11:00.566723885 +0000 UTC m=+0.252640296 container init 83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:11:00 compute-0 podman[257570]: 2025-11-24 20:11:00.58014019 +0000 UTC m=+0.266056551 container start 83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:11:00 compute-0 podman[257570]: 2025-11-24 20:11:00.645194456 +0000 UTC m=+0.331110867 container attach 83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:11:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:00 compute-0 ceph-mon[75677]: pgmap v819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:00 compute-0 sudo[257691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggiexklxccvwqkuhlskvxlpreadpxwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764015060.2916613-1566-159551550851770/AnsiballZ_podman_container.py'
Nov 24 20:11:00 compute-0 sudo[257691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 20:11:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:00.790+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:01 compute-0 python3.9[257693]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 20:11:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:01.077+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:01 compute-0 systemd[1]: Started libpod-conmon-616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180.scope.
Nov 24 20:11:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3957894e8e6c88dccaa20b6467fc0d5966fd2fe390bed161a705aee1ac43b32/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3957894e8e6c88dccaa20b6467fc0d5966fd2fe390bed161a705aee1ac43b32/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3957894e8e6c88dccaa20b6467fc0d5966fd2fe390bed161a705aee1ac43b32/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 20:11:01 compute-0 podman[257719]: 2025-11-24 20:11:01.42987345 +0000 UTC m=+0.253340776 container init 616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:11:01 compute-0 podman[257732]: 2025-11-24 20:11:01.440265194 +0000 UTC m=+0.185088765 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 24 20:11:01 compute-0 podman[257719]: 2025-11-24 20:11:01.444971728 +0000 UTC m=+0.268439054 container start 616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:11:01 compute-0 python3.9[257693]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 20:11:01 compute-0 nova_compute_init[257775]: INFO:nova_statedir:Nova statedir ownership complete
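[editor's note] The nova_compute_init lines above record a single ownership pass over /var/lib/nova: each path is checked, anything not already owned by 42436:42436 is chowned, the container SELinux context is re-applied to directories, and /var/lib/nova/compute_id is skipped per NOVA_STATEDIR_OWNERSHIP_SKIP in the container's environment. A minimal sketch of that pass, assuming a plain os.walk traversal and a stubbed-out SELinux helper; the real /sbin/nova_statedir_ownership.py may differ:

    import logging
    import os

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("nova_statedir")  # matches the INFO:nova_statedir: prefix above

    TARGET_UID = TARGET_GID = 42436                      # "Target ownership for /var/lib/nova"
    SECONTEXT = "system_u:object_r:container_file_t:s0"  # context applied above
    SKIP = {"/var/lib/nova/compute_id"}                  # NOVA_STATEDIR_OWNERSHIP_SKIP

    def set_selinux_context(path, context):
        # Stub: the real script could use the selinux bindings (setfilecon)
        # or shell out to chcon; omitted here.
        pass

    def apply_ownership(statedir="/var/lib/nova"):
        log.info("Applying nova statedir ownership")
        log.info("Target ownership for %s: %d:%d", statedir, TARGET_UID, TARGET_GID)
        for dirpath, _dirnames, filenames in os.walk(statedir):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                log.info("Checking uid: %d gid: %d path: %s", st.st_uid, st.st_gid, path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    log.info("Changing ownership of %s from %d:%d to %d:%d",
                             path, st.st_uid, st.st_gid, TARGET_UID, TARGET_GID)
                    os.lchown(path, TARGET_UID, TARGET_GID)
                if os.path.isdir(path) and not os.path.islink(path):
                    log.info("Setting selinux context of %s to %s", path, SECONTEXT)
                    set_selinux_context(path, SECONTEXT)
        log.info("Nova statedir ownership complete")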
Nov 24 20:11:01 compute-0 systemd[1]: libpod-616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180.scope: Deactivated successfully.
Nov 24 20:11:01 compute-0 conmon[257736]: conmon 616d8bdd4f7ee8e33d10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180.scope/container/memory.events
Nov 24 20:11:01 compute-0 podman[257776]: 2025-11-24 20:11:01.519770232 +0000 UTC m=+0.030479796 container died 616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 20:11:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:01 compute-0 nice_rosalind[257636]: {
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "osd_id": 2,
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "type": "bluestore"
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:     },
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "osd_id": 1,
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "type": "bluestore"
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:     },
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "osd_id": 0,
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:         "type": "bluestore"
Nov 24 20:11:01 compute-0 nice_rosalind[257636]:     }
Nov 24 20:11:01 compute-0 nice_rosalind[257636]: }
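[editor's note] The JSON emitted by the short-lived ceph container (nice_rosalind) above is an OSD inventory keyed by osd_uuid, giving the backing LV device and osd_id for each of the three bluestore OSDs; the shape resembles ceph-volume raw list output. A throwaway sketch of pulling a per-OSD device map out of it, assuming the JSON block has been captured to a file:

    import json

    # Hypothetical capture of the JSON block logged above.
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    # Map osd_id -> backing device, e.g. 1 -> /dev/mapper/ceph_vg1-ceph_lv1
    devices = {entry["osd_id"]: entry["device"] for entry in inventory.values()}
    for osd_id in sorted(devices):
        print(f"osd.{osd_id} -> {devices[osd_id]}")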
Nov 24 20:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180-userdata-shm.mount: Deactivated successfully.
Nov 24 20:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3957894e8e6c88dccaa20b6467fc0d5966fd2fe390bed161a705aee1ac43b32-merged.mount: Deactivated successfully.
Nov 24 20:11:01 compute-0 systemd[1]: libpod-83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307.scope: Deactivated successfully.
Nov 24 20:11:01 compute-0 systemd[1]: libpod-83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307.scope: Consumed 1.065s CPU time.
Nov 24 20:11:01 compute-0 podman[257570]: 2025-11-24 20:11:01.725335575 +0000 UTC m=+1.411251966 container died 83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:11:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:01 compute-0 podman[257790]: 2025-11-24 20:11:01.789694293 +0000 UTC m=+0.267461917 container cleanup 616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6819c93e1c78210886a906ac2bf4588cc65b00b2f60208254eef1d1755b09a77-merged.mount: Deactivated successfully.
Nov 24 20:11:01 compute-0 systemd[1]: libpod-conmon-616d8bdd4f7ee8e33d103b2df1b99211dd45c569a4f80473f8993359bd7ca180.scope: Deactivated successfully.
Nov 24 20:11:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:01.834+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:01 compute-0 podman[257570]: 2025-11-24 20:11:01.840909554 +0000 UTC m=+1.526825915 container remove 83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:11:01 compute-0 systemd[1]: libpod-conmon-83b129e59ad6e691d26260ab922cdf233275d20d7ec9ef301b120193ae42f307.scope: Deactivated successfully.
Nov 24 20:11:01 compute-0 sudo[257691]: pam_unix(sudo:session): session closed for user root
Nov 24 20:11:01 compute-0 sudo[257363]: pam_unix(sudo:session): session closed for user root
Nov 24 20:11:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:11:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:11:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:11:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:11:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3e6e2446-27e6-4cfc-b108-e5ecc652bdeb does not exist
Nov 24 20:11:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9343d41f-9480-4bf4-84cf-334b46e3e500 does not exist
Nov 24 20:11:01 compute-0 nova_compute[257476]: 2025-11-24 20:11:01.960 257491 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 20:11:01 compute-0 nova_compute[257476]: 2025-11-24 20:11:01.961 257491 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 20:11:01 compute-0 nova_compute[257476]: 2025-11-24 20:11:01.961 257491 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 20:11:01 compute-0 nova_compute[257476]: 2025-11-24 20:11:01.961 257491 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
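[editor's note] os_vif reports loading its three VIF plugin classes (linux_bridge, noop, ovs) via Python entry points before the compute driver comes up. A rough equivalent of that discovery step using stevedore, the plugin-loading library used across the oslo ecosystem; the "os_vif" namespace here is an assumption:

    from stevedore import extension

    # Assumed entry-point namespace for os_vif plugins.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in mgr:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name '{ext.name}'")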
Nov 24 20:11:02 compute-0 sudo[257853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:11:02 compute-0 sudo[257853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:11:02 compute-0 sudo[257853]: pam_unix(sudo:session): session closed for user root
Nov 24 20:11:02 compute-0 sudo[257898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.109 257491 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:11:02 compute-0 sudo[257898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:11:02 compute-0 sudo[257898]: pam_unix(sudo:session): session closed for user root
Nov 24 20:11:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:02.123+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.140 257491 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.141 257491 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
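[editor's note] The grep that "failed" at 20:11:02.140 looks like a capability probe rather than an error: grepping the iscsiadm binary for the literal string node.session.scan checks whether manual session scanning is supported, and exit status 1 (no match) simply means it is not, hence "Not Retrying". A standalone sketch of the same probe:

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # grep -F exits 0 if the literal string occurs in the file, 1 otherwise.
        result = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    print(iscsiadm_supports_manual_scan())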
Nov 24 20:11:02 compute-0 sshd-session[226186]: Connection closed by 192.168.122.30 port 57602
Nov 24 20:11:02 compute-0 sshd-session[226176]: pam_unix(sshd:session): session closed for user zuul
Nov 24 20:11:02 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 24 20:11:02 compute-0 systemd[1]: session-50.scope: Consumed 2min 46.873s CPU time.
Nov 24 20:11:02 compute-0 systemd-logind[795]: Session 50 logged out. Waiting for processes to exit.
Nov 24 20:11:02 compute-0 systemd-logind[795]: Removed session 50.
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.588 257491 INFO nova.virt.driver [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.686 257491 INFO nova.compute.provider_config [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 20:11:02 compute-0 ceph-mon[75677]: pgmap v820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:11:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.721 257491 DEBUG oslo_concurrency.lockutils [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.722 257491 DEBUG oslo_concurrency.lockutils [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.722 257491 DEBUG oslo_concurrency.lockutils [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.723 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.723 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.723 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.723 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.723 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.724 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.724 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.724 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.724 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.725 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.725 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.725 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.725 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.725 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.726 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.726 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.726 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.726 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.726 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.727 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.727 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.727 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.727 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.727 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.728 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.728 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.728 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.728 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.729 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.729 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.729 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.729 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.729 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.730 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.730 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.730 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.730 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.731 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.731 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.731 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.732 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.732 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.732 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.733 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.733 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.733 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.733 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.733 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.734 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.734 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.734 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.734 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.735 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.735 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.735 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.735 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.735 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.736 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.736 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.736 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.737 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.737 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.737 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.737 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.738 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.738 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.738 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.738 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.739 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.739 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.739 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.739 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.740 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.740 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.740 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.740 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.741 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.741 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.741 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.742 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.742 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.742 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.742 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.742 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.743 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.743 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.743 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.743 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.744 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.744 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.744 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.744 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.744 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.744 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.745 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.745 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.745 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.745 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.745 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.746 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.746 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.746 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.746 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.747 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.747 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.747 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.747 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.747 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.748 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.748 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.748 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.748 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.748 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.749 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.749 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.749 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.749 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.749 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.750 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.750 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.750 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.750 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.750 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.751 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.751 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.751 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.751 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.751 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.751 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.752 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.752 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.752 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.752 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.752 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.753 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.753 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.753 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.753 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.753 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.754 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.754 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.754 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.754 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.754 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.755 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.755 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.755 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.755 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.756 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.756 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.756 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.756 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.756 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.757 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.757 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.757 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.757 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.757 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.758 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.758 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.758 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.759 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.759 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.759 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.759 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.760 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.760 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.760 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.760 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.760 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.761 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.762 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.763 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.763 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.763 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.763 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.763 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.763 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.764 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.764 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.764 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.764 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.764 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.764 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.765 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.766 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.767 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.767 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.767 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.767 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.767 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.768 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.769 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.770 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.771 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.772 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.773 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.774 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.775 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.776 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.777 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.778 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.779 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.779 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.779 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.779 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.779 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.779 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.780 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.781 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.782 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.783 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.784 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.785 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.786 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.787 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.787 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.787 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.787 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.787 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.787 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.788 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.789 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.790 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.791 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.792 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.793 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.794 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.795 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.796 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.797 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.798 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.799 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.800 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.801 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.802 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.803 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.804 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.805 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.806 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.807 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.807 257491 WARNING oslo_config.cfg [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 20:11:02 compute-0 nova_compute[257476]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 20:11:02 compute-0 nova_compute[257476]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 20:11:02 compute-0 nova_compute[257476]: and ``live_migration_inbound_addr`` respectively.
Nov 24 20:11:02 compute-0 nova_compute[257476]: ).  Its value may be silently ignored in the future.
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.807 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.807 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.807 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.808 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.808 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.808 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.808 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.808 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.809 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rbd_secret_uuid        = 05e060a3-406b-57f0-89d2-ec35f5b09305 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.810 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.811 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.811 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.811 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.811 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.811 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.811 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.812 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.813 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.814 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.814 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.814 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.814 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.814 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.815 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.815 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.815 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.815 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.815 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.815 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.816 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.817 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.818 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.819 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.820 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.820 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.820 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.820 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.820 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.821 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.822 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.823 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.824 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.825 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.826 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.827 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.827 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.827 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.827 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.827 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.827 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.828 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.828 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.828 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.828 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.828 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.828 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.829 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.830 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.830 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.830 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.830 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.830 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.830 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.831 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.831 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.831 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.831 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.831 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.831 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.832 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.832 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.832 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.832 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.832 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.833 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.833 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.833 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.833 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.833 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.834 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.834 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.834 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.834 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.834 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.835 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.835 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.835 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.835 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:02.834+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.835 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.836 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.836 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.836 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.836 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.836 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.837 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.837 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.837 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.837 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.837 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.837 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.838 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.838 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.838 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.838 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.838 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.839 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.839 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.839 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.839 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.840 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.840 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.840 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.840 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.840 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.840 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.841 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.842 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.843 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.844 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.844 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.844 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.844 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.844 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.845 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.845 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.845 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.845 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.845 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.845 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.846 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.847 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.847 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.847 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.847 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.847 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.847 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.848 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.848 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.848 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.848 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.848 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.848 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.849 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.850 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.851 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.852 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.852 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.852 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.852 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.852 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.853 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.853 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.853 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.853 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.853 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.853 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.854 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.854 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.854 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.854 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.854 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.855 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.855 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.855 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.855 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.855 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.856 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.856 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.856 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.856 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.856 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.857 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.857 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.857 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.857 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.857 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.858 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.858 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.858 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.858 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.858 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.859 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.860 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.860 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.860 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.860 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.860 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.860 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.861 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.862 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.862 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.862 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.862 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.862 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.862 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.863 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.863 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.863 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.863 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.863 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.863 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.864 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.865 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.866 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.867 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.868 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.869 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.870 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.870 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.870 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.870 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.870 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.871 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.872 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.873 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.874 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.874 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.874 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.874 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.874 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.874 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.875 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.875 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.875 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.875 257491 DEBUG oslo_service.service [None req-12011c49-6f18-4f67-8b0c-08a4e9c4cad3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.877 257491 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.890 257491 INFO nova.virt.node [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Determined node identity 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from /var/lib/nova/compute_id
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.891 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.892 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.892 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.892 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.908 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9531677580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.911 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9531677580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.911 257491 INFO nova.virt.libvirt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Connection event '1' reason 'None'
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.921 257491 INFO nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 20:11:02 compute-0 nova_compute[257476]: 
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <host>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <uuid>e19f0d46-fa86-4b57-a68a-08490f1ee667</uuid>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <cpu>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <arch>x86_64</arch>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model>EPYC-Rome-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <vendor>AMD</vendor>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <microcode version='16777317'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <signature family='23' model='49' stepping='0'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='x2apic'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='tsc-deadline'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='osxsave'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='hypervisor'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='tsc_adjust'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='spec-ctrl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='stibp'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='arch-capabilities'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='ssbd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='cmp_legacy'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='topoext'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='virt-ssbd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='lbrv'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='tsc-scale'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='vmcb-clean'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='pause-filter'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='pfthreshold'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='svme-addr-chk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='rdctl-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='mds-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature name='pschange-mc-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <pages unit='KiB' size='4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <pages unit='KiB' size='2048'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <pages unit='KiB' size='1048576'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </cpu>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <power_management>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <suspend_mem/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </power_management>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <iommu support='no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <migration_features>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <live/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <uri_transports>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <uri_transport>tcp</uri_transport>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <uri_transport>rdma</uri_transport>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </uri_transports>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </migration_features>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <topology>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <cells num='1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <cell id='0'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           <memory unit='KiB'>7864308</memory>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           <pages unit='KiB' size='4'>1966077</pages>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           <distances>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <sibling id='0' value='10'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           </distances>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           <cpus num='8'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:           </cpus>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         </cell>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </cells>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </topology>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <cache>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </cache>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <secmodel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model>selinux</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <doi>0</doi>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </secmodel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <secmodel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model>dac</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <doi>0</doi>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </secmodel>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   </host>
Nov 24 20:11:02 compute-0 nova_compute[257476]: 
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <guest>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <os_type>hvm</os_type>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <arch name='i686'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <wordsize>32</wordsize>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <domain type='qemu'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <domain type='kvm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </arch>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <features>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <pae/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <nonpae/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <acpi default='on' toggle='yes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <apic default='on' toggle='no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <cpuselection/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <deviceboot/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <disksnapshot default='on' toggle='no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <externalSnapshot/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </features>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   </guest>
Nov 24 20:11:02 compute-0 nova_compute[257476]: 
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <guest>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <os_type>hvm</os_type>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <arch name='x86_64'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <wordsize>64</wordsize>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <domain type='qemu'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <domain type='kvm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </arch>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <features>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <acpi default='on' toggle='yes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <apic default='on' toggle='no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <cpuselection/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <deviceboot/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <disksnapshot default='on' toggle='no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <externalSnapshot/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </features>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   </guest>
Nov 24 20:11:02 compute-0 nova_compute[257476]: 
Nov 24 20:11:02 compute-0 nova_compute[257476]: </capabilities>
Nov 24 20:11:02 compute-0 nova_compute[257476]: 
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.923 257491 DEBUG nova.virt.libvirt.volume.mount [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.929 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 20:11:02 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.935 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 20:11:02 compute-0 nova_compute[257476]: <domainCapabilities>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <domain>kvm</domain>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <arch>i686</arch>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <vcpu max='4096'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <iothreads supported='yes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <os supported='yes'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <enum name='firmware'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <loader supported='yes'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>rom</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>pflash</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <enum name='readonly'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>yes</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <enum name='secure'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </loader>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   </os>
Nov 24 20:11:02 compute-0 nova_compute[257476]:   <cpu>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <mode name='maximum' supported='yes'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <enum name='maximumMigratable'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <mode name='host-model' supported='yes'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <vendor>AMD</vendor>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='x2apic'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='stibp'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='ssbd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='succor'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='ibrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='lbrv'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:02 compute-0 nova_compute[257476]:     <mode name='custom' supported='yes'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cooperlake'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Denverton'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Denverton-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Denverton-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Denverton-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Dhyana-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='EPYC-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx10'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx10-128'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx10-256'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx10-512'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Haswell-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='IvyBridge'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='KnightsMill'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='KnightsMill-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='SierraForest'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='SierraForest-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Snowridge'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v1'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v2'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v3'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v4'>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:11:02 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <memoryBacking supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='sourceType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>anonymous</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>memfd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </memoryBacking>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <disk supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='diskDevice'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>disk</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cdrom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>floppy</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>lun</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>fdc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>sata</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <graphics supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vnc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egl-headless</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </graphics>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <video supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='modelType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vga</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cirrus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>none</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>bochs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ramfb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </video>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hostdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='mode'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>subsystem</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='startupPolicy'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>mandatory</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>requisite</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>optional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='subsysType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pci</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='capsType'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='pciBackend'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hostdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <rng supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>random</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <filesystem supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='driverType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>path</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>handle</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtiofs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </filesystem>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <tpm supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-tis</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-crb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emulator</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>external</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendVersion'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>2.0</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </tpm>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <redirdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </redirdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <channel supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </channel>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <crypto supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </crypto>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <interface supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>passt</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </interface>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <panic supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>isa</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>hyperv</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </panic>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <console supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>null</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dev</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pipe</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stdio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>udp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tcp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu-vdagent</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </console>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <features>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <gic supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <vmcoreinfo supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <genid supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backingStoreInput supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backup supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <async-teardown supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <ps2 supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sev supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sgx supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hyperv supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='features'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>relaxed</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vapic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>spinlocks</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vpindex</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>runtime</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>synic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stimer</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reset</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vendor_id</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>frequencies</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reenlightenment</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tlbflush</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ipi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>avic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emsr_bitmap</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>xmm_input</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <spinlocks>4095</spinlocks>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <stimer_direct>on</stimer_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hyperv>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <launchSecurity supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='sectype'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tdx</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </launchSecurity>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </features>
Nov 24 20:11:03 compute-0 nova_compute[257476]: </domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:02.942 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 20:11:03 compute-0 nova_compute[257476]: <domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <domain>kvm</domain>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <arch>i686</arch>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <vcpu max='240'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <iothreads supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <os supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='firmware'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <loader supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>rom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pflash</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='readonly'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>yes</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='secure'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </loader>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </os>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='maximum' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='maximumMigratable'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='host-model' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <vendor>AMD</vendor>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='x2apic'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='stibp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='succor'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='lbrv'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='custom' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Dhyana-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-128'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-256'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-512'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='KnightsMill'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='KnightsMill-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SierraForest'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SierraForest-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <memoryBacking supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='sourceType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>anonymous</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>memfd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </memoryBacking>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <disk supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='diskDevice'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>disk</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cdrom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>floppy</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>lun</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ide</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>fdc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>sata</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <graphics supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vnc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egl-headless</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </graphics>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <video supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='modelType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vga</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cirrus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>none</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>bochs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ramfb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </video>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hostdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='mode'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>subsystem</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='startupPolicy'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>mandatory</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>requisite</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>optional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='subsysType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pci</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='capsType'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='pciBackend'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hostdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <rng supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>random</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <filesystem supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='driverType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>path</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>handle</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtiofs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </filesystem>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <tpm supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-tis</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-crb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emulator</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>external</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendVersion'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>2.0</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </tpm>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <redirdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </redirdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <channel supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </channel>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <crypto supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </crypto>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <interface supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>passt</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </interface>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <panic supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>isa</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>hyperv</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </panic>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <console supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>null</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dev</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pipe</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stdio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>udp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tcp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu-vdagent</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </console>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <features>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <gic supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <vmcoreinfo supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <genid supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backingStoreInput supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backup supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <async-teardown supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <ps2 supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sev supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sgx supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hyperv supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='features'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>relaxed</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vapic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>spinlocks</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vpindex</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>runtime</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>synic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stimer</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reset</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vendor_id</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>frequencies</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reenlightenment</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tlbflush</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ipi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>avic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emsr_bitmap</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>xmm_input</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <spinlocks>4095</spinlocks>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <stimer_direct>on</stimer_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hyperv>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <launchSecurity supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='sectype'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tdx</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </launchSecurity>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </features>
Nov 24 20:11:03 compute-0 nova_compute[257476]: </domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.001 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.006 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 20:11:03 compute-0 nova_compute[257476]: <domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <domain>kvm</domain>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <arch>x86_64</arch>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <vcpu max='4096'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <iothreads supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <os supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='firmware'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>efi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <loader supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>rom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pflash</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='readonly'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>yes</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='secure'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>yes</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </loader>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </os>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='maximum' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='maximumMigratable'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='host-model' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <vendor>AMD</vendor>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='x2apic'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='stibp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='succor'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='lbrv'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='custom' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Dhyana-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-128'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-256'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-512'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='KnightsMill'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='KnightsMill-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:03.138+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SierraForest'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SierraForest-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <memoryBacking supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='sourceType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>anonymous</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>memfd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </memoryBacking>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <disk supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='diskDevice'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>disk</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cdrom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>floppy</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>lun</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>fdc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>sata</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <graphics supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vnc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egl-headless</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </graphics>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <video supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='modelType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vga</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cirrus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>none</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>bochs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ramfb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </video>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hostdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='mode'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>subsystem</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='startupPolicy'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>mandatory</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>requisite</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>optional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='subsysType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pci</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='capsType'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='pciBackend'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hostdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <rng supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>random</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <filesystem supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='driverType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>path</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>handle</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtiofs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </filesystem>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <tpm supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-tis</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-crb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emulator</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>external</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendVersion'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>2.0</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </tpm>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <redirdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </redirdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <channel supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </channel>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <crypto supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </crypto>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <interface supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>passt</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </interface>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <panic supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>isa</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>hyperv</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </panic>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <console supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>null</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dev</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pipe</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stdio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>udp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tcp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu-vdagent</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </console>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <features>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <gic supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <vmcoreinfo supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <genid supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backingStoreInput supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backup supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <async-teardown supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <ps2 supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sev supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sgx supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hyperv supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='features'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>relaxed</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vapic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>spinlocks</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vpindex</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>runtime</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>synic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stimer</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reset</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vendor_id</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>frequencies</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reenlightenment</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tlbflush</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ipi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>avic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emsr_bitmap</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>xmm_input</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <spinlocks>4095</spinlocks>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <stimer_direct>on</stimer_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hyperv>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <launchSecurity supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='sectype'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tdx</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </launchSecurity>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </features>
Nov 24 20:11:03 compute-0 nova_compute[257476]: </domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.081 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 20:11:03 compute-0 nova_compute[257476]: <domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <domain>kvm</domain>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <arch>x86_64</arch>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <vcpu max='240'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <iothreads supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <os supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='firmware'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <loader supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>rom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pflash</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='readonly'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>yes</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='secure'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>no</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </loader>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </os>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='host-passthrough' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='hostPassthroughMigratable'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='maximum' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='maximumMigratable'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>on</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>off</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='host-model' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <vendor>AMD</vendor>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='x2apic'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='hypervisor'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='stibp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='overflow-recov'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='succor'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='lbrv'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='tsc-scale'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='flushbyasid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='pause-filter'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='pfthreshold'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <feature policy='disable' name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <mode name='custom' supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Broadwell-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Cooperlake-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Denverton-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Dhyana-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='auto-ibrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Milan-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amd-psfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='no-nested-data-bp'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='null-sel-clr-base'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='stibp-always-on'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-Rome-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='EPYC-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='GraniteRapids-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-128'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-256'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx10-512'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='prefetchiti'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Haswell-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v6'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Icelake-Server-v7'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='IvyBridge-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='KnightsMill'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='KnightsMill-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4fmaps'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-4vnniw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512er'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512pf'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G4-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Opteron_G5-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fma4'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tbm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xop'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SapphireRapids-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='amx-tile'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-bf16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-fp16'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512-vpopcntdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bitalg'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vbmi2'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrc'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fzrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='la57'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='taa-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='tsx-ldtrk'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xfd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SierraForest'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='SierraForest-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ifma'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-ne-convert'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx-vnni-int8'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='bus-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cmpccxadd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fbsdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='fsrs'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ibrs-all'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mcdt-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pbrsb-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='psdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='sbdr-ssdp-no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='serialize'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vaes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='vpclmulqdq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Client-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='hle'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='rtm'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Skylake-Server-v5'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512bw'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512cd'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512dq'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512f'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='avx512vl'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='invpcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pcid'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='pku'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='mpx'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v2'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v3'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='core-capability'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='split-lock-detect'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='Snowridge-v4'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='cldemote'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='erms'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='gfni'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdir64b'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='movdiri'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='xsaves'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='athlon-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='core2duo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='coreduo-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='n270-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='ss'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <blockers model='phenom-v1'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnow'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <feature name='3dnowext'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </blockers>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </mode>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <memoryBacking supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <enum name='sourceType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>anonymous</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <value>memfd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </memoryBacking>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <disk supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='diskDevice'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>disk</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cdrom</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>floppy</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>lun</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ide</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>fdc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>sata</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <graphics supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vnc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egl-headless</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </graphics>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <video supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='modelType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vga</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>cirrus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>none</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>bochs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ramfb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </video>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hostdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='mode'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>subsystem</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='startupPolicy'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>mandatory</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>requisite</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>optional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='subsysType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pci</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>scsi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='capsType'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='pciBackend'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hostdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <rng supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtio-non-transitional</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>random</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>egd</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <filesystem supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='driverType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>path</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>handle</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>virtiofs</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </filesystem>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <tpm supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-tis</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tpm-crb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emulator</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>external</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendVersion'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>2.0</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </tpm>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <redirdev supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='bus'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>usb</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </redirdev>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <channel supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </channel>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <crypto supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendModel'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>builtin</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </crypto>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <interface supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='backendType'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>default</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>passt</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </interface>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <panic supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='model'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>isa</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>hyperv</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </panic>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <console supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='type'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>null</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vc</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pty</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dev</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>file</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>pipe</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stdio</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>udp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tcp</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>unix</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>qemu-vdagent</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>dbus</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </console>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   <features>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <gic supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <vmcoreinfo supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <genid supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backingStoreInput supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <backup supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <async-teardown supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <ps2 supported='yes'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sev supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <sgx supported='no'/>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <hyperv supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='features'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>relaxed</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vapic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>spinlocks</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vpindex</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>runtime</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>synic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>stimer</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reset</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>vendor_id</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>frequencies</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>reenlightenment</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tlbflush</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>ipi</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>avic</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>emsr_bitmap</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>xmm_input</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <spinlocks>4095</spinlocks>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <stimer_direct>on</stimer_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </defaults>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </hyperv>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     <launchSecurity supported='yes'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       <enum name='sectype'>
Nov 24 20:11:03 compute-0 nova_compute[257476]:         <value>tdx</value>
Nov 24 20:11:03 compute-0 nova_compute[257476]:       </enum>
Nov 24 20:11:03 compute-0 nova_compute[257476]:     </launchSecurity>
Nov 24 20:11:03 compute-0 nova_compute[257476]:   </features>
Nov 24 20:11:03 compute-0 nova_compute[257476]: </domainCapabilities>
Nov 24 20:11:03 compute-0 nova_compute[257476]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.176 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.176 257491 INFO nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Secure Boot support detected
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.180 257491 INFO nova.virt.libvirt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.197 257491 DEBUG nova.virt.libvirt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.223 257491 INFO nova.virt.node [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Determined node identity 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from /var/lib/nova/compute_id
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.238 257491 WARNING nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Compute nodes ['36172ea5-11d9-49c4-91b9-fe09a4a54b66'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.259 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.274 257491 WARNING nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.275 257491 DEBUG oslo_concurrency.lockutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.275 257491 DEBUG oslo_concurrency.lockutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.275 257491 DEBUG oslo_concurrency.lockutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.275 257491 DEBUG nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.276 257491 DEBUG oslo_concurrency.processutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:11:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:11:03 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456651169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.737 257491 DEBUG oslo_concurrency.processutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
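The "ceph df" round-trip above is the resource tracker sampling RBD pool capacity before it builds its resource view. A minimal standalone sketch of the same query, using the conf path and client id shown in the log line; the helper name pool_stats is illustrative, not nova's API:

    import json
    import subprocess

    def pool_stats(pool, conf="/etc/ceph/ceph.conf", client="openstack"):
        # Same command the log shows oslo.concurrency running as a subprocess.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf]
        )
        report = json.loads(out)
        # "pools" holds per-pool usage; top-level "stats" holds cluster totals.
        for entry in report["pools"]:
            if entry["name"] == pool:
                return entry["stats"]
        raise KeyError(pool)

For example, pool_stats("vms")["max_avail"] would give the bytes still placeable into the vms pool, assuming the "max_avail" field current ceph releases emit.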
Nov 24 20:11:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:03.868+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.966 257491 WARNING nova.virt.libvirt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.968 257491 DEBUG nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.968 257491 DEBUG oslo_concurrency.lockutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.968 257491 DEBUG oslo_concurrency.lockutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:11:03 compute-0 nova_compute[257476]: 2025-11-24 20:11:03.984 257491 WARNING nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] No compute node record for compute-0.ctlplane.example.com:36172ea5-11d9-49c4-91b9-fe09a4a54b66: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 36172ea5-11d9-49c4-91b9-fe09a4a54b66 could not be found.
Nov 24 20:11:04 compute-0 nova_compute[257476]: 2025-11-24 20:11:04.037 257491 INFO nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 36172ea5-11d9-49c4-91b9-fe09a4a54b66
Nov 24 20:11:04 compute-0 nova_compute[257476]: 2025-11-24 20:11:04.116 257491 DEBUG nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:11:04 compute-0 nova_compute[257476]: 2025-11-24 20:11:04.116 257491 DEBUG nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:11:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:04.118+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:04 compute-0 ceph-mon[75677]: pgmap v821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:04 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/456651169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:11:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:04.884+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:04 compute-0 nova_compute[257476]: 2025-11-24 20:11:04.906 257491 INFO nova.scheduler.client.report [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [req-7c950a10-154b-4672-8d49-a5568979a7c8] Created resource provider record via placement API for resource provider with UUID 36172ea5-11d9-49c4-91b9-fe09a4a54b66 and name compute-0.ctlplane.example.com.
Nov 24 20:11:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:05.145+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.310 257491 DEBUG oslo_concurrency.processutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:11:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:11:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893358609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.815 257491 DEBUG oslo_concurrency.processutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.820 257491 DEBUG nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 24 20:11:05 compute-0 nova_compute[257476]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.820 257491 INFO nova.virt.libvirt.host [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] kernel doesn't support AMD SEV
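The SEV probe above is a plain sysfs read: nova inspects /sys/module/kvm_amd/parameters/sev and treats anything but an enabled flag as "no SEV" (the debug line shows the file contains "N" here). A minimal reproduction under that assumption; the function name is illustrative:

    def kernel_supports_amd_sev(path="/sys/module/kvm_amd/parameters/sev"):
        try:
            with open(path) as f:
                # "1" or "Y" means kvm_amd was loaded with SEV enabled;
                # this host logs "N", so the probe returns False.
                return f.read().strip().lower() in ("1", "y")
        except FileNotFoundError:
            # kvm_amd not loaded at all (e.g. a non-AMD host).
            return False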
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.821 257491 DEBUG nova.compute.provider_tree [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
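Placement turns each inventory record above into a schedulable capacity of (total - reserved) * allocation_ratio. A quick check against the values in the log line (the dict is copied from it; capacity() is just illustrative arithmetic):

    def capacity(inv):
        # Placement's usable-capacity rule per resource class.
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, capacity(inv))
    # MEMORY_MB 7167.0 / VCPU 32.0 / DISK_GB 53.1

So this 8-vCPU host advertises 32 schedulable VCPUs, while the 0.9 disk ratio holds back roughly 10% of the 59 GiB as headroom.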
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.822 257491 DEBUG nova.virt.libvirt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:11:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:05.843+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.893 257491 DEBUG nova.scheduler.client.report [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Updated inventory for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.893 257491 DEBUG nova.compute.provider_tree [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Updating resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.893 257491 DEBUG nova.compute.provider_tree [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:11:05 compute-0 nova_compute[257476]: 2025-11-24 20:11:05.988 257491 DEBUG nova.compute.provider_tree [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Updating resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 20:11:06 compute-0 nova_compute[257476]: 2025-11-24 20:11:06.027 257491 DEBUG nova.compute.resource_tracker [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:11:06 compute-0 nova_compute[257476]: 2025-11-24 20:11:06.028 257491 DEBUG oslo_concurrency.lockutils [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:11:06 compute-0 nova_compute[257476]: 2025-11-24 20:11:06.028 257491 DEBUG nova.service [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 24 20:11:06 compute-0 nova_compute[257476]: 2025-11-24 20:11:06.111 257491 DEBUG nova.service [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 24 20:11:06 compute-0 nova_compute[257476]: 2025-11-24 20:11:06.111 257491 DEBUG nova.servicegroup.drivers.db [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 24 20:11:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:06.157+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:06 compute-0 ceph-mon[75677]: pgmap v822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/893358609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:11:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
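The SLOW_OPS health check that keeps re-firing above (20 blocked ops across osd.0 and osd.1, the oldest stuck for roughly 20 minutes) is also queryable in structured form. A sketch of pulling just that check, assuming credentials with health read access on this node:

    import json
    import subprocess

    def slow_ops():
        out = subprocess.check_output(
            ["ceph", "health", "detail", "--format=json"]
        )
        # "checks" is keyed by check name; SLOW_OPS is present only while
        # some daemon still reports blocked requests.
        return json.loads(out).get("checks", {}).get("SLOW_OPS")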
Nov 24 20:11:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:06.803+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:07.173+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:07 compute-0 sshd-session[257990]: Invalid user guest1 from 27.79.44.141 port 33586
Nov 24 20:11:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:07.804+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:08 compute-0 sshd-session[257990]: Connection closed by invalid user guest1 27.79.44.141 port 33586 [preauth]
Nov 24 20:11:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:08.161+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:08.757+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:08 compute-0 ceph-mon[75677]: pgmap v823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:09.207+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:11:09.363 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:11:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:11:09.364 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:11:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:11:09.364 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:11:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:09.747+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:10.240+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:10.718+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:10 compute-0 ceph-mon[75677]: pgmap v824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:11.290+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:11.710+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1187 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:11 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1187 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:12.334+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:12.747+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:12 compute-0 ceph-mon[75677]: pgmap v825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:13.302+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:13.709+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:14.271+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:14.717+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:14 compute-0 ceph-mon[75677]: pgmap v826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:14 compute-0 podman[257992]: 2025-11-24 20:11:14.849838012 +0000 UTC m=+0.078556174 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 20:11:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:15.228+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:15.692+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:16.233+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:16.705+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:16 compute-0 ceph-mon[75677]: pgmap v827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:16 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:17.236+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:17.674+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:18.229+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:18.699+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:18 compute-0 ceph-mon[75677]: pgmap v828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:19.191+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:19.662+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:19 compute-0 ceph-mon[75677]: pgmap v829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:20.235+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:20.614+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:21.215+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:21.571+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1202 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:22 compute-0 ceph-mon[75677]: pgmap v830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1202 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:22.205+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:22.536+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:23.237+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:23.519+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:24 compute-0 ceph-mon[75677]: pgmap v831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:24.213+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:11:24
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'vms', '.rgw.root']
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:11:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:24.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:25.189+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:25.458+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:25 compute-0 podman[258011]: 2025-11-24 20:11:25.910461779 +0000 UTC m=+0.136304297 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 20:11:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:11:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1212789718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:11:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:11:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1212789718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:11:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2599988468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:11:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2599988468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:26 compute-0 ceph-mon[75677]: pgmap v832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1212789718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1212789718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2599988468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2599988468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:26.232+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:11:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767719649' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:11:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767719649' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:11:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:26.421+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.754552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015086754664, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1744, "num_deletes": 255, "total_data_size": 2069886, "memory_usage": 2116768, "flush_reason": "Manual Compaction"}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015086774321, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 2026577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20567, "largest_seqno": 22310, "table_properties": {"data_size": 2018978, "index_size": 4097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20763, "raw_average_key_size": 21, "raw_value_size": 2001493, "raw_average_value_size": 2065, "num_data_blocks": 182, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764014968, "oldest_key_time": 1764014968, "file_creation_time": 1764015086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 19837 microseconds, and 10017 cpu microseconds.
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.774391) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 2026577 bytes OK
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.774420) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.776147) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.776169) EVENT_LOG_v1 {"time_micros": 1764015086776161, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.776194) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2061743, prev total WAL file size 2061743, number of live WAL files 2.
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.777533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353038' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1979KB)], [44(7182KB)]
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015086777643, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 9381644, "oldest_snapshot_seqno": -1}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 6681 keys, 9194355 bytes, temperature: kUnknown
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015086847504, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 9194355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9151712, "index_size": 24867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174479, "raw_average_key_size": 26, "raw_value_size": 9031580, "raw_average_value_size": 1351, "num_data_blocks": 1003, "num_entries": 6681, "num_filter_entries": 6681, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.848106) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 9194355 bytes
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.849734) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 131.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.0 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(9.2) write-amplify(4.5) OK, records in: 7203, records dropped: 522 output_compression: NoCompression
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.849763) EVENT_LOG_v1 {"time_micros": 1764015086849750, "job": 22, "event": "compaction_finished", "compaction_time_micros": 70041, "compaction_time_cpu_micros": 33519, "output_level": 6, "num_output_files": 1, "total_output_size": 9194355, "num_input_records": 7203, "num_output_records": 6681, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015086850562, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015086853187, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.777438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.853993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.854005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.854008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.854012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:11:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:11:26.854015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:11:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:27.247+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:27 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2767719649' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:11:27 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2767719649' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:11:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:27.397+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:28.248+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:28 compute-0 ceph-mon[75677]: pgmap v833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:28 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:28.380+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:29.260+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:29.377+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:30.249+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:30 compute-0 ceph-mon[75677]: pgmap v834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:30.422+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:31.227+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:31.438+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:31 compute-0 podman[258032]: 2025-11-24 20:11:31.913460847 +0000 UTC m=+0.139629236 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 24 20:11:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:32.274+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:32 compute-0 ceph-mon[75677]: pgmap v835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:32.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:33.305+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:33.408+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:34.324+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:34.407+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:34 compute-0 ceph-mon[75677]: pgmap v836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:11:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:35.352+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:35.429+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:36.304+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:36.401+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:36 compute-0 ceph-mon[75677]: pgmap v837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:37.319+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:37.445+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:38.357+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:38.475+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:38 compute-0 ceph-mon[75677]: pgmap v838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:39.333+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:39.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:40.287+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:11:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:40.483+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:40 compute-0 ceph-mon[75677]: pgmap v839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:41.260+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:41.469+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:42.225+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:42.426+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:42 compute-0 ceph-mon[75677]: pgmap v840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:43.226+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:43.434+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:44 compute-0 nova_compute[257476]: 2025-11-24 20:11:44.115 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:11:44 compute-0 nova_compute[257476]: 2025-11-24 20:11:44.142 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:11:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:44.204+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:44.427+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:44 compute-0 ceph-mon[75677]: pgmap v841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:45.222+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:45.400+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:45 compute-0 podman[258058]: 2025-11-24 20:11:45.857457246 +0000 UTC m=+0.086988184 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Nov 24 20:11:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:46.205+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:46.428+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:46 compute-0 ceph-mon[75677]: pgmap v842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:47.182+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:47.449+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:48.176+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:48.413+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:48 compute-0 ceph-mon[75677]: pgmap v843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:49.179+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:49.393+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:50.212+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:50.444+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:50 compute-0 ceph-mon[75677]: pgmap v844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:51.214+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:51.489+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:52.258+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:52.496+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:52 compute-0 ceph-mon[75677]: pgmap v845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:53.255+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:53.481+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:54.242+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:11:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:54.490+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:54 compute-0 ceph-mon[75677]: pgmap v846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 24 20:11:54 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3836053216' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14339 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 20:11:54 compute-0 ceph-mgr[75975]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 20:11:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:55.228+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:55.466+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:55 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3836053216' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 20:11:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:56.251+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:56.463+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:11:56 compute-0 ceph-mon[75677]: from='client.14339 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 20:11:56 compute-0 ceph-mon[75677]: pgmap v847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:11:56 compute-0 podman[258078]: 2025-11-24 20:11:56.827330415 +0000 UTC m=+0.067028709 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:11:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:57.231+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:57.491+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:58.255+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:58.496+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:58 compute-0 ceph-mon[75677]: pgmap v848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:11:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:11:59.297+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:11:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:11:59.519+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:11:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:11:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:11:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:00.257+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:00.519+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:00 compute-0 ceph-mon[75677]: pgmap v849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:01.243+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:01.553+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1237 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1237 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.153 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.155 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.156 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.156 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.177 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.178 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.179 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.179 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.179 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.180 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.180 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.181 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.181 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.225 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.226 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.226 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.227 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.228 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:12:02 compute-0 sudo[258100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:02 compute-0 sudo[258100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:02.248+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:02 compute-0 sudo[258100]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:02 compute-0 sudo[258132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:12:02 compute-0 sudo[258132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:02 compute-0 sudo[258132]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:02 compute-0 podman[258124]: 2025-11-24 20:12:02.403374166 +0000 UTC m=+0.142271087 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 20:12:02 compute-0 sudo[258175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:02 compute-0 sudo[258175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:02 compute-0 sudo[258175]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:02 compute-0 sudo[258220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 20:12:02 compute-0 sudo[258220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:02.584+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:12:02 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859185293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.708 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
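[annotation] The 0.480 s round trip logged above is nova's resource tracker shelling out to `ceph df` to size its RBD-backed storage, authenticating as client.openstack via /etc/ceph/ceph.conf. A minimal sketch of the same probe, assuming the stock `ceph df --format=json` layout (a top-level "stats" object carrying "total_bytes" and "total_avail_bytes"):

    # Sketch: replicate the capacity probe nova logs above.
    # Assumes /etc/ceph/ceph.conf and the client.openstack keyring exist,
    # exactly as in the logged command; JSON keys are the stock `ceph df` ones.
    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    print("total GiB:", round(stats["total_bytes"] / 2**30, 2))
    print("avail GiB:", round(stats["total_avail_bytes"] / 2**30, 2))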
Nov 24 20:12:02 compute-0 ceph-mon[75677]: pgmap v850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:02 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/859185293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.948 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.949 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.950 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:12:02 compute-0 nova_compute[257476]: 2025-11-24 20:12:02.950 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:12:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.039 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.039 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.088 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:12:03 compute-0 podman[258320]: 2025-11-24 20:12:03.208688826 +0000 UTC m=+0.107660228 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:12:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:03.210+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:03 compute-0 podman[258320]: 2025-11-24 20:12:03.360470898 +0000 UTC m=+0.259442220 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:12:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:12:03 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345162221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.532 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.538 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.550 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.551 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:12:03 compute-0 nova_compute[257476]: 2025-11-24 20:12:03.551 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
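[annotation] The inventory dictionary reported a few lines up fixes each resource class's schedulable capacity in Placement as int((total - reserved) * allocation_ratio). A worked sketch with the exact values from the logged inventory:

    # Sketch: Placement-style capacity from the inventory logged above.
    # int((total - reserved) * allocation_ratio) is Placement's standard
    # capacity computation; the numbers are copied from the log line.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, "capacity:", cap)  # MEMORY_MB 7167, VCPU 32, DISK_GB 53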
Nov 24 20:12:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:03.576+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:03 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1345162221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:12:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:04.216+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:04 compute-0 sudo[258220]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:12:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:12:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:04 compute-0 sudo[258503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:04 compute-0 sudo[258503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:04 compute-0 sudo[258503]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:04 compute-0 sudo[258528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:12:04 compute-0 sudo[258528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:04 compute-0 sudo[258528]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:04 compute-0 sudo[258553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:04 compute-0 sudo[258553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:04 compute-0 sudo[258553]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:04.609+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:04 compute-0 sudo[258578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:12:04 compute-0 sudo[258578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:05 compute-0 ceph-mon[75677]: pgmap v851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:05.166+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:05 compute-0 sudo[258578]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:12:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:12:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:12:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:12:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:12:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 10a9d2b8-27e1-4b45-b8ec-b1196d590933 does not exist
Nov 24 20:12:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7a97f843-b23e-4eb2-a717-911b8a4a9107 does not exist
Nov 24 20:12:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b94c0aa2-e588-47d8-a259-8757eb680dae does not exist
Nov 24 20:12:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:12:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:12:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:12:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:12:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:12:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:12:05 compute-0 sudo[258634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:05 compute-0 sudo[258634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:05 compute-0 sudo[258634]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:05.585+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:05 compute-0 sudo[258659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:12:05 compute-0 sudo[258659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:05 compute-0 sudo[258659]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:05 compute-0 sudo[258684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:05 compute-0 sudo[258684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:05 compute-0 sudo[258684]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:05 compute-0 sudo[258709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:12:05 compute-0 sudo[258709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:06 compute-0 ceph-mon[75677]: pgmap v852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:12:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:12:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:12:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:12:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:12:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:06.171+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.300716692 +0000 UTC m=+0.041170585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.425734995 +0000 UTC m=+0.166188828 container create 9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:12:06 compute-0 systemd[1]: Started libpod-conmon-9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4.scope.
Nov 24 20:12:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:12:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:06.541+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.555551957 +0000 UTC m=+0.296005860 container init 9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.570407736 +0000 UTC m=+0.310861549 container start 9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:12:06 compute-0 vibrant_sanderson[258790]: 167 167
Nov 24 20:12:06 compute-0 systemd[1]: libpod-9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4.scope: Deactivated successfully.
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.577327031 +0000 UTC m=+0.317780864 container attach 9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.577802324 +0000 UTC m=+0.318256187 container died 9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-009915ea58b61b6f85fe049c7f56519bc424ac1fa0b980681fff18b2cc72e6b1-merged.mount: Deactivated successfully.
Nov 24 20:12:06 compute-0 podman[258774]: 2025-11-24 20:12:06.70627001 +0000 UTC m=+0.446723833 container remove 9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:12:06 compute-0 systemd[1]: libpod-conmon-9e76c53897e3c9687225ab615862d8cb8b6f5170cd7207ef051a167f91ee82b4.scope: Deactivated successfully.
Nov 24 20:12:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:06 compute-0 podman[258814]: 2025-11-24 20:12:06.916081548 +0000 UTC m=+0.051615796 container create ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 20:12:06 compute-0 podman[258814]: 2025-11-24 20:12:06.8893584 +0000 UTC m=+0.024892658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:12:06 compute-0 systemd[1]: Started libpod-conmon-ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0.scope.
Nov 24 20:12:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54024f6c6a3e3cf642180779d397732186433ffe77a3555c6c23f7b7d2d2ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54024f6c6a3e3cf642180779d397732186433ffe77a3555c6c23f7b7d2d2ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54024f6c6a3e3cf642180779d397732186433ffe77a3555c6c23f7b7d2d2ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54024f6c6a3e3cf642180779d397732186433ffe77a3555c6c23f7b7d2d2ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa54024f6c6a3e3cf642180779d397732186433ffe77a3555c6c23f7b7d2d2ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:07 compute-0 podman[258814]: 2025-11-24 20:12:07.03621345 +0000 UTC m=+0.171747678 container init ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:12:07 compute-0 podman[258814]: 2025-11-24 20:12:07.051484459 +0000 UTC m=+0.187018717 container start ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:12:07 compute-0 podman[258814]: 2025-11-24 20:12:07.055707813 +0000 UTC m=+0.191242031 container attach ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:12:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:07.164+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:07 compute-0 sshd-session[258098]: Connection closed by authenticating user root 27.79.44.141 port 55298 [preauth]
Nov 24 20:12:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:07.505+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:08.187+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:08 compute-0 ceph-mon[75677]: pgmap v853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:08 compute-0 condescending_lumiere[258831]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:12:08 compute-0 condescending_lumiere[258831]: --> relative data size: 1.0
Nov 24 20:12:08 compute-0 condescending_lumiere[258831]: --> All data devices are unavailable
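[annotation] The "All data devices are unavailable" result above is not necessarily a failure: cephadm is replaying its default_drive_group spec through `ceph-volume lvm batch` against the three ceph_vg*/ceph_lv* LVs, and ceph-volume filters out devices it considers in use, here most likely because running OSDs already occupy them, so the batch ends as an idempotent no-op. The in-use mapping comes from the `lvm list` call that follows at 20:12:09; a sketch of reading its JSON, assuming the stock layout keyed by OSD id with per-LV "lv_path" entries (cephadm normally runs this inside a container, so a host-level ceph-volume install is an assumption here):

    # Sketch: map OSD ids to backing LVs via `ceph-volume lvm list`.
    # Assumes ceph-volume is available on the host and emits the stock
    # JSON keyed by OSD id, each value a list of LV dicts with "lv_path".
    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            print(f"osd.{osd_id}:", lv.get("lv_path"))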
Nov 24 20:12:08 compute-0 systemd[1]: libpod-ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0.scope: Deactivated successfully.
Nov 24 20:12:08 compute-0 podman[258814]: 2025-11-24 20:12:08.30978318 +0000 UTC m=+1.445317408 container died ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:12:08 compute-0 systemd[1]: libpod-ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0.scope: Consumed 1.205s CPU time.
Nov 24 20:12:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:08.535+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa54024f6c6a3e3cf642180779d397732186433ffe77a3555c6c23f7b7d2d2ba-merged.mount: Deactivated successfully.
Nov 24 20:12:08 compute-0 podman[258814]: 2025-11-24 20:12:08.805409014 +0000 UTC m=+1.940943262 container remove ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:12:08 compute-0 sudo[258709]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:08 compute-0 systemd[1]: libpod-conmon-ad14ba69f5b9f1217829c58678bb93c30adbe41d6df6ea692c6573f24fe5abb0.scope: Deactivated successfully.
Nov 24 20:12:08 compute-0 sudo[258872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:08 compute-0 sudo[258872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:08 compute-0 sudo[258872]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:09 compute-0 sudo[258897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:12:09 compute-0 sudo[258897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:09 compute-0 sudo[258897]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:09 compute-0 sudo[258922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:09 compute-0 sudo[258922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:09 compute-0 sudo[258922]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:09 compute-0 sudo[258947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:12:09 compute-0 sudo[258947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:09.182+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:12:09.364 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:12:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:12:09.364 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:12:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:12:09.365 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:12:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:09.561+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:09 compute-0 podman[259012]: 2025-11-24 20:12:09.635728935 +0000 UTC m=+0.117958055 container create 5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:12:09 compute-0 podman[259012]: 2025-11-24 20:12:09.554869276 +0000 UTC m=+0.037098436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:12:09 compute-0 systemd[1]: Started libpod-conmon-5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b.scope.
Nov 24 20:12:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:12:09 compute-0 podman[259012]: 2025-11-24 20:12:09.871144629 +0000 UTC m=+0.353373799 container init 5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:12:09 compute-0 podman[259012]: 2025-11-24 20:12:09.88271059 +0000 UTC m=+0.364939720 container start 5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:12:09 compute-0 vigorous_lalande[259028]: 167 167
Nov 24 20:12:09 compute-0 systemd[1]: libpod-5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b.scope: Deactivated successfully.
Nov 24 20:12:09 compute-0 podman[259012]: 2025-11-24 20:12:09.910044103 +0000 UTC m=+0.392273283 container attach 5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:12:09 compute-0 podman[259012]: 2025-11-24 20:12:09.910534056 +0000 UTC m=+0.392763176 container died 5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d07489d4caa0784d2fe80471928f92c9054d38c32e1c425e842c831d82180b-merged.mount: Deactivated successfully.
Nov 24 20:12:10 compute-0 podman[259012]: 2025-11-24 20:12:10.196495986 +0000 UTC m=+0.678725096 container remove 5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:12:10 compute-0 systemd[1]: libpod-conmon-5161116f8a5cb4abcf9af93ce4d8647abe923aa076c8ff29432a4f359921e29b.scope: Deactivated successfully.
Nov 24 20:12:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:10.221+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:10 compute-0 ceph-mon[75677]: pgmap v854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:10 compute-0 podman[259052]: 2025-11-24 20:12:10.493139043 +0000 UTC m=+0.131034266 container create 1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 24 20:12:10 compute-0 podman[259052]: 2025-11-24 20:12:10.405393229 +0000 UTC m=+0.043288512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:12:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:10.573+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:10 compute-0 systemd[1]: Started libpod-conmon-1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664.scope.
Nov 24 20:12:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7d36eb19d973d78b3a7796061fc1c9ff870a08b08eae3d144157a310a38180/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7d36eb19d973d78b3a7796061fc1c9ff870a08b08eae3d144157a310a38180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7d36eb19d973d78b3a7796061fc1c9ff870a08b08eae3d144157a310a38180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7d36eb19d973d78b3a7796061fc1c9ff870a08b08eae3d144157a310a38180/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:10 compute-0 podman[259052]: 2025-11-24 20:12:10.72044768 +0000 UTC m=+0.358342933 container init 1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:12:10 compute-0 podman[259052]: 2025-11-24 20:12:10.735624246 +0000 UTC m=+0.373519469 container start 1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:12:10 compute-0 podman[259052]: 2025-11-24 20:12:10.796277663 +0000 UTC m=+0.434172936 container attach 1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:12:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:11.218+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.435990) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015131436067, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 837, "num_deletes": 251, "total_data_size": 825678, "memory_usage": 842280, "flush_reason": "Manual Compaction"}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015131451049, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 813134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22311, "largest_seqno": 23147, "table_properties": {"data_size": 808954, "index_size": 1704, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11265, "raw_average_key_size": 20, "raw_value_size": 799847, "raw_average_value_size": 1486, "num_data_blocks": 75, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015087, "oldest_key_time": 1764015087, "file_creation_time": 1764015131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15326 microseconds, and 6113 cpu microseconds.
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.451320) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 813134 bytes OK
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.451419) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.457268) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.457326) EVENT_LOG_v1 {"time_micros": 1764015131457310, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.457356) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 821272, prev total WAL file size 821272, number of live WAL files 2.
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.458712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(794KB)], [47(8978KB)]
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015131458776, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 10007489, "oldest_snapshot_seqno": -1}
Nov 24 20:12:11 compute-0 elegant_swartz[259068]: {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:     "0": [
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:         {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "devices": [
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "/dev/loop3"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             ],
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_name": "ceph_lv0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_size": "21470642176",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "name": "ceph_lv0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "tags": {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cluster_name": "ceph",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.crush_device_class": "",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.encrypted": "0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osd_id": "0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.type": "block",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.vdo": "0"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             },
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "type": "block",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "vg_name": "ceph_vg0"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:         }
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:     ],
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:     "1": [
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:         {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "devices": [
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "/dev/loop4"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             ],
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_name": "ceph_lv1",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_size": "21470642176",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "name": "ceph_lv1",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "tags": {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cluster_name": "ceph",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.crush_device_class": "",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.encrypted": "0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osd_id": "1",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.type": "block",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.vdo": "0"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             },
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "type": "block",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "vg_name": "ceph_vg1"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:         }
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:     ],
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:     "2": [
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:         {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "devices": [
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "/dev/loop5"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             ],
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_name": "ceph_lv2",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_size": "21470642176",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "name": "ceph_lv2",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "tags": {
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.cluster_name": "ceph",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.crush_device_class": "",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.encrypted": "0",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osd_id": "2",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.type": "block",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:                 "ceph.vdo": "0"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             },
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "type": "block",
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:             "vg_name": "ceph_vg2"
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:         }
Nov 24 20:12:11 compute-0 elegant_swartz[259068]:     ]
Nov 24 20:12:11 compute-0 elegant_swartz[259068]: }
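[editor's note] The JSON block printed by the elegant_swartz container above is the output of the ceph-volume lvm list --format json invocation recorded in the sudo COMMAND line earlier (cephadm inventorying the OSD logical volumes). A minimal sketch, assuming that JSON has been captured to a file named lvm_list.json (hypothetical filename, not part of this log), mapping each OSD ID to its LV path, backing device, and OSD fsid:

    # Parse the ceph-volume lvm list JSON shown above.
    # Assumption: the JSON block was saved verbatim to lvm_list.json.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)   # top-level keys are OSD IDs ("0", "1", "2")

    # Each OSD ID maps to a list of LV records with devices/path/tags.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print("osd.%s: %s (devices=%s, osd_fsid=%s)" % (
                osd_id,
                lv["lv_path"],
                ",".join(lv["devices"]),
                lv["tags"]["ceph.osd_fsid"]))

Against the block above this prints three lines, e.g. osd.0: /dev/ceph_vg0/ceph_lv0 (devices=/dev/loop3, osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e).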
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 6705 keys, 8543299 bytes, temperature: kUnknown
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015131511189, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8543299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8501368, "index_size": 24091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 176317, "raw_average_key_size": 26, "raw_value_size": 8381509, "raw_average_value_size": 1250, "num_data_blocks": 964, "num_entries": 6705, "num_filter_entries": 6705, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:12:11 compute-0 systemd[1]: libpod-1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664.scope: Deactivated successfully.
Nov 24 20:12:11 compute-0 podman[259052]: 2025-11-24 20:12:11.512662519 +0000 UTC m=+1.150557742 container died 1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.511423) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8543299 bytes
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.513656) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.7 rd, 162.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(22.8) write-amplify(10.5) OK, records in: 7219, records dropped: 514 output_compression: NoCompression
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.513674) EVENT_LOG_v1 {"time_micros": 1764015131513666, "job": 24, "event": "compaction_finished", "compaction_time_micros": 52486, "compaction_time_cpu_micros": 21686, "output_level": 6, "num_output_files": 1, "total_output_size": 8543299, "num_input_records": 7219, "num_output_records": 6705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015131514034, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015131515519, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.458619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.515599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.515603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.515604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.515606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:12:11 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:12:11.515607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
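[editor's note] The ceph-mon rocksdb entries above (JOB 23 memtable flush, JOB 24 manual compaction) embed machine-readable EVENT_LOG_v1 JSON payloads alongside the human-readable lines. A minimal sketch, assuming these journal lines were exported to a file named rocksdb.log (hypothetical filename), for pulling compaction timings out of those payloads:

    # Extract rocksdb EVENT_LOG_v1 JSON from journal lines like the ones above.
    # Assumption: the ceph-mon lines were saved verbatim to rocksdb.log.
    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    with open("rocksdb.log") as f:
        for line in f:
            m = EVENT.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                print("job %s: %s output bytes in %s us" % (
                    ev["job"],
                    ev["total_output_size"],
                    ev["compaction_time_micros"]))

For the compaction_finished event above this yields job 24: 8543299 output bytes in 52486 us.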
Nov 24 20:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac7d36eb19d973d78b3a7796061fc1c9ff870a08b08eae3d144157a310a38180-merged.mount: Deactivated successfully.
Nov 24 20:12:11 compute-0 podman[259052]: 2025-11-24 20:12:11.589170181 +0000 UTC m=+1.227065414 container remove 1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swartz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:12:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:11.593+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:11 compute-0 systemd[1]: libpod-conmon-1781263ba94f394a594d1a441cf178e9ec9e910b9fd876ee0e97c4cc5161d664.scope: Deactivated successfully.
Nov 24 20:12:11 compute-0 sudo[258947]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:11 compute-0 sudo[259091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:11 compute-0 sudo[259091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:11 compute-0 sudo[259091]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1252 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:11 compute-0 sudo[259116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:12:11 compute-0 sudo[259116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:11 compute-0 sudo[259116]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:11 compute-0 sudo[259141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:11 compute-0 sudo[259141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:11 compute-0 sudo[259141]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:11 compute-0 sudo[259166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:12:11 compute-0 sudo[259166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:12.261+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.39592619 +0000 UTC m=+0.055221333 container create 4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:12:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:12 compute-0 ceph-mon[75677]: pgmap v855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1252 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:12 compute-0 systemd[1]: Started libpod-conmon-4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff.scope.
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.36873795 +0000 UTC m=+0.028033133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:12:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.48954378 +0000 UTC m=+0.148838933 container init 4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.501083411 +0000 UTC m=+0.160378554 container start 4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.505531669 +0000 UTC m=+0.164826862 container attach 4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:12:12 compute-0 upbeat_darwin[259248]: 167 167
Nov 24 20:12:12 compute-0 systemd[1]: libpod-4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff.scope: Deactivated successfully.
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.507457511 +0000 UTC m=+0.166752624 container died 4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b928230df3f8108203785103ba01648498b797eaae19e7db3f27d68d8c4da07-merged.mount: Deactivated successfully.
Nov 24 20:12:12 compute-0 podman[259232]: 2025-11-24 20:12:12.558948082 +0000 UTC m=+0.218243225 container remove 4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:12:12 compute-0 systemd[1]: libpod-conmon-4bcd538aadc1dffaffc17111a24b01e4108d34a52a1253184c574ab119a40fff.scope: Deactivated successfully.
Nov 24 20:12:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:12.578+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:12 compute-0 podman[259275]: 2025-11-24 20:12:12.812426181 +0000 UTC m=+0.086481840 container create ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:12:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 24 20:12:12 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295202901' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 20:12:12 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 20:12:12 compute-0 ceph-mgr[75975]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 20:12:12 compute-0 ceph-mgr[75975]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 20:12:12 compute-0 podman[259275]: 2025-11-24 20:12:12.776230141 +0000 UTC m=+0.050285860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:12:12 compute-0 systemd[1]: Started libpod-conmon-ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17.scope.
Nov 24 20:12:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d123762801f83c6406e2ea9ae6d44e169ab900fc5a3ecaa0218c52e0e7e0db7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d123762801f83c6406e2ea9ae6d44e169ab900fc5a3ecaa0218c52e0e7e0db7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d123762801f83c6406e2ea9ae6d44e169ab900fc5a3ecaa0218c52e0e7e0db7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d123762801f83c6406e2ea9ae6d44e169ab900fc5a3ecaa0218c52e0e7e0db7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:12:12 compute-0 podman[259275]: 2025-11-24 20:12:12.935491012 +0000 UTC m=+0.209546681 container init ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:12:12 compute-0 podman[259275]: 2025-11-24 20:12:12.948722637 +0000 UTC m=+0.222778296 container start ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:12:12 compute-0 podman[259275]: 2025-11-24 20:12:12.954958725 +0000 UTC m=+0.229014374 container attach ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:12:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:13.265+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:13 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/295202901' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 20:12:13 compute-0 ceph-mon[75677]: from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 20:12:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:13.600+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:13 compute-0 sshd-session[259268]: Invalid user userroot from 182.93.7.194 port 64478
Nov 24 20:12:14 compute-0 sshd-session[259268]: Received disconnect from 182.93.7.194 port 64478:11: Bye Bye [preauth]
Nov 24 20:12:14 compute-0 sshd-session[259268]: Disconnected from invalid user userroot 182.93.7.194 port 64478 [preauth]
Nov 24 20:12:14 compute-0 focused_dhawan[259292]: {
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "osd_id": 2,
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "type": "bluestore"
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:     },
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "osd_id": 1,
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "type": "bluestore"
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:     },
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "osd_id": 0,
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:         "type": "bluestore"
Nov 24 20:12:14 compute-0 focused_dhawan[259292]:     }
Nov 24 20:12:14 compute-0 focused_dhawan[259292]: }
Nov 24 20:12:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:14.232+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:14 compute-0 systemd[1]: libpod-ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17.scope: Deactivated successfully.
Nov 24 20:12:14 compute-0 systemd[1]: libpod-ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17.scope: Consumed 1.294s CPU time.
Nov 24 20:12:14 compute-0 podman[259325]: 2025-11-24 20:12:14.304742508 +0000 UTC m=+0.045949983 container died ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d123762801f83c6406e2ea9ae6d44e169ab900fc5a3ecaa0218c52e0e7e0db7-merged.mount: Deactivated successfully.
Nov 24 20:12:14 compute-0 podman[259325]: 2025-11-24 20:12:14.372617589 +0000 UTC m=+0.113825034 container remove ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:12:14 compute-0 systemd[1]: libpod-conmon-ae0f451803768c5940ceb08c0e929cf0139224e01797b4c5d27d75512d965c17.scope: Deactivated successfully.
Nov 24 20:12:14 compute-0 sudo[259166]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:12:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:12:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:14 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c8ce4b0e-4237-4aae-af89-04ff8a4931c8 does not exist
Nov 24 20:12:14 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 833efd5c-e9d6-426d-8a66-4b03418f8ee3 does not exist
Nov 24 20:12:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:14 compute-0 ceph-mon[75677]: pgmap v856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:14 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:14 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:12:14 compute-0 sudo[259340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:12:14 compute-0 sudo[259340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:14 compute-0 sudo[259340]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:14 compute-0 sudo[259365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:12:14 compute-0 sudo[259365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:12:14 compute-0 sudo[259365]: pam_unix(sudo:session): session closed for user root
Nov 24 20:12:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:14.644+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:15.248+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:15.677+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:16.269+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:16 compute-0 ceph-mon[75677]: pgmap v857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:16.659+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:16 compute-0 podman[259390]: 2025-11-24 20:12:16.883892207 +0000 UTC m=+0.102044647 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:12:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:17.243+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1257 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:17.677+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:18.287+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:18 compute-0 ceph-mon[75677]: pgmap v858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1257 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:18.709+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:19.266+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:19.661+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:20.249+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:20 compute-0 ceph-mon[75677]: pgmap v859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:20.663+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:21.268+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:21.667+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:22.289+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:22 compute-0 ceph-mon[75677]: pgmap v860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:22.681+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:23.255+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:23.650+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:24.245+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:12:24
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:12:24 compute-0 ceph-mon[75677]: pgmap v861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:24.669+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:25.263+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:25.663+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:26.230+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:26 compute-0 ceph-mon[75677]: pgmap v862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:26.685+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:27.195+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:27.707+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:27 compute-0 podman[259411]: 2025-11-24 20:12:27.870053522 +0000 UTC m=+0.085655559 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 20:12:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:28.162+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:28 compute-0 ceph-mon[75677]: pgmap v863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:28.739+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:29.153+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:29.706+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:30.166+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:30 compute-0 ceph-mon[75677]: pgmap v864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:30.730+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:31.133+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1272 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:31.767+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:32.116+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:32 compute-0 ceph-mon[75677]: pgmap v865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1272 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:32.740+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:32 compute-0 podman[259433]: 2025-11-24 20:12:32.895332801 +0000 UTC m=+0.121243694 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:12:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:33.129+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:33.762+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:34.167+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:12:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
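[annotation] The pg_autoscaler lines above all follow one formula: every printed "pg target" equals the pool's capacity ratio times its bias times a root target of 300 PGs (e.g. 0.0021557249951162337 / 7.185749983720779e-06 = 300 for '.mgr', and the bias-4 pools give ratio x 4 x 300). A sketch reproducing those numbers; the 300 is inferred from the log itself and would be consistent with mon_target_pg_per_osd=100 across 3 OSDs, which is an assumption, not something stated here. The "quantized to N (current N)" suffixes show the ideal target then being rounded to a power of two and left at the current pg_num when it is far below it:

    import math

    # Sketch, not the autoscaler's actual code: ROOT_PG_TARGET is inferred
    # from the log (every "pg target" above equals ratio * bias * 300).
    ROOT_PG_TARGET = 300  # assumption: e.g. mon_target_pg_per_osd=100 * 3 OSDs

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * ROOT_PG_TARGET

    assert math.isclose(pg_target(7.185749983720779e-06, 1.0),
                        0.0021557249951162337, rel_tol=1e-12)  # pool '.mgr'
    assert math.isclose(pg_target(5.087256625643029e-07, 4.0),
                        0.0006104707950771635, rel_tol=1e-12)  # cephfs meta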
Nov 24 20:12:34 compute-0 ceph-mon[75677]: pgmap v866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:34.794+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:35.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:35.802+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:36.185+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:36 compute-0 ceph-mon[75677]: pgmap v867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
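[annotation] The mon's _set_new_cache_sizes line partitions a roughly 973 MiB cache budget into whole-MiB shares: inc_alloc and full_alloc are each 332 MiB (348127232 = 332 * 2^20) and kv_alloc is 308 MiB (322961408 = 308 * 2^20), so 332 + 332 + 308 = 972 MiB sits just under cache_size 1020054731 B (~972.8 MiB). That each share lands exactly on a MiB boundary suggests rounding-down during allocation; this is an inference from the numbers, not documented in the log. A quick check of the arithmetic:

    # Verify the allocation arithmetic from the mon line above.
    MiB = 1 << 20
    assert 348127232 == 332 * MiB and 322961408 == 308 * MiB
    assert 348127232 * 2 + 322961408 <= 1020054731  # 972 MiB <= ~972.8 MiB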
Nov 24 20:12:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:36.812+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:37.197+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1277 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:37.833+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:38.165+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:38 compute-0 ceph-mon[75677]: pgmap v868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1277 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
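[annotation] The SLOW_OPS health checks show no forward progress: the update at 20:12:32 reports the oldest op blocked for 1272 s and the updates at 20:12:37-20:12:38 report 1277 s, so the blocked age grows one-for-one with wall-clock time. Working backwards, the oldest op started near 20:12:32 - 1272 s = 19:51:20. A one-line check of that arithmetic:

    from datetime import datetime, timedelta

    # 20:12:32 minus 1272 s of blockage -> the oldest op started ~19:51:20.
    assert (datetime(2025, 11, 24, 20, 12, 32) - timedelta(seconds=1272)
            == datetime(2025, 11, 24, 19, 51, 20))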
Nov 24 20:12:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:38.870+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:39.129+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:39.869+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:40.131+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:12:40 compute-0 ceph-mon[75677]: pgmap v869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:40.848+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:41.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:41.833+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:42.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:42 compute-0 ceph-mon[75677]: pgmap v870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:42.804+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:43.150+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:43.796+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:44.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:44.801+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:44 compute-0 ceph-mon[75677]: pgmap v871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:45.058+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:45.810+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:46.072+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1282 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:46.773+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:46 compute-0 ceph-mon[75677]: pgmap v872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:46 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1282 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:47.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:47.772+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:47 compute-0 podman[259460]: 2025-11-24 20:12:47.848996491 +0000 UTC m=+0.075781670 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 20:12:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:48.028+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:48.817+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:48 compute-0 ceph-mon[75677]: pgmap v873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:49.017+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:49.808+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:50.019+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:50 compute-0 sshd[192247]: Timeout before authentication for connection from 14.103.116.192 to 38.102.83.22, pid = 255246
Nov 24 20:12:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:50.788+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:50 compute-0 ceph-mon[75677]: pgmap v874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:51.028+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1287 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:51.787+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:51 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1287 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:52.048+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:52.801+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:52 compute-0 ceph-mon[75677]: pgmap v875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:53.058+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:53.825+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:54.012+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:12:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:54.829+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:54 compute-0 ceph-mon[75677]: pgmap v876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:55.040+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:55.849+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:56.040+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1292 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:12:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:56.885+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:56 compute-0 ceph-mon[75677]: pgmap v877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:56 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1292 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:12:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:56.993+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:57.887+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:58.031+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:58 compute-0 podman[259481]: 2025-11-24 20:12:58.881437692 +0000 UTC m=+0.097105975 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 24 20:12:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:58.903+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:58 compute-0 ceph-mon[75677]: pgmap v878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:12:59.037+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:12:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:12:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:12:59.922+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:12:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:12:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:12:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:00.022+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:00.875+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:00 compute-0 ceph-mon[75677]: pgmap v879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:01.006+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1297 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:01.831+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1297 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:02.042+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:02.818+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:02 compute-0 ceph-mon[75677]: pgmap v880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:03.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.541 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.564 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.565 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.565 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.565 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.565 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.566 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:03 compute-0 nova_compute[257476]: 2025-11-24 20:13:03.566 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:13:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:03.804+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:03 compute-0 podman[259502]: 2025-11-24 20:13:03.927138818 +0000 UTC m=+0.145884922 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 20:13:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:03 compute-0 ceph-mon[75677]: pgmap v881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:04.066+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.167 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.167 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.194 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.195 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.195 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.195 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.196 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:13:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:13:04 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789429026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.648 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:13:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:04.805+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.847 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.849 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.850 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.850 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.908 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.909 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:13:04 compute-0 nova_compute[257476]: 2025-11-24 20:13:04.923 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:13:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:04 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3789429026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:13:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:05.029+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:13:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492427632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:13:05 compute-0 nova_compute[257476]: 2025-11-24 20:13:05.376 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:13:05 compute-0 nova_compute[257476]: 2025-11-24 20:13:05.383 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:13:05 compute-0 nova_compute[257476]: 2025-11-24 20:13:05.400 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:13:05 compute-0 nova_compute[257476]: 2025-11-24 20:13:05.402 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:13:05 compute-0 nova_compute[257476]: 2025-11-24 20:13:05.402 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:13:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:05.836+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:05 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:13:05.960 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:13:05 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:13:05.961 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:13:05 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:13:05.962 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:13:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:06 compute-0 ceph-mon[75677]: pgmap v882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/492427632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:13:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:06.060+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1302 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:06.788+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1302 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:07.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:07.833+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:08 compute-0 ceph-mon[75677]: pgmap v883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:08.080+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:08.822+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:09.049+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:13:09.365 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:13:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:13:09.366 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:13:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:13:09.367 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:13:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:09.801+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:10 compute-0 ceph-mon[75677]: pgmap v884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:10.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:10.772+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:11.078+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:11.764+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1312 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:12 compute-0 ceph-mon[75677]: pgmap v885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1312 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:12.078+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:12.804+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:13.058+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:13.844+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:14.078+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:14 compute-0 ceph-mon[75677]: pgmap v886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:14 compute-0 sudo[259572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:14 compute-0 sudo[259572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:14 compute-0 sudo[259572]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:14.831+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:14 compute-0 sudo[259597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:13:14 compute-0 sudo[259597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:14 compute-0 sudo[259597]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:14 compute-0 sudo[259622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:14 compute-0 sudo[259622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:14 compute-0 sudo[259622]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:15 compute-0 sudo[259647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:13:15 compute-0 sudo[259647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:15.104+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:15 compute-0 sudo[259647]: pam_unix(sudo:session): session closed for user root
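The sudo triplets around this point (/bin/true, which python3, then the hashed cephadm binary) are the cephadm mgr module's ssh pattern: it first verifies that passwordless sudo and python3 work for ceph-admin, then executes the cephadm binary it copied to the host under root. gather-facts prints a JSON document of host facts that the orchestrator caches. A sketch of invoking the same step by hand, with the hashed filename copied verbatim from the log line above (it is cluster-specific) and the printed keys being the ones gather-facts is expected to emit:

    import json
    import subprocess

    fsid = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    cephadm_bin = (f"/var/lib/ceph/{fsid}/cephadm."
                   "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    facts = json.loads(subprocess.check_output(
        ["sudo", "/bin/python3", cephadm_bin, "--timeout", "895", "gather-facts"]))
    print(facts.get("hostname"), facts.get("kernel"), facts.get("memory_total_kb"))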
Nov 24 20:13:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:13:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:15.815+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:13:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:13:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:13:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:13:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:13:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2c2ad729-3aba-4ab3-a000-83aa68a5efe8 does not exist
Nov 24 20:13:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f0c69cb2-a9c4-44df-8c26-fa8f536860cd does not exist
Nov 24 20:13:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 81e2f53c-72f8-4e7e-9897-000c464e626b does not exist
Nov 24 20:13:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:13:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:13:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:13:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:13:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:13:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
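Before touching the OSDs, the mgr drives a fixed preamble through the monitor, visible in the handle_command/audit pairs above: config generate-minimal-conf renders the stripped ceph.conf that cephadm distributes to managed hosts, auth get fetches the client.admin and client.bootstrap-osd keyrings, a config-key set persists mgr/cephadm/osd_remove_queue, and an osd tree filtered to destroyed entries looks for replaceable OSD ids. The minimal conf can be reproduced from any admin node with the same command the mgr dispatched:

    import subprocess
    # Renders a minimal [global] section (fsid and mon_host) suitable for
    # shipping to managed hosts.
    print(subprocess.check_output(["ceph", "config", "generate-minimal-conf"]).decode())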
Nov 24 20:13:15 compute-0 sudo[259704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:15 compute-0 sudo[259704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:15 compute-0 sudo[259704]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:16 compute-0 sudo[259729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:13:16 compute-0 sudo[259729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:16 compute-0 sudo[259729]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:16.096+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:16 compute-0 ceph-mon[75677]: pgmap v887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:13:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:13:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:13:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:13:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:13:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:13:16 compute-0 sudo[259754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:16 compute-0 sudo[259754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:16 compute-0 sudo[259754]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:16 compute-0 sudo[259779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:13:16 compute-0 sudo[259779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:13:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2427685178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:13:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:13:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2427685178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.730480393 +0000 UTC m=+0.074132214 container create b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_benz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:13:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.697440168 +0000 UTC m=+0.041092049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:13:16 compute-0 systemd[1]: Started libpod-conmon-b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568.scope.
Nov 24 20:13:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:16.831+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.866359801 +0000 UTC m=+0.210011672 container init b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.882894044 +0000 UTC m=+0.226545855 container start b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.888006285 +0000 UTC m=+0.231658106 container attach b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:13:16 compute-0 beautiful_benz[259859]: 167 167
Nov 24 20:13:16 compute-0 systemd[1]: libpod-b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568.scope: Deactivated successfully.
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.89476038 +0000 UTC m=+0.238412231 container died b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb7c761f07f8c336b8a3d6d360446444cd14c67a25335e57ddb1d411ed325c9-merged.mount: Deactivated successfully.
Nov 24 20:13:16 compute-0 podman[259843]: 2025-11-24 20:13:16.94764976 +0000 UTC m=+0.291301541 container remove b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 20:13:16 compute-0 systemd[1]: libpod-conmon-b6a0e931701ed3ca92915caa5bf6612cb55c543bbf4eb78b911be676e1456568.scope: Deactivated successfully.
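The throwaway container beautiful_benz lives for well under a second, prints "167 167", and is immediately removed. This looks like cephadm's probe for the uid and gid that own the ceph data paths inside the OSD image (167 is the ceph user and group in upstream images); the following is a hedged reconstruction of such a probe, not the literal command cephadm ran:

    import subprocess
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # stat -c '%u %g' prints the numeric owner uid and gid of the path.
    uid_gid = subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"])
    print(uid_gid.decode().strip())  # expected: "167 167"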
Nov 24 20:13:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1317 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2427685178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:13:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2427685178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:13:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:17.131+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:17 compute-0 podman[259882]: 2025-11-24 20:13:17.168794596 +0000 UTC m=+0.072376026 container create 5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:13:17 compute-0 systemd[1]: Started libpod-conmon-5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26.scope.
Nov 24 20:13:17 compute-0 podman[259882]: 2025-11-24 20:13:17.13869933 +0000 UTC m=+0.042280810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:13:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc412f0182bfdc483a087f0b3cccdcc97600cacac946f500ccc327c09771366b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc412f0182bfdc483a087f0b3cccdcc97600cacac946f500ccc327c09771366b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc412f0182bfdc483a087f0b3cccdcc97600cacac946f500ccc327c09771366b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc412f0182bfdc483a087f0b3cccdcc97600cacac946f500ccc327c09771366b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc412f0182bfdc483a087f0b3cccdcc97600cacac946f500ccc327c09771366b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:17 compute-0 podman[259882]: 2025-11-24 20:13:17.29621677 +0000 UTC m=+0.199798270 container init 5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:13:17 compute-0 podman[259882]: 2025-11-24 20:13:17.312466266 +0000 UTC m=+0.216047706 container start 5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:13:17 compute-0 podman[259882]: 2025-11-24 20:13:17.316686863 +0000 UTC m=+0.220268303 container attach 5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:13:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:17.782+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:18.112+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:18 compute-0 ceph-mon[75677]: pgmap v888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1317 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:18 compute-0 gifted_nightingale[259899]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:13:18 compute-0 gifted_nightingale[259899]: --> relative data size: 1.0
Nov 24 20:13:18 compute-0 gifted_nightingale[259899]: --> All data devices are unavailable
Nov 24 20:13:18 compute-0 systemd[1]: libpod-5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26.scope: Deactivated successfully.
Nov 24 20:13:18 compute-0 podman[259882]: 2025-11-24 20:13:18.507269088 +0000 UTC m=+1.410850518 container died 5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:13:18 compute-0 systemd[1]: libpod-5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26.scope: Consumed 1.161s CPU time.
Nov 24 20:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc412f0182bfdc483a087f0b3cccdcc97600cacac946f500ccc327c09771366b-merged.mount: Deactivated successfully.
Nov 24 20:13:18 compute-0 podman[259882]: 2025-11-24 20:13:18.592527036 +0000 UTC m=+1.496108476 container remove 5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:13:18 compute-0 systemd[1]: libpod-conmon-5344b85c7829526cae85f99570720fdeb5ce363f5c765481fe48d8b337108b26.scope: Deactivated successfully.
Nov 24 20:13:18 compute-0 sudo[259779]: pam_unix(sudo:session): session closed for user root
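The batch run that just closed is the notable event here: ceph-volume accepted the three LVs passed on the command line ("passed data devices: 0 physical, 3 LVM"), then rejected every one ("All data devices are unavailable") and exited, after which cephadm falls back to lvm list below. The lv_tags in that listing appear to explain the rejection: each LV already carries ceph.osd_id tags from an existing OSD, so batch treats it as consumed rather than creating anything, which would make this the normal idempotent no-op path rather than a failure. Per-device rejection reasons can be inspected directly from ceph-volume's inventory output; a sketch, assuming it runs inside the deployment container (e.g. cephadm shell):

    import json
    import subprocess
    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in inv:
        if not dev.get("available", False):
            reasons = ", ".join(dev.get("rejected_reasons", [])) or "unspecified"
            print(f"{dev.get('path')}: rejected ({reasons})")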
Nov 24 20:13:18 compute-0 podman[259929]: 2025-11-24 20:13:18.637305135 +0000 UTC m=+0.083887222 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 20:13:18 compute-0 sudo[259959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:18 compute-0 sudo[259959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:18 compute-0 sudo[259959]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:18 compute-0 sudo[259984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:13:18 compute-0 sudo[259984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:18 compute-0 sudo[259984]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:18.822+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:18 compute-0 sudo[260009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:18 compute-0 sudo[260009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:18 compute-0 sudo[260009]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:18 compute-0 sudo[260034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:13:18 compute-0 sudo[260034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
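The JSON that nervous_leakey prints below is the output of ceph-volume lvm list --format json: a map from OSD id to the logical volumes backing it, with the authoritative metadata carried as LVM tags (cluster fsid, per-OSD osd_fsid, the encryption flag, and the osdspec_affinity tying each LV back to default_drive_group). A parsing sketch against exactly the structure shown below:

    import json
    import subprocess
    lvs = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"]))
    for osd_id, entries in sorted(lvs.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(devices={','.join(lv['devices'])}, "
                  f"osd_fsid={tags['ceph.osd_fsid']})")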
Nov 24 20:13:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:19.084+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.461954123 +0000 UTC m=+0.062037672 container create 2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:13:19 compute-0 systemd[1]: Started libpod-conmon-2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2.scope.
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.436005622 +0000 UTC m=+0.036089221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:13:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.559312964 +0000 UTC m=+0.159396543 container init 2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bhabha, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.574970194 +0000 UTC m=+0.175053783 container start 2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.580503956 +0000 UTC m=+0.180587605 container attach 2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bhabha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:13:19 compute-0 trusting_bhabha[260117]: 167 167
Nov 24 20:13:19 compute-0 systemd[1]: libpod-2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2.scope: Deactivated successfully.
Nov 24 20:13:19 compute-0 conmon[260117]: conmon 2f5fad5154a4b90f4ae8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2.scope/container/memory.events
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.585854813 +0000 UTC m=+0.185938402 container died 2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c15bdbb69da854321baf1e85e3449fb701507ef48f3a7ede6a13ea80dbff985-merged.mount: Deactivated successfully.
Nov 24 20:13:19 compute-0 podman[260101]: 2025-11-24 20:13:19.648653535 +0000 UTC m=+0.248737114 container remove 2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:13:19 compute-0 systemd[1]: libpod-conmon-2f5fad5154a4b90f4ae8823c7548f84dd406c6c5a988ed8b934aadc84f96c3f2.scope: Deactivated successfully.
Nov 24 20:13:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:19.830+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:19 compute-0 podman[260142]: 2025-11-24 20:13:19.897077539 +0000 UTC m=+0.075300006 container create a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_leakey, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:13:19 compute-0 systemd[1]: Started libpod-conmon-a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7.scope.
Nov 24 20:13:19 compute-0 podman[260142]: 2025-11-24 20:13:19.866879931 +0000 UTC m=+0.045102448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:13:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03b6aceee9130ed3a8416e3db1a168030689a9a57e07873ccc23ea9f2f6f67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03b6aceee9130ed3a8416e3db1a168030689a9a57e07873ccc23ea9f2f6f67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03b6aceee9130ed3a8416e3db1a168030689a9a57e07873ccc23ea9f2f6f67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03b6aceee9130ed3a8416e3db1a168030689a9a57e07873ccc23ea9f2f6f67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:20 compute-0 podman[260142]: 2025-11-24 20:13:20.002537511 +0000 UTC m=+0.180759988 container init a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_leakey, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:13:20 compute-0 podman[260142]: 2025-11-24 20:13:20.016241467 +0000 UTC m=+0.194463934 container start a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:13:20 compute-0 podman[260142]: 2025-11-24 20:13:20.020444123 +0000 UTC m=+0.198666880 container attach a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_leakey, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:13:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:20.052+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:20 compute-0 ceph-mon[75677]: pgmap v889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:20 compute-0 nervous_leakey[260158]: {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:     "0": [
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:         {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "devices": [
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "/dev/loop3"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             ],
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_name": "ceph_lv0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_size": "21470642176",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "name": "ceph_lv0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "tags": {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cluster_name": "ceph",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.crush_device_class": "",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.encrypted": "0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osd_id": "0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.type": "block",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.vdo": "0"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             },
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "type": "block",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "vg_name": "ceph_vg0"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:         }
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:     ],
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:     "1": [
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:         {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "devices": [
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "/dev/loop4"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             ],
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_name": "ceph_lv1",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_size": "21470642176",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "name": "ceph_lv1",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "tags": {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cluster_name": "ceph",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.crush_device_class": "",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.encrypted": "0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osd_id": "1",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.type": "block",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.vdo": "0"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             },
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "type": "block",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "vg_name": "ceph_vg1"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:         }
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:     ],
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:     "2": [
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:         {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "devices": [
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "/dev/loop5"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             ],
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_name": "ceph_lv2",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_size": "21470642176",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "name": "ceph_lv2",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "tags": {
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.cluster_name": "ceph",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.crush_device_class": "",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.encrypted": "0",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osd_id": "2",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.type": "block",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:                 "ceph.vdo": "0"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             },
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "type": "block",
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:             "vg_name": "ceph_vg2"
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:         }
Nov 24 20:13:20 compute-0 nervous_leakey[260158]:     ]
Nov 24 20:13:20 compute-0 nervous_leakey[260158]: }
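The JSON block closed above, keyed by OSD id ("0" through "2"), has the shape of ceph-volume lvm list --format json output: each OSD id maps to a list of logical volumes, with the same metadata carried twice, once as the flat lv_tags string and once as the parsed tags object. A minimal parsing sketch, assuming the block has been captured to a file (the filename lvm_list.json and the field selection are illustrative, not taken from this log):

    import json

    # Illustrative path; holds the JSON printed above by the nervous_leakey
    # ceph-volume container (an assumption, not something in the log).
    with open("lvm_list.json") as f:
        listing = json.load(f)

    # Map each OSD id to its backing LV path and osd_fsid, e.g.
    #   osd.0: /dev/ceph_vg0/ceph_lv0 (osd_fsid=ca6a1aee-...)
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")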
Nov 24 20:13:20 compute-0 systemd[1]: libpod-a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7.scope: Deactivated successfully.
Nov 24 20:13:20 compute-0 podman[260167]: 2025-11-24 20:13:20.839768885 +0000 UTC m=+0.037237842 container died a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 20:13:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:20.847+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d03b6aceee9130ed3a8416e3db1a168030689a9a57e07873ccc23ea9f2f6f67-merged.mount: Deactivated successfully.
Nov 24 20:13:20 compute-0 podman[260167]: 2025-11-24 20:13:20.929410774 +0000 UTC m=+0.126879681 container remove a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_leakey, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:13:20 compute-0 systemd[1]: libpod-conmon-a19c0a6e21c34527cd600df2e08be0047653d91661b807d3b91c5b630d62dca7.scope: Deactivated successfully.
Nov 24 20:13:20 compute-0 sudo[260034]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:21.017+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:21 compute-0 sudo[260181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:21 compute-0 sudo[260181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:21 compute-0 sudo[260181]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:21 compute-0 sudo[260206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:13:21 compute-0 sudo[260206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:21 compute-0 sudo[260206]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:21 compute-0 sudo[260231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:21 compute-0 sudo[260231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:21 compute-0 sudo[260231]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:21 compute-0 sudo[260256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:13:21 compute-0 sudo[260256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:21.819+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:21 compute-0 podman[260322]: 2025-11-24 20:13:21.866736943 +0000 UTC m=+0.077521447 container create a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:13:21 compute-0 systemd[1]: Started libpod-conmon-a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685.scope.
Nov 24 20:13:21 compute-0 podman[260322]: 2025-11-24 20:13:21.83525775 +0000 UTC m=+0.046042304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:13:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:13:21 compute-0 podman[260322]: 2025-11-24 20:13:21.972032141 +0000 UTC m=+0.182816625 container init a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:13:21 compute-0 podman[260322]: 2025-11-24 20:13:21.984133263 +0000 UTC m=+0.194917757 container start a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:13:21 compute-0 podman[260322]: 2025-11-24 20:13:21.98984304 +0000 UTC m=+0.200627504 container attach a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:13:21 compute-0 intelligent_haslett[260338]: 167 167
Nov 24 20:13:21 compute-0 systemd[1]: libpod-a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685.scope: Deactivated successfully.
Nov 24 20:13:21 compute-0 podman[260322]: 2025-11-24 20:13:21.992233326 +0000 UTC m=+0.203017830 container died a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:13:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:21.991+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a817c7cc106635d4505881c9397515244db513d563a66d6201ea9579249a37c-merged.mount: Deactivated successfully.
Nov 24 20:13:22 compute-0 podman[260322]: 2025-11-24 20:13:22.044782906 +0000 UTC m=+0.255567410 container remove a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:13:22 compute-0 systemd[1]: libpod-conmon-a0cc5b5c90abe5a273d3af19a0f96f88a8f421e07bf2cf553125748fa4c28685.scope: Deactivated successfully.
Nov 24 20:13:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:22 compute-0 ceph-mon[75677]: pgmap v890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:22 compute-0 podman[260360]: 2025-11-24 20:13:22.281021207 +0000 UTC m=+0.071274997 container create 481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:13:22 compute-0 systemd[1]: Started libpod-conmon-481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a.scope.
Nov 24 20:13:22 compute-0 podman[260360]: 2025-11-24 20:13:22.251667721 +0000 UTC m=+0.041921571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:13:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070177be288e8c9f6cf98fedca48aec9c44825f11d66f72bc69d982ef21801b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070177be288e8c9f6cf98fedca48aec9c44825f11d66f72bc69d982ef21801b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070177be288e8c9f6cf98fedca48aec9c44825f11d66f72bc69d982ef21801b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070177be288e8c9f6cf98fedca48aec9c44825f11d66f72bc69d982ef21801b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:13:22 compute-0 podman[260360]: 2025-11-24 20:13:22.41204426 +0000 UTC m=+0.202298070 container init 481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:13:22 compute-0 podman[260360]: 2025-11-24 20:13:22.426314172 +0000 UTC m=+0.216567952 container start 481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:13:22 compute-0 podman[260360]: 2025-11-24 20:13:22.431812402 +0000 UTC m=+0.222066152 container attach 481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cannon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:13:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:22.810+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:22.969+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]: {
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "osd_id": 2,
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "type": "bluestore"
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:     },
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "osd_id": 1,
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "type": "bluestore"
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:     },
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "osd_id": 0,
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:         "type": "bluestore"
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]:     }
Nov 24 20:13:23 compute-0 stupefied_cannon[260377]: }
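This second block is the output of the raw list --format json call logged at 20:13:21 (the sudo line invoking cephadm with -- raw list --format json). Unlike the listing above, it is keyed by osd_uuid rather than OSD id, reports the device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) instead of the LV path, and carries osd_id as an integer. A sketch cross-checking the two listings against each other, again assuming both blocks were saved under illustrative filenames:

    import json

    # Illustrative filenames for the two JSON blocks captured above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)    # keyed by OSD id ("0", "1", "2")
    with open("raw_list.json") as f:
        raw = json.load(f)    # keyed by osd_uuid

    # Every ceph.osd_fsid tag in the LVM listing should be a key in the
    # raw listing, with a matching integer osd_id and the same cluster fsid.
    for osd_id, lvs in lvm.items():
        for lv in lvs:
            entry = raw[lv["tags"]["ceph.osd_fsid"]]
            assert entry["osd_id"] == int(osd_id)
            assert entry["ceph_fsid"] == lv["tags"]["ceph.cluster_fsid"]
            print(f"osd.{osd_id}: {entry['device']} ({entry['type']})")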
Nov 24 20:13:23 compute-0 systemd[1]: libpod-481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a.scope: Deactivated successfully.
Nov 24 20:13:23 compute-0 podman[260360]: 2025-11-24 20:13:23.49667365 +0000 UTC m=+1.286927390 container died 481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:13:23 compute-0 systemd[1]: libpod-481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a.scope: Consumed 1.067s CPU time.
Nov 24 20:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-070177be288e8c9f6cf98fedca48aec9c44825f11d66f72bc69d982ef21801b4-merged.mount: Deactivated successfully.
Nov 24 20:13:23 compute-0 podman[260360]: 2025-11-24 20:13:23.563370809 +0000 UTC m=+1.353624559 container remove 481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:13:23 compute-0 systemd[1]: libpod-conmon-481a86592c8b5880706aab070ad8640756aa81f04998dd4c32f045e0e72b867a.scope: Deactivated successfully.
Nov 24 20:13:23 compute-0 sudo[260256]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:13:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:13:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:13:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:13:23 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 66cfeea5-80d4-43b4-b9fb-f9d62b4e93a3 does not exist
Nov 24 20:13:23 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9757e240-0bf5-471f-8234-5c6a7fe455b2 does not exist
Nov 24 20:13:23 compute-0 sudo[260423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:13:23 compute-0 sudo[260423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:23 compute-0 sudo[260423]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:23.818+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:23 compute-0 sudo[260448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:13:23 compute-0 sudo[260448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:13:23 compute-0 sudo[260448]: pam_unix(sudo:session): session closed for user root
Nov 24 20:13:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:23.962+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:24 compute-0 ceph-mon[75677]: pgmap v891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:13:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:13:24
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta']
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:13:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:24.807+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:24.935+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:25.827+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:25.914+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:26 compute-0 ceph-mon[75677]: pgmap v892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
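The aggregate in this health check follows from the per-OSD reports above: osd.0 contributes 1 slow op (pool 'vms') and osd.1 contributes 19 (pool 'default.rgw.log'), for the 20 slow ops attributed to daemons [osd.0,osd.1]. A worked check of the arithmetic and of when the oldest op must have arrived (times taken from this log):

    from datetime import datetime, timedelta

    # 1 slow op on osd.0 plus 19 on osd.1 matches the mon's total of 20.
    assert 1 + 19 == 20
    # Blocked for 1322 s as of 20:13:26 UTC => arrived ~19:51:24 UTC.
    print(datetime(2025, 11, 24, 20, 13, 26) - timedelta(seconds=1322))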
Nov 24 20:13:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:26.812+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:26.913+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:27.855+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:27.882+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:28 compute-0 ceph-mon[75677]: pgmap v893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:28.836+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:28.864+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:29.833+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:29.864+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:29 compute-0 podman[260473]: 2025-11-24 20:13:29.887017737 +0000 UTC m=+0.107998224 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:13:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:30 compute-0 ceph-mon[75677]: pgmap v894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:30.801+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:30.815+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:31.775+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:31.819+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:32 compute-0 ceph-mon[75677]: pgmap v895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:32.731+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:32.819+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:33.777+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:33.854+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:34 compute-0 ceph-mon[75677]: pgmap v896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:13:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
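[annotation] The pg_autoscaler targets above are reproducible arithmetic: each "pg target" equals the pool's fraction of space used, times its bias, times an overall PG budget that works out to 300 for every pool in this pass. A plausible reading, though the lines do not state it, is mon_target_pg_per_osd at its default of 100 times three OSDs behind the 60 GiB of raw capacity. A checking sketch under that assumption:

```python
# Reproduce the pg_autoscaler "pg target" values logged above.
# Assumption: the 300-PG budget = mon_target_pg_per_osd (default 100)
# x 3 OSDs; the OSD count is inferred from capacity, not stated here.
PG_BUDGET = 100 * 3

pools = {
    # name: (fraction of space used, bias), copied from the log lines
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    ".rgw.root":          (2.5436283128215145e-07, 1.0),
    "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}

for name, (used, bias) in pools.items():
    print(f"{name}: pg target {used * bias * PG_BUDGET}")
# .mgr: pg target ~0.002155725   (cf. logged 0.0021557249951162337)
# cephfs.cephfs.meta: ~0.00061047 (cf. logged 0.0006104707950771635)
```

All of these targets sit far below each pool's current pg_num, which matches the "quantized to N (current N)" suffixes: the autoscaler apparently finds no adjustment worth making on this pass.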
Nov 24 20:13:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:34.768+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:34.829+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:34 compute-0 podman[260495]: 2025-11-24 20:13:34.934959423 +0000 UTC m=+0.151445105 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
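[annotation] The podman health_status record above carries the container's entire definition in a config_data={...} label rendered as a Python literal (single quotes, True). One way to recover it, sketched here, is naive brace matching plus ast.literal_eval; extract_config_data is a helper written for this note, not part of podman, and it assumes no quoted value contains a brace (true for this line).

```python
import ast

def extract_config_data(journal_line: str) -> dict:
    """Pull the config_data={...} label out of a podman health_status
    journal line. The dict is a Python literal, so ast.literal_eval
    can parse it once the substring is isolated."""
    start = journal_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(journal_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(journal_line[start:i + 1])
    raise ValueError("unbalanced braces in config_data")

# Usage against the line above:
#   cfg = extract_config_data(line)
#   cfg["healthcheck"]["test"]  ->  '/openstack/healthcheck'
# i.e. the probe whose success produced health_status=healthy.
```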
Nov 24 20:13:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:35.801+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:35.826+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:36 compute-0 ceph-mon[75677]: pgmap v897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:36.791+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:36.848+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1337 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:37.836+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:37.884+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:38 compute-0 sshd-session[260493]: Invalid user plex from 27.79.44.141 port 38456
Nov 24 20:13:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:38 compute-0 ceph-mon[75677]: pgmap v898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1337 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:38.811+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:38.874+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:39.773+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:39.862+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:40 compute-0 ceph-mon[75677]: pgmap v899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:13:40 compute-0 sshd-session[260493]: Connection closed by invalid user plex 27.79.44.141 port 38456 [preauth]
Nov 24 20:13:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:40.732+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:40.906+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:41.727+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:41.938+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:42 compute-0 ceph-mon[75677]: pgmap v900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:42.770+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:42.944+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:43.790+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:43.957+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:44 compute-0 ceph-mon[75677]: pgmap v901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:44.798+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:44.915+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:45.827+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:45.952+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:46 compute-0 ceph-mon[75677]: pgmap v902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:46.846+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:46.989+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:47.820+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:47.939+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:48 compute-0 ceph-mon[75677]: pgmap v903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:48.827+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:48 compute-0 podman[260522]: 2025-11-24 20:13:48.860310955 +0000 UTC m=+0.087798900 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:13:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:48.983+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:49.840+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:50.005+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:50 compute-0 ceph-mon[75677]: pgmap v904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:50.795+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:51.037+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1352 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:51.781+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:52.022+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:52 compute-0 ceph-mon[75677]: pgmap v905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1352 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:52.753+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:53.061+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:53.720+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:54.030+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:13:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:54 compute-0 ceph-mon[75677]: pgmap v906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:54.766+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:55.061+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:55.779+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:56.029+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:56 compute-0 ceph-mon[75677]: pgmap v907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:56.761+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:13:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:57.063+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:57.806+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:58.079+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:58 compute-0 ceph-mon[75677]: pgmap v908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:13:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:58.784+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:13:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:13:59.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:13:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:13:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:13:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:13:59.791+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:13:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:00.060+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:00 compute-0 ceph-mon[75677]: pgmap v909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:00.798+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:00 compute-0 podman[260542]: 2025-11-24 20:14:00.874175309 +0000 UTC m=+0.106065387 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:14:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:01.096+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:01.825+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:02.087+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:02 compute-0 ceph-mon[75677]: pgmap v910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:02.808+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:03.121+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:03.798+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:04.126+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.385 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.386 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.386 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.387 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.387 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.387 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:04 compute-0 nova_compute[257476]: 2025-11-24 20:14:04.387 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:14:04 compute-0 ceph-mon[75677]: pgmap v911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:04.780+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:05.079+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.171 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.171 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.206 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.206 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.207 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.207 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.208 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:14:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:14:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422701153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.659 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:14:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:05.819+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:05 compute-0 podman[260584]: 2025-11-24 20:14:05.94022596 +0000 UTC m=+0.157129063 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.948 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.950 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.950 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:14:05 compute-0 nova_compute[257476]: 2025-11-24 20:14:05.951 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.077 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.077 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:14:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:06.094+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.101 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:14:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:14:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411791120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.553 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.563 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.586 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.589 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:14:06 compute-0 nova_compute[257476]: 2025-11-24 20:14:06.590 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:14:06 compute-0 ceph-mon[75677]: pgmap v912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3422701153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:14:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3411791120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:14:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:06.833+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:07.057+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:07 compute-0 nova_compute[257476]: 2025-11-24 20:14:07.585 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:14:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:07.829+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:08.058+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:08 compute-0 ceph-mon[75677]: pgmap v913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:08.825+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:09.073+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:14:09.366 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:14:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:14:09.367 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:14:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:14:09.367 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:14:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:09.842+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:10.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:10 compute-0 ceph-mon[75677]: pgmap v914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:10.892+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:11.064+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:11.892+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:12.114+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:12 compute-0 ceph-mon[75677]: pgmap v915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:12.863+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:13.141+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:13.912+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:14.146+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:14 compute-0 ceph-mon[75677]: pgmap v916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:14.900+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:15.135+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:15.879+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:16.115+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157906092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157906092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:14:16 compute-0 ceph-mon[75677]: pgmap v917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1157906092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:14:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1157906092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:16.890+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:17.123+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1377 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:17.861+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:18.105+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:18 compute-0 ceph-mon[75677]: pgmap v918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1377 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:18.849+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:19.105+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:19.821+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:19 compute-0 podman[260633]: 2025-11-24 20:14:19.865138965 +0000 UTC m=+0.090639872 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:20.071+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:20 compute-0 ceph-mon[75677]: pgmap v919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:20.845+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:21.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:21.813+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:22.055+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:22 compute-0 ceph-mon[75677]: pgmap v920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:22.833+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:23.075+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:23.847+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:23 compute-0 sudo[260655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:23 compute-0 sudo[260655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:23 compute-0 sudo[260655]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:24.029+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:24 compute-0 sudo[260680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:14:24 compute-0 sudo[260680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:24 compute-0 sudo[260680]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:24 compute-0 sudo[260705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:24 compute-0 sudo[260705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:24 compute-0 sudo[260705]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:24 compute-0 sudo[260730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:14:24 compute-0 sudo[260730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:14:24
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root', 'images', 'backups', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:14:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:24 compute-0 ceph-mon[75677]: pgmap v921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:24.818+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:24 compute-0 sudo[260730]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 273ece87-34a2-494d-a389-891493748a40 does not exist
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fc52389a-a611-44ab-8cd5-936f61be9db9 does not exist
Nov 24 20:14:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev daaaa367-dfe6-4282-bfab-b2a087c14daf does not exist
Nov 24 20:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:14:25 compute-0 sudo[260786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:25 compute-0 sudo[260786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:25 compute-0 sudo[260786]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:25.070+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:25 compute-0 sudo[260811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:14:25 compute-0 sudo[260811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:25 compute-0 sudo[260811]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:25 compute-0 sudo[260836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:25 compute-0 sudo[260836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:25 compute-0 sudo[260836]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:25 compute-0 sudo[260861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:14:25 compute-0 sudo[260861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:14:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:14:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:14:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:14:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:14:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:14:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:25.838+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:25 compute-0 podman[260926]: 2025-11-24 20:14:25.844357556 +0000 UTC m=+0.074577730 container create 661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:14:25 compute-0 systemd[1]: Started libpod-conmon-661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26.scope.
Nov 24 20:14:25 compute-0 podman[260926]: 2025-11-24 20:14:25.816070134 +0000 UTC m=+0.046290378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:14:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:14:25 compute-0 podman[260926]: 2025-11-24 20:14:25.958841148 +0000 UTC m=+0.189061392 container init 661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:14:25 compute-0 podman[260926]: 2025-11-24 20:14:25.973244387 +0000 UTC m=+0.203464591 container start 661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:14:25 compute-0 podman[260926]: 2025-11-24 20:14:25.977543722 +0000 UTC m=+0.207763966 container attach 661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:14:25 compute-0 wonderful_buck[260943]: 167 167
Nov 24 20:14:25 compute-0 systemd[1]: libpod-661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26.scope: Deactivated successfully.
Nov 24 20:14:25 compute-0 podman[260926]: 2025-11-24 20:14:25.982817374 +0000 UTC m=+0.213037578 container died 661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a617637762004cced7ec790cf18f29424ab1fa6c752003225dfd570e4ad4a593-merged.mount: Deactivated successfully.
Nov 24 20:14:26 compute-0 podman[260926]: 2025-11-24 20:14:26.042219173 +0000 UTC m=+0.272439367 container remove 661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_buck, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:14:26 compute-0 systemd[1]: libpod-conmon-661603ec38711d1b6d1d80943bc309c1c7377355bca7e0ac424f57bfe5001d26.scope: Deactivated successfully.
Nov 24 20:14:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:26.069+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:26 compute-0 podman[260967]: 2025-11-24 20:14:26.293900831 +0000 UTC m=+0.061084146 container create db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:14:26 compute-0 systemd[1]: Started libpod-conmon-db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0.scope.
Nov 24 20:14:26 compute-0 podman[260967]: 2025-11-24 20:14:26.26230163 +0000 UTC m=+0.029484995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:14:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324c35342c8db755e1b6e8872382fe10ba2beb685eb7b78f18e1c9dd22e8175f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324c35342c8db755e1b6e8872382fe10ba2beb685eb7b78f18e1c9dd22e8175f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324c35342c8db755e1b6e8872382fe10ba2beb685eb7b78f18e1c9dd22e8175f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324c35342c8db755e1b6e8872382fe10ba2beb685eb7b78f18e1c9dd22e8175f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324c35342c8db755e1b6e8872382fe10ba2beb685eb7b78f18e1c9dd22e8175f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:26 compute-0 podman[260967]: 2025-11-24 20:14:26.411384284 +0000 UTC m=+0.178567600 container init db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:14:26 compute-0 podman[260967]: 2025-11-24 20:14:26.425508325 +0000 UTC m=+0.192691650 container start db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 20:14:26 compute-0 podman[260967]: 2025-11-24 20:14:26.430224522 +0000 UTC m=+0.197407847 container attach db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:14:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:26 compute-0 ceph-mon[75677]: pgmap v922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1382 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:26.840+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:27.076+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:27 compute-0 gifted_sammet[260984]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:14:27 compute-0 gifted_sammet[260984]: --> relative data size: 1.0
Nov 24 20:14:27 compute-0 gifted_sammet[260984]: --> All data devices are unavailable
Nov 24 20:14:27 compute-0 systemd[1]: libpod-db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0.scope: Deactivated successfully.
Nov 24 20:14:27 compute-0 systemd[1]: libpod-db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0.scope: Consumed 1.228s CPU time.
Nov 24 20:14:27 compute-0 podman[261013]: 2025-11-24 20:14:27.775524719 +0000 UTC m=+0.047654685 container died db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:14:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1382 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-324c35342c8db755e1b6e8872382fe10ba2beb685eb7b78f18e1c9dd22e8175f-merged.mount: Deactivated successfully.
Nov 24 20:14:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:27.835+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:27 compute-0 podman[261013]: 2025-11-24 20:14:27.851745141 +0000 UTC m=+0.123875097 container remove db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:14:27 compute-0 systemd[1]: libpod-conmon-db2fe91a37d700b3c372536f1254a07fb30e9756ea4c8846c0f7ebb64f95b3e0.scope: Deactivated successfully.
Nov 24 20:14:27 compute-0 sudo[260861]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:27 compute-0 sudo[261029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:28 compute-0 sudo[261029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:28 compute-0 sudo[261029]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:28 compute-0 sudo[261054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:14:28 compute-0 sudo[261054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:28.106+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:28 compute-0 sudo[261054]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:28 compute-0 sudo[261079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:28 compute-0 sudo[261079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:28 compute-0 sudo[261079]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:28 compute-0 sudo[261104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:14:28 compute-0 sudo[261104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:28.793+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:28 compute-0 ceph-mon[75677]: pgmap v923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.807626411 +0000 UTC m=+0.076964443 container create efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:14:28 compute-0 systemd[1]: Started libpod-conmon-efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3.scope.
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.776088382 +0000 UTC m=+0.045426464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:14:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.915775214 +0000 UTC m=+0.185113246 container init efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.928910738 +0000 UTC m=+0.198248770 container start efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.933036189 +0000 UTC m=+0.202374251 container attach efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:14:28 compute-0 hopeful_lehmann[261187]: 167 167
Nov 24 20:14:28 compute-0 systemd[1]: libpod-efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3.scope: Deactivated successfully.
Nov 24 20:14:28 compute-0 conmon[261187]: conmon efeaa4296c6de0bf33ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3.scope/container/memory.events
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.93828324 +0000 UTC m=+0.207621292 container died efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b76dd01faba9ea5784969eb82f8a966b5ea3b884773412f28638d9a369772031-merged.mount: Deactivated successfully.
Nov 24 20:14:28 compute-0 podman[261171]: 2025-11-24 20:14:28.988826251 +0000 UTC m=+0.258164273 container remove efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:14:29 compute-0 systemd[1]: libpod-conmon-efeaa4296c6de0bf33ac473adff1aa1dda6cdcbd4e3494d62f15643f295a3dc3.scope: Deactivated successfully.
Nov 24 20:14:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:29.111+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:29 compute-0 podman[261211]: 2025-11-24 20:14:29.264964837 +0000 UTC m=+0.077952710 container create 94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:14:29 compute-0 systemd[1]: Started libpod-conmon-94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193.scope.
Nov 24 20:14:29 compute-0 podman[261211]: 2025-11-24 20:14:29.235246117 +0000 UTC m=+0.048234030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:14:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e14577f70a019a01b5bf8dcae61a7b96e377fd7dd10e5e76146b997624e9e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e14577f70a019a01b5bf8dcae61a7b96e377fd7dd10e5e76146b997624e9e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e14577f70a019a01b5bf8dcae61a7b96e377fd7dd10e5e76146b997624e9e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e14577f70a019a01b5bf8dcae61a7b96e377fd7dd10e5e76146b997624e9e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:29 compute-0 podman[261211]: 2025-11-24 20:14:29.386268064 +0000 UTC m=+0.199255987 container init 94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:14:29 compute-0 podman[261211]: 2025-11-24 20:14:29.397946038 +0000 UTC m=+0.210933901 container start 94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:14:29 compute-0 podman[261211]: 2025-11-24 20:14:29.402124641 +0000 UTC m=+0.215112514 container attach 94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:14:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:29.802+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:30.131+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]: {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:     "0": [
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:         {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "devices": [
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "/dev/loop3"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             ],
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_name": "ceph_lv0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_size": "21470642176",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "name": "ceph_lv0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "tags": {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cluster_name": "ceph",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.crush_device_class": "",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.encrypted": "0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osd_id": "0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.type": "block",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.vdo": "0"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             },
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "type": "block",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "vg_name": "ceph_vg0"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:         }
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:     ],
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:     "1": [
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:         {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "devices": [
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "/dev/loop4"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             ],
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_name": "ceph_lv1",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_size": "21470642176",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "name": "ceph_lv1",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "tags": {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cluster_name": "ceph",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.crush_device_class": "",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.encrypted": "0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osd_id": "1",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.type": "block",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.vdo": "0"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             },
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "type": "block",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "vg_name": "ceph_vg1"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:         }
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:     ],
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:     "2": [
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:         {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "devices": [
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "/dev/loop5"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             ],
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_name": "ceph_lv2",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_size": "21470642176",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "name": "ceph_lv2",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "tags": {
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.cluster_name": "ceph",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.crush_device_class": "",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.encrypted": "0",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osd_id": "2",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.type": "block",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:                 "ceph.vdo": "0"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             },
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "type": "block",
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:             "vg_name": "ceph_vg2"
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:         }
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]:     ]
Nov 24 20:14:30 compute-0 happy_grothendieck[261230]: }
Nov 24 20:14:30 compute-0 systemd[1]: libpod-94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193.scope: Deactivated successfully.
Nov 24 20:14:30 compute-0 podman[261211]: 2025-11-24 20:14:30.238308697 +0000 UTC m=+1.051296540 container died 94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e14577f70a019a01b5bf8dcae61a7b96e377fd7dd10e5e76146b997624e9e2-merged.mount: Deactivated successfully.
Nov 24 20:14:30 compute-0 podman[261211]: 2025-11-24 20:14:30.305695142 +0000 UTC m=+1.118682985 container remove 94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:14:30 compute-0 systemd[1]: libpod-conmon-94fcee4af301487c5b8ca4c0743ad0dddd78d3a45f29b7818d609866d74dd193.scope: Deactivated successfully.
Nov 24 20:14:30 compute-0 sudo[261104]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:30 compute-0 sshd-session[261219]: Invalid user admin from 27.79.44.141 port 33758
Nov 24 20:14:30 compute-0 sudo[261253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:30 compute-0 sudo[261253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:30 compute-0 sudo[261253]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:30 compute-0 sudo[261278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:14:30 compute-0 sudo[261278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:30 compute-0 sudo[261278]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:30 compute-0 sudo[261303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:30 compute-0 sudo[261303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:30 compute-0 sudo[261303]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:30 compute-0 sshd-session[261219]: Connection closed by invalid user admin 27.79.44.141 port 33758 [preauth]
Nov 24 20:14:30 compute-0 sudo[261328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:14:30 compute-0 sudo[261328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:30 compute-0 ceph-mon[75677]: pgmap v924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:30.831+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:31.136+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.248445409 +0000 UTC m=+0.074214430 container create 289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:14:31 compute-0 systemd[1]: Started libpod-conmon-289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c.scope.
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.215640996 +0000 UTC m=+0.041410047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:14:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.354084843 +0000 UTC m=+0.179853844 container init 289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.367288159 +0000 UTC m=+0.193057170 container start 289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.372162641 +0000 UTC m=+0.197931652 container attach 289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:14:31 compute-0 lucid_raman[261412]: 167 167
Nov 24 20:14:31 compute-0 systemd[1]: libpod-289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c.scope: Deactivated successfully.
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.377005151 +0000 UTC m=+0.202774182 container died 289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-86b7de47472e838b4374c506718a33c894ce68360c84009acf19fba1393f671d-merged.mount: Deactivated successfully.
Nov 24 20:14:31 compute-0 podman[261409]: 2025-11-24 20:14:31.424414918 +0000 UTC m=+0.118149163 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 20:14:31 compute-0 podman[261395]: 2025-11-24 20:14:31.433108942 +0000 UTC m=+0.258877943 container remove 289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_raman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:14:31 compute-0 systemd[1]: libpod-conmon-289febbe9898b5c028eeee71fea5ec96e82611568602cfcd15f4b43062dc736c.scope: Deactivated successfully.
Nov 24 20:14:31 compute-0 podman[261455]: 2025-11-24 20:14:31.681461519 +0000 UTC m=+0.069714508 container create 21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:14:31 compute-0 systemd[1]: Started libpod-conmon-21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6.scope.
Nov 24 20:14:31 compute-0 podman[261455]: 2025-11-24 20:14:31.655801118 +0000 UTC m=+0.044054167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:14:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf113d24c7dcc8c7fbef4ee524bfffe06d1862dcf5b9163ff53dde0c99f9b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf113d24c7dcc8c7fbef4ee524bfffe06d1862dcf5b9163ff53dde0c99f9b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf113d24c7dcc8c7fbef4ee524bfffe06d1862dcf5b9163ff53dde0c99f9b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf113d24c7dcc8c7fbef4ee524bfffe06d1862dcf5b9163ff53dde0c99f9b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:14:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1387 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:31 compute-0 podman[261455]: 2025-11-24 20:14:31.807148164 +0000 UTC m=+0.195401193 container init 21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_buck, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:14:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:31.817+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:31 compute-0 podman[261455]: 2025-11-24 20:14:31.820530864 +0000 UTC m=+0.208783863 container start 21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_buck, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:14:31 compute-0 podman[261455]: 2025-11-24 20:14:31.826188047 +0000 UTC m=+0.214441056 container attach 21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:14:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:31 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1387 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:32.144+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:32 compute-0 ceph-mon[75677]: pgmap v925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:32.855+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:32 compute-0 quirky_buck[261472]: {
Nov 24 20:14:32 compute-0 quirky_buck[261472]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "osd_id": 2,
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "type": "bluestore"
Nov 24 20:14:32 compute-0 quirky_buck[261472]:     },
Nov 24 20:14:32 compute-0 quirky_buck[261472]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "osd_id": 1,
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "type": "bluestore"
Nov 24 20:14:32 compute-0 quirky_buck[261472]:     },
Nov 24 20:14:32 compute-0 quirky_buck[261472]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "osd_id": 0,
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:14:32 compute-0 quirky_buck[261472]:         "type": "bluestore"
Nov 24 20:14:32 compute-0 quirky_buck[261472]:     }
Nov 24 20:14:32 compute-0 quirky_buck[261472]: }
Nov 24 20:14:32 compute-0 systemd[1]: libpod-21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6.scope: Deactivated successfully.
Nov 24 20:14:32 compute-0 podman[261455]: 2025-11-24 20:14:32.964830568 +0000 UTC m=+1.353083567 container died 21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:14:32 compute-0 systemd[1]: libpod-21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6.scope: Consumed 1.147s CPU time.
Nov 24 20:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4adf113d24c7dcc8c7fbef4ee524bfffe06d1862dcf5b9163ff53dde0c99f9b3-merged.mount: Deactivated successfully.
Nov 24 20:14:33 compute-0 podman[261455]: 2025-11-24 20:14:33.048848581 +0000 UTC m=+1.437101570 container remove 21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:14:33 compute-0 systemd[1]: libpod-conmon-21c8183b335849e985da9c44f4c6ceae9b4bdea87a0846ff7c585712763bead6.scope: Deactivated successfully.
Nov 24 20:14:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:33 compute-0 sudo[261328]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:14:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:14:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:14:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:33.119+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:14:33 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8a6c6946-2363-414c-9427-409bb50e1740 does not exist
Nov 24 20:14:33 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0955a282-3e7a-438e-907c-2a578cb62af6 does not exist
Nov 24 20:14:33 compute-0 sudo[261519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:14:33 compute-0 sudo[261519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:33 compute-0 sudo[261519]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:33 compute-0 sudo[261544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:14:33 compute-0 sudo[261544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:14:33 compute-0 sudo[261544]: pam_unix(sudo:session): session closed for user root
Nov 24 20:14:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:33.854+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:14:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:14:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:34.163+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:14:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:14:34 compute-0 ceph-mon[75677]: pgmap v926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:34.885+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:35.205+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:35.861+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:36.204+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1392 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:36.822+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:36 compute-0 ceph-mon[75677]: pgmap v927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:36 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1392 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:36 compute-0 podman[261569]: 2025-11-24 20:14:36.918802172 +0000 UTC m=+0.141369848 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:14:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:37.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:37.869+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:38.138+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:38.893+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:38 compute-0 ceph-mon[75677]: pgmap v928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:39.176+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:39.895+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:40.140+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:14:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:14:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:14:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:14:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:14:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:40.897+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:40 compute-0 ceph-mon[75677]: pgmap v929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:41.165+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:41.897+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:41 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:42.145+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:42.888+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:42 compute-0 ceph-mon[75677]: pgmap v930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:43.175+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:43.877+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:44.169+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:44.911+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:44 compute-0 ceph-mon[75677]: pgmap v931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:45.136+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:45.888+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:46.102+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:46.914+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:47.088+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:47 compute-0 ceph-mon[75677]: pgmap v932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:47.880+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:48.095+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:48 compute-0 ceph-mon[75677]: pgmap v933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:48.891+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:49.057+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:49.882+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:50.096+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:50 compute-0 ceph-mon[75677]: pgmap v934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:50 compute-0 podman[261595]: 2025-11-24 20:14:50.854485765 +0000 UTC m=+0.077996620 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 20:14:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:50.906+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:51.091+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1412 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:51.861+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:52.100+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:52 compute-0 ceph-mon[75677]: pgmap v935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1412 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:52.902+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:53.068+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:53.944+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:54.038+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:14:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:14:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:14:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:14:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:14:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:14:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:54 compute-0 ceph-mon[75677]: pgmap v936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.540260) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015294540325, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2314, "num_deletes": 251, "total_data_size": 2843111, "memory_usage": 2891944, "flush_reason": "Manual Compaction"}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015294595849, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2775775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23148, "largest_seqno": 25461, "table_properties": {"data_size": 2765930, "index_size": 5635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 27252, "raw_average_key_size": 22, "raw_value_size": 2743269, "raw_average_value_size": 2230, "num_data_blocks": 248, "num_entries": 1230, "num_filter_entries": 1230, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015132, "oldest_key_time": 1764015132, "file_creation_time": 1764015294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 55663 microseconds, and 13127 cpu microseconds.
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.595921) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2775775 bytes OK
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.595952) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.605457) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.605483) EVENT_LOG_v1 {"time_micros": 1764015294605474, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.605509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2832686, prev total WAL file size 2832686, number of live WAL files 2.
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.607163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2710KB)], [50(8343KB)]
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015294607244, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11319074, "oldest_snapshot_seqno": -1}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 7421 keys, 9862890 bytes, temperature: kUnknown
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015294806431, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9862890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9815762, "index_size": 27496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 194142, "raw_average_key_size": 26, "raw_value_size": 9682497, "raw_average_value_size": 1304, "num_data_blocks": 1105, "num_entries": 7421, "num_filter_entries": 7421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.806891) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9862890 bytes
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.825730) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.8 rd, 49.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 7935, records dropped: 514 output_compression: NoCompression
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.825777) EVENT_LOG_v1 {"time_micros": 1764015294825757, "job": 26, "event": "compaction_finished", "compaction_time_micros": 199376, "compaction_time_cpu_micros": 45772, "output_level": 6, "num_output_files": 1, "total_output_size": 9862890, "num_input_records": 7935, "num_output_records": 7421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015294827076, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015294830065, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.607080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.830158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.830164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.830167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.830170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:54 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:54.830173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:54.932+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:55.082+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:55.967+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:56.068+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:56 compute-0 ceph-mon[75677]: pgmap v937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.814168) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015296814225, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 280, "num_deletes": 250, "total_data_size": 47181, "memory_usage": 53696, "flush_reason": "Manual Compaction"}
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015296826415, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 46639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25462, "largest_seqno": 25741, "table_properties": {"data_size": 44757, "index_size": 111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5234, "raw_average_key_size": 19, "raw_value_size": 41042, "raw_average_value_size": 152, "num_data_blocks": 5, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015295, "oldest_key_time": 1764015295, "file_creation_time": 1764015296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12352 microseconds, and 1639 cpu microseconds.
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.826513) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 46639 bytes OK
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.826552) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.837972) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.837999) EVENT_LOG_v1 {"time_micros": 1764015296837989, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.838033) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 45067, prev total WAL file size 45067, number of live WAL files 2.
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.838786) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(45KB)], [53(9631KB)]
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015296838839, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9909529, "oldest_snapshot_seqno": -1}
Nov 24 20:14:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:56.941+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 7184 keys, 6575883 bytes, temperature: kUnknown
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015296994068, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 6575883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6534980, "index_size": 21847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 189691, "raw_average_key_size": 26, "raw_value_size": 6410448, "raw_average_value_size": 892, "num_data_blocks": 858, "num_entries": 7184, "num_filter_entries": 7184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:14:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.994374) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 6575883 bytes
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.009749) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.8 rd, 42.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.4 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(353.5) write-amplify(141.0) OK, records in: 7691, records dropped: 507 output_compression: NoCompression
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.009778) EVENT_LOG_v1 {"time_micros": 1764015297009765, "job": 28, "event": "compaction_finished", "compaction_time_micros": 155320, "compaction_time_cpu_micros": 36566, "output_level": 6, "num_output_files": 1, "total_output_size": 6575883, "num_input_records": 7691, "num_output_records": 7184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015297009944, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015297012369, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:56.838582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.012504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.012515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.012519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.012522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:14:57.012526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:14:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:57.076+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:57.933+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:58.067+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:58 compute-0 ceph-mon[75677]: pgmap v938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:14:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:58.966+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:14:59.058+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:14:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:14:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:14:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:14:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:14:59.982+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:14:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:00.050+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:00 compute-0 ceph-mon[75677]: pgmap v939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:00.953+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:01.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:01 compute-0 podman[261616]: 2025-11-24 20:15:01.861853281 +0000 UTC m=+0.085727979 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 20:15:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:01.944+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:02.102+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:02 compute-0 ceph-mon[75677]: pgmap v940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:02.956+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:03.152+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:03.940+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:04 compute-0 nova_compute[257476]: 2025-11-24 20:15:04.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:04.194+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:04 compute-0 sshd-session[261637]: Invalid user admin from 27.79.44.141 port 43842
Nov 24 20:15:04 compute-0 ceph-mon[75677]: pgmap v941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:04.950+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:05 compute-0 sshd-session[261637]: Connection closed by invalid user admin 27.79.44.141 port 43842 [preauth]
Nov 24 20:15:05 compute-0 nova_compute[257476]: 2025-11-24 20:15:05.147 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:05.149+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:05 compute-0 nova_compute[257476]: 2025-11-24 20:15:05.164 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:05 compute-0 nova_compute[257476]: 2025-11-24 20:15:05.165 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:05 compute-0 nova_compute[257476]: 2025-11-24 20:15:05.165 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:05.966+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:06.158+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.186 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.186 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.187 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.187 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.188 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:15:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:15:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183891798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.666 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:15:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1422 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:06 compute-0 ceph-mon[75677]: pgmap v942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:06 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2183891798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:15:06 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1422 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.910 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.913 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.914 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.914 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:15:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:06.951+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.987 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:15:06 compute-0 nova_compute[257476]: 2025-11-24 20:15:06.988 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:15:07 compute-0 nova_compute[257476]: 2025-11-24 20:15:07.013 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:15:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:07.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:15:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002716991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:15:07 compute-0 nova_compute[257476]: 2025-11-24 20:15:07.491 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
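
[annotation] The processutils pair above ("Running cmd" / "returned: 0 in 0.479s") is nova's RBD image backend sizing the cluster by shelling out to the ceph CLI. A standalone equivalent, reusing the exact command from the log and assuming the standard ceph df JSON schema:

    import json
    import subprocess

    # Command copied verbatim from the log line above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    # 'stats' carries cluster-wide totals; 'pools' the per-pool breakdown.
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
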
Nov 24 20:15:07 compute-0 nova_compute[257476]: 2025-11-24 20:15:07.499 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:15:07 compute-0 nova_compute[257476]: 2025-11-24 20:15:07.518 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
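
[annotation] The inventory dict above is what placement actually schedules against; for each resource class the schedulable amount is (total - reserved) * allocation_ratio. Worked through for the logged values:

    # Capacity implied by the inventory reported to placement above.
    inv = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1
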
Nov 24 20:15:07 compute-0 nova_compute[257476]: 2025-11-24 20:15:07.521 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:15:07 compute-0 nova_compute[257476]: 2025-11-24 20:15:07.522 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:15:07 compute-0 podman[261683]: 2025-11-24 20:15:07.90535893 +0000 UTC m=+0.127635497 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible)
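
[annotation] The podman event above is a passing healthcheck for ovn_controller: the embedded config_data shows the probe is just the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_controller, and health_failing_streak=0 means no recent failures. The same state is queryable after the fact; a sketch using podman inspect (container name from the log; the Health fields assume the standard podman/docker inspect layout):

    import json
    import subprocess

    # 'ovn_controller' is the container_name from the event above.
    out = subprocess.check_output(['podman', 'inspect', 'ovn_controller'])
    health = json.loads(out)[0]['State']['Health']
    # Status is "healthy"/"unhealthy"; FailingStreak mirrors the
    # health_failing_streak field in the journal event.
    print(health['Status'], health['FailingStreak'])
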
Nov 24 20:15:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:07.905+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2002716991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:15:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:08.110+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:08 compute-0 nova_compute[257476]: 2025-11-24 20:15:08.523 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:08 compute-0 nova_compute[257476]: 2025-11-24 20:15:08.524 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:15:08 compute-0 nova_compute[257476]: 2025-11-24 20:15:08.524 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:15:08 compute-0 nova_compute[257476]: 2025-11-24 20:15:08.524 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:15:08 compute-0 nova_compute[257476]: 2025-11-24 20:15:08.542 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
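
[annotation] The five manager lines above are one housekeeping tick of nova-compute: oslo.service fires _check_instance_build_time and _heal_instance_info_cache in turn, and the cache healer exits immediately because this host has no instances yet. A minimal sketch of the periodic-task machinery involved (the 60s spacing is an assumption, the real interval is configurable; names other than the decorator are illustrative):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # With no instances on the host, nova logs "Didn't find any
            # instances for network info cache update." and returns.
            pass
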
Nov 24 20:15:08 compute-0 ceph-mon[75677]: pgmap v943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:08.950+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:09.142+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:15:09.368 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:15:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:15:09.368 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:15:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:15:09.369 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:15:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:09.944+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:10.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:10.914+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:10 compute-0 ceph-mon[75677]: pgmap v944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:11.103+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1427 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:11.951+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:11 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1427 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
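
[annotation] The mon keeps re-announcing the same SLOW_OPS health check with a growing age: the oldest op has now been blocked for 1427 seconds, so the two stuck ops shown by osd.0 and osd.1 predate this log window by well over twenty minutes. The structured form of that warning is available from the health endpoint; a sketch that reads it, assuming the standard ceph health JSON schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'health', 'detail', '--format=json'])
    checks = json.loads(out).get('checks', {})
    slow = checks.get('SLOW_OPS')
    if slow:
        # e.g. "20 slow ops, oldest one blocked for 1427 sec, ..."
        print(slow['summary']['message'])
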
Nov 24 20:15:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:12.089+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:12.912+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:12 compute-0 ceph-mon[75677]: pgmap v945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:13.061+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:13.870+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:14.101+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:14.874+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:14 compute-0 ceph-mon[75677]: pgmap v946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:15.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:15.873+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:15 compute-0 ceph-mon[75677]: pgmap v947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:16.077+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:15:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2110132158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:15:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:15:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2110132158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
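
[annotation] Besides df, this second client.openstack connection (from 192.168.122.10 rather than the compute's 192.168.122.100, plausibly a cinder-volume host) also asks the mon for the quota on the volumes pool, consistent with the Cinder RBD driver capping reported capacity by pool quota when one is set. The same query from the CLI, wrapped for parsing (pool name from the log):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format=json'])
    quota = json.loads(out)
    # Both values are 0 when no quota is set on the pool.
    print(quota['quota_max_bytes'], quota['quota_max_objects'])
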
Nov 24 20:15:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:16.923+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2110132158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:15:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2110132158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:15:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:17.086+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:17.895+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:18 compute-0 ceph-mon[75677]: pgmap v948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:18.093+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:18.941+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:19.139+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:19.965+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:20 compute-0 ceph-mon[75677]: pgmap v949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:20.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:20.980+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:21.141+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1437 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:21 compute-0 podman[261711]: 2025-11-24 20:15:21.869352629 +0000 UTC m=+0.072120382 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 20:15:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:21.958+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:22 compute-0 ceph-mon[75677]: pgmap v950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1437 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:22.133+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:23.004+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:23.140+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:24.022+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:24 compute-0 ceph-mon[75677]: pgmap v951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:24.174+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:15:24
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:15:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
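
[annotation] The mgr block above is a no-op balancer pass: in upmap mode it scans every pool listed, finds nothing above its 5% misplaced ceiling (max misplaced 0.050000), and prepares 0 of a possible 10 changes, so no upmap entries are written. Its state can be checked directly; a one-liner sketch:

    import subprocess

    # Reports mode (upmap here), whether the balancer is active, and the
    # last optimization attempt; plans only appear when PGs can be improved.
    print(subprocess.check_output(['ceph', 'balancer', 'status']).decode())
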
Nov 24 20:15:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:25.064+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:25.130+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:26.038+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:26 compute-0 ceph-mon[75677]: pgmap v952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:26.177+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:27.060+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:27 compute-0 rsyslogd[1003]: imjournal from <np0005534003:ceph-mon>: begin to drop messages due to rate-limiting
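
[annotation] rsyslogd starts dropping ceph-mon journal messages here: every slow-request line is emitted more than once (by the OSD directly and again re-logged through the mon's cluster log), and that steady stream is enough to trip imjournal's default rate limit. If the full stream is wanted, the imjournal thresholds are tunable; a fragment of /etc/rsyslog.conf as a sketch, with example values only:

    # /etc/rsyslog.conf -- example values, not a recommendation
    module(load="imjournal"
           ratelimit.interval="600"    # window in seconds; 0 disables
           ratelimit.burst="50000")    # messages allowed per window
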
Nov 24 20:15:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:27.185+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:28.023+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:28 compute-0 ceph-mon[75677]: pgmap v953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:28.151+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:28.989+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:29.118+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:30.009+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:30.101+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:30 compute-0 ceph-mon[75677]: pgmap v954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:31.031+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:31.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1452 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:32.009+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:32.144+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:32 compute-0 ceph-mon[75677]: pgmap v955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1452 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:32 compute-0 podman[261733]: 2025-11-24 20:15:32.875854732 +0000 UTC m=+0.099094199 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 20:15:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:33.036+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:33.152+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:33 compute-0 sudo[261753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:33 compute-0 sudo[261753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:33 compute-0 sudo[261753]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:33 compute-0 sudo[261778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:15:33 compute-0 sudo[261778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:33 compute-0 sudo[261778]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:33 compute-0 sudo[261803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:33 compute-0 sudo[261803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:33 compute-0 sudo[261803]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:33 compute-0 sudo[261828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:15:33 compute-0 sudo[261828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:33 compute-0 sshd-session[261731]: Invalid user admin from 27.79.44.141 port 45068
Nov 24 20:15:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:34.027+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:34 compute-0 sshd-session[261731]: Connection closed by invalid user admin 27.79.44.141 port 45068 [preauth]
Nov 24 20:15:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:34.128+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:34 compute-0 ceph-mon[75677]: pgmap v956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:34 compute-0 sudo[261828]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:15:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:15:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:15:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:15:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:15:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5f380adb-fbed-420c-aa74-47851f9c4d9d does not exist
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e6d5aca4-733c-432c-aa73-7068cb863d42 does not exist
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0152f88b-913f-47f7-a621-c740694e5475 does not exist
Nov 24 20:15:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:15:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:15:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:15:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:15:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:15:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:15:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:15:34 compute-0 sudo[261886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:34 compute-0 sudo[261886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:34 compute-0 sudo[261886]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:34 compute-0 sudo[261911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:15:34 compute-0 sudo[261911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:34 compute-0 sudo[261911]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:34 compute-0 sudo[261936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:34 compute-0 sudo[261936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:34 compute-0 sudo[261936]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:34 compute-0 sudo[261961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:15:34 compute-0 sudo[261961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:35.061+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:35.145+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:15:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:15:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:15:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:15:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:15:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.413858045 +0000 UTC m=+0.067973061 container create 3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:15:35 compute-0 systemd[1]: Started libpod-conmon-3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343.scope.
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.380063796 +0000 UTC m=+0.034178832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:15:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.543263968 +0000 UTC m=+0.197379004 container init 3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.556309869 +0000 UTC m=+0.210424895 container start 3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.560626866 +0000 UTC m=+0.214741932 container attach 3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:15:35 compute-0 silly_meninsky[262043]: 167 167
Nov 24 20:15:35 compute-0 systemd[1]: libpod-3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343.scope: Deactivated successfully.
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.567674555 +0000 UTC m=+0.221789571 container died 3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:15:35 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55dde732eb9adbdef0ce901828358516011cda8118067ddffbc79bb3ff0ce3e-merged.mount: Deactivated successfully.
Nov 24 20:15:35 compute-0 podman[262027]: 2025-11-24 20:15:35.624907956 +0000 UTC m=+0.279022942 container remove 3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:15:35 compute-0 systemd[1]: libpod-conmon-3aef13dd40e4a21b42bf9228feea3462dc46bfa8852d0639c29aefefe2357343.scope: Deactivated successfully.
Nov 24 20:15:35 compute-0 podman[262067]: 2025-11-24 20:15:35.862774328 +0000 UTC m=+0.078572345 container create ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:15:35 compute-0 systemd[1]: Started libpod-conmon-ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c.scope.
Nov 24 20:15:35 compute-0 podman[262067]: 2025-11-24 20:15:35.833186042 +0000 UTC m=+0.048984119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:15:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a3b1131e45e3802779b9b1170c53e3ec3d7cdd7b8ffe62d5f9348bb15bf142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a3b1131e45e3802779b9b1170c53e3ec3d7cdd7b8ffe62d5f9348bb15bf142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a3b1131e45e3802779b9b1170c53e3ec3d7cdd7b8ffe62d5f9348bb15bf142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a3b1131e45e3802779b9b1170c53e3ec3d7cdd7b8ffe62d5f9348bb15bf142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a3b1131e45e3802779b9b1170c53e3ec3d7cdd7b8ffe62d5f9348bb15bf142/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:35 compute-0 podman[262067]: 2025-11-24 20:15:35.981172395 +0000 UTC m=+0.196970422 container init ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:15:36 compute-0 podman[262067]: 2025-11-24 20:15:36.002719165 +0000 UTC m=+0.218517192 container start ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:15:36 compute-0 podman[262067]: 2025-11-24 20:15:36.007351839 +0000 UTC m=+0.223149836 container attach ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhaskara, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:15:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:36.064+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:36.159+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:36 compute-0 ceph-mon[75677]: pgmap v957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:37.092+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:37.205+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:37 compute-0 lucid_bhaskara[262083]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:15:37 compute-0 lucid_bhaskara[262083]: --> relative data size: 1.0
Nov 24 20:15:37 compute-0 lucid_bhaskara[262083]: --> All data devices are unavailable
Nov 24 20:15:37 compute-0 systemd[1]: libpod-ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c.scope: Deactivated successfully.
Nov 24 20:15:37 compute-0 podman[262067]: 2025-11-24 20:15:37.257450938 +0000 UTC m=+1.473248935 container died ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:15:37 compute-0 systemd[1]: libpod-ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c.scope: Consumed 1.210s CPU time.
Nov 24 20:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a3b1131e45e3802779b9b1170c53e3ec3d7cdd7b8ffe62d5f9348bb15bf142-merged.mount: Deactivated successfully.
Nov 24 20:15:37 compute-0 podman[262067]: 2025-11-24 20:15:37.318260994 +0000 UTC m=+1.534058981 container remove ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:15:37 compute-0 systemd[1]: libpod-conmon-ce607a8633f00f41de883e56183eef3b0b1d24f7fa455e4f1ac05853b74b098c.scope: Deactivated successfully.
Nov 24 20:15:37 compute-0 sudo[261961]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:37 compute-0 sudo[262126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:37 compute-0 sudo[262126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:37 compute-0 sudo[262126]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:37 compute-0 sudo[262151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:15:37 compute-0 sudo[262151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:37 compute-0 sudo[262151]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:37 compute-0 sudo[262176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:37 compute-0 sudo[262176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:37 compute-0 sudo[262176]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:37 compute-0 sudo[262201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:15:37 compute-0 sudo[262201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:38.113+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:38 compute-0 ceph-mon[75677]: pgmap v958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:38.212+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.303493053 +0000 UTC m=+0.068385051 container create 5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:15:38 compute-0 systemd[1]: Started libpod-conmon-5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e.scope.
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.277472203 +0000 UTC m=+0.042364231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:15:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.404074331 +0000 UTC m=+0.168966389 container init 5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.418905139 +0000 UTC m=+0.183797177 container start 5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:15:38 compute-0 nostalgic_perlman[262285]: 167 167
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.425913019 +0000 UTC m=+0.190805087 container attach 5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:15:38 compute-0 systemd[1]: libpod-5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e.scope: Deactivated successfully.
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.42968347 +0000 UTC m=+0.194575498 container died 5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e471c2dcd85c124c1fe47e21667ab424e6d81f867a112bc24d1ffcc0773b86c6-merged.mount: Deactivated successfully.
Nov 24 20:15:38 compute-0 podman[262268]: 2025-11-24 20:15:38.491191345 +0000 UTC m=+0.256083383 container remove 5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 20:15:38 compute-0 podman[262282]: 2025-11-24 20:15:38.516477926 +0000 UTC m=+0.155389173 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:15:38 compute-0 systemd[1]: libpod-conmon-5bfe10ab5530ab909a54c8a83f734965f9529023cf5a7fbf06a64a76125e928e.scope: Deactivated successfully.
Nov 24 20:15:38 compute-0 podman[262334]: 2025-11-24 20:15:38.771402578 +0000 UTC m=+0.073012567 container create dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:15:38 compute-0 systemd[1]: Started libpod-conmon-dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f.scope.
Nov 24 20:15:38 compute-0 podman[262334]: 2025-11-24 20:15:38.741976316 +0000 UTC m=+0.043586375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:15:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e598603bc680636755a1c5b8336d789ec64eb83f4ea054a59911b456e03c5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e598603bc680636755a1c5b8336d789ec64eb83f4ea054a59911b456e03c5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e598603bc680636755a1c5b8336d789ec64eb83f4ea054a59911b456e03c5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e598603bc680636755a1c5b8336d789ec64eb83f4ea054a59911b456e03c5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:38 compute-0 podman[262334]: 2025-11-24 20:15:38.896062323 +0000 UTC m=+0.197672372 container init dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:15:38 compute-0 podman[262334]: 2025-11-24 20:15:38.910902833 +0000 UTC m=+0.212512842 container start dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:15:38 compute-0 podman[262334]: 2025-11-24 20:15:38.916116723 +0000 UTC m=+0.217726792 container attach dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:15:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:39.158+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:39.228+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:39 compute-0 sweet_swartz[262350]: {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:     "0": [
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:         {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "devices": [
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "/dev/loop3"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             ],
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_name": "ceph_lv0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_size": "21470642176",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "name": "ceph_lv0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "tags": {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cluster_name": "ceph",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.crush_device_class": "",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.encrypted": "0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osd_id": "0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.type": "block",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.vdo": "0"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             },
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "type": "block",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "vg_name": "ceph_vg0"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:         }
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:     ],
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:     "1": [
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:         {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "devices": [
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "/dev/loop4"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             ],
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_name": "ceph_lv1",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_size": "21470642176",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "name": "ceph_lv1",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "tags": {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cluster_name": "ceph",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.crush_device_class": "",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.encrypted": "0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osd_id": "1",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.type": "block",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.vdo": "0"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             },
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "type": "block",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "vg_name": "ceph_vg1"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:         }
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:     ],
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:     "2": [
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:         {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "devices": [
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "/dev/loop5"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             ],
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_name": "ceph_lv2",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_size": "21470642176",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "name": "ceph_lv2",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "tags": {
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.cluster_name": "ceph",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.crush_device_class": "",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.encrypted": "0",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osd_id": "2",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.type": "block",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:                 "ceph.vdo": "0"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             },
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "type": "block",
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:             "vg_name": "ceph_vg2"
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:         }
Nov 24 20:15:39 compute-0 sweet_swartz[262350]:     ]
Nov 24 20:15:39 compute-0 sweet_swartz[262350]: }
Nov 24 20:15:39 compute-0 systemd[1]: libpod-dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f.scope: Deactivated successfully.
Nov 24 20:15:39 compute-0 podman[262334]: 2025-11-24 20:15:39.815041889 +0000 UTC m=+1.116651898 container died dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9e598603bc680636755a1c5b8336d789ec64eb83f4ea054a59911b456e03c5b-merged.mount: Deactivated successfully.
Nov 24 20:15:39 compute-0 podman[262334]: 2025-11-24 20:15:39.90278562 +0000 UTC m=+1.204395619 container remove dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:15:39 compute-0 systemd[1]: libpod-conmon-dd273e6b39ae7ef901becbe613344b07926fc5212836d7e757d90c33ba0f868f.scope: Deactivated successfully.
Nov 24 20:15:39 compute-0 sudo[262201]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:40 compute-0 sudo[262372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:40 compute-0 sudo[262372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:40 compute-0 sudo[262372]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:40 compute-0 sudo[262397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:15:40 compute-0 sudo[262397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:40 compute-0 sudo[262397]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:40.200+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:40 compute-0 ceph-mon[75677]: pgmap v959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:40.249+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:40 compute-0 sudo[262422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:40 compute-0 sudo[262422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:40 compute-0 sudo[262422]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:40 compute-0 sudo[262447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:15:40 compute-0 sudo[262447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:15:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:15:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:15:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:15:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.768979285 +0000 UTC m=+0.061211189 container create 89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:15:40 compute-0 systemd[1]: Started libpod-conmon-89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f.scope.
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.741928737 +0000 UTC m=+0.034160631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:15:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.865323169 +0000 UTC m=+0.157555113 container init 89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.877298951 +0000 UTC m=+0.169530865 container start 89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.882910452 +0000 UTC m=+0.175142366 container attach 89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:15:40 compute-0 dreamy_wescoff[262526]: 167 167
Nov 24 20:15:40 compute-0 systemd[1]: libpod-89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f.scope: Deactivated successfully.
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.885759168 +0000 UTC m=+0.177991082 container died 89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-412a6cc2cec976c8c5776231672f0425bc5962beb2e80f15a44307d1682807fe-merged.mount: Deactivated successfully.
Nov 24 20:15:40 compute-0 podman[262510]: 2025-11-24 20:15:40.938083157 +0000 UTC m=+0.230315031 container remove 89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:15:40 compute-0 systemd[1]: libpod-conmon-89bd31acbf3bef04b3356ab7b0f6f9332298cb77cdcbf89ef0726c8cddf0f39f.scope: Deactivated successfully.
Nov 24 20:15:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:41 compute-0 podman[262550]: 2025-11-24 20:15:41.203675585 +0000 UTC m=+0.074485145 container create 1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:15:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:41.205+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:41.233+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:41 compute-0 systemd[1]: Started libpod-conmon-1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843.scope.
Nov 24 20:15:41 compute-0 podman[262550]: 2025-11-24 20:15:41.175051255 +0000 UTC m=+0.045860865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:15:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0042996278a4957ec92f8306652015f97c830197e8d1375b155027a2f5928504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0042996278a4957ec92f8306652015f97c830197e8d1375b155027a2f5928504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0042996278a4957ec92f8306652015f97c830197e8d1375b155027a2f5928504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0042996278a4957ec92f8306652015f97c830197e8d1375b155027a2f5928504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:15:41 compute-0 podman[262550]: 2025-11-24 20:15:41.309909305 +0000 UTC m=+0.180718855 container init 1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 20:15:41 compute-0 podman[262550]: 2025-11-24 20:15:41.323040808 +0000 UTC m=+0.193850358 container start 1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:15:41 compute-0 podman[262550]: 2025-11-24 20:15:41.327689094 +0000 UTC m=+0.198498654 container attach 1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:15:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:42.167+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:42.243+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:42 compute-0 ceph-mon[75677]: pgmap v960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:42 compute-0 distracted_bouman[262566]: {
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "osd_id": 2,
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "type": "bluestore"
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:     },
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "osd_id": 1,
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "type": "bluestore"
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:     },
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "osd_id": 0,
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:         "type": "bluestore"
Nov 24 20:15:42 compute-0 distracted_bouman[262566]:     }
Nov 24 20:15:42 compute-0 distracted_bouman[262566]: }
Nov 24 20:15:42 compute-0 systemd[1]: libpod-1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843.scope: Deactivated successfully.
Nov 24 20:15:42 compute-0 podman[262550]: 2025-11-24 20:15:42.518550647 +0000 UTC m=+1.389360167 container died 1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:15:42 compute-0 systemd[1]: libpod-1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843.scope: Consumed 1.206s CPU time.
Nov 24 20:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0042996278a4957ec92f8306652015f97c830197e8d1375b155027a2f5928504-merged.mount: Deactivated successfully.
Nov 24 20:15:42 compute-0 podman[262550]: 2025-11-24 20:15:42.594632645 +0000 UTC m=+1.465442195 container remove 1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:15:42 compute-0 systemd[1]: libpod-conmon-1a0f215a664b57cdf26e7e0cb879e5663d31cb7db1e961e14e752efcd9abb843.scope: Deactivated successfully.
Nov 24 20:15:42 compute-0 sudo[262447]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:15:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:15:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:15:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:15:42 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a2edd1b7-415d-4db3-9957-994d95c58a51 does not exist
Nov 24 20:15:42 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1427a656-7b9c-47a5-b5b6-5aa57053bf1c does not exist
Nov 24 20:15:42 compute-0 sudo[262612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:15:42 compute-0 sudo[262612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:42 compute-0 sudo[262612]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:42 compute-0 sudo[262637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:15:42 compute-0 sudo[262637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:15:42 compute-0 sudo[262637]: pam_unix(sudo:session): session closed for user root
Nov 24 20:15:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:43.156+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:43.205+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:15:43 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:15:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:44.117+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:44.240+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:44 compute-0 ceph-mon[75677]: pgmap v961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:45.141+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:45.232+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:46.186+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:46.232+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:46 compute-0 ceph-mon[75677]: pgmap v962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:47.147+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:47.250+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:48.177+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:48.203+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:48 compute-0 ceph-mon[75677]: pgmap v963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:49.204+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:49.247+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:50.212+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:50.250+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:50 compute-0 ceph-mon[75677]: pgmap v964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:51.214+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:51.252+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1472 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:52.212+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:52.300+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:52 compute-0 ceph-mon[75677]: pgmap v965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1472 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:52 compute-0 podman[262662]: 2025-11-24 20:15:52.876650208 +0000 UTC m=+0.096549690 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
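[annotation] The podman line above is a container healthcheck event: the config_data embedded in it defines the check as running '/openstack/healthcheck' inside the ovn_metadata_agent container, and the event reports health_status=healthy with health_failing_streak=0. A hedged sketch of reading that same state on demand; it assumes a recent podman where the inspect JSON exposes .State.Health (older releases used .State.Healthcheck instead), and uses the container name exactly as logged:

    #!/usr/bin/env python3
    # Hedged sketch: query a container's healthcheck state via podman inspect.
    import json
    import subprocess

    def health_state(container: str) -> dict:
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            text=True,
        )
        return json.loads(out)

    if __name__ == "__main__":
        state = health_state("ovn_metadata_agent")
        # Typical fields: Status ("healthy"/"unhealthy"), FailingStreak, Log;
        # these mirror the health_status / health_failing_streak values logged.
        print(state.get("Status"), "failing streak:", state.get("FailingStreak"))
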
Nov 24 20:15:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:53.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:53.260+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:54.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:54.304+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:54 compute-0 ceph-mon[75677]: pgmap v966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:15:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:15:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:15:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:15:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:15:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:15:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:55.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:55.294+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:56.180+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:56.340+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:56 compute-0 ceph-mon[75677]: pgmap v967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:15:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:57.196+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1477 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:57.376+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:58.175+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:58.377+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:58 compute-0 ceph-mon[75677]: pgmap v968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1477 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:15:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:15:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:15:59.197+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:15:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:15:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:15:59.375+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:15:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:15:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:00.175+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:00.332+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:00 compute-0 ceph-mon[75677]: pgmap v969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:01.168+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:01.376+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:02 compute-0 nova_compute[257476]: 2025-11-24 20:16:02.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:02 compute-0 nova_compute[257476]: 2025-11-24 20:16:02.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 20:16:02 compute-0 nova_compute[257476]: 2025-11-24 20:16:02.168 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 20:16:02 compute-0 nova_compute[257476]: 2025-11-24 20:16:02.169 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:02 compute-0 nova_compute[257476]: 2025-11-24 20:16:02.169 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 20:16:02 compute-0 nova_compute[257476]: 2025-11-24 20:16:02.180 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
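[annotation] The nova_compute entries above ("Running periodic task ComputeManager._run_pending_deletes", "_cleanup_incomplete_migrations", and so on, all via run_periodic_tasks in oslo_service/periodic_task.py) are produced by oslo.service's periodic-task machinery. A minimal sketch of that pattern; the class and method names here (DemoManager, _cleanup) are illustrative, not nova's, and only the oslo_service.periodic_task API usage mirrors the real code:

    # Hedged sketch of the oslo.service periodic-task pattern.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # run roughly every 60 seconds
        def _cleanup(self, context):
            # nova's real tasks such as _run_pending_deletes do their work here
            print("periodic cleanup ran")

    # The service's main loop periodically calls
    #   manager.run_periodic_tasks(context)
    # which is the run_periodic_tasks frame visible in the logged paths.
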
Nov 24 20:16:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:02.188+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:02.344+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:02 compute-0 ceph-mon[75677]: pgmap v970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:03.162+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:03.334+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:03 compute-0 podman[262681]: 2025-11-24 20:16:03.884491561 +0000 UTC m=+0.105830993 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:16:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:04.170+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:04.340+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:04 compute-0 ceph-mon[75677]: pgmap v971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:05.183+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:05 compute-0 nova_compute[257476]: 2025-11-24 20:16:05.189 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:05 compute-0 nova_compute[257476]: 2025-11-24 20:16:05.190 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:05.303+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.185 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.185 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.186 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.186 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.187 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:16:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:06.190+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:06.264+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:06 compute-0 ceph-mon[75677]: pgmap v972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:16:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909490860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.671 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
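[annotation] The two lines above show nova's resource audit shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (the mon's audit channel records the matching dispatch). A hedged sketch of the same probe, assuming the same cephx user and conf path as the log; the JSON key names ("stats", "total_bytes", "total_avail_bytes") match recent Ceph releases but should be verified against the cluster's version:

    #!/usr/bin/env python3
    # Hedged sketch: replicate nova's "ceph df" storage probe and read totals.
    import json
    import subprocess

    def ceph_df(user: str = "openstack", conf: str = "/etc/ceph/ceph.conf") -> dict:
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            text=True,
        )
        return json.loads(out)

    if __name__ == "__main__":
        stats = ceph_df()["stats"]
        gib = 1024 ** 3
        # Compare with the pgmap lines: "148 MiB used, 60 GiB / 60 GiB avail".
        print(f"total {stats['total_bytes'] / gib:.0f} GiB, "
              f"avail {stats['total_avail_bytes'] / gib:.0f} GiB")
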
Nov 24 20:16:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.923 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.924 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.925 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:16:06 compute-0 nova_compute[257476]: 2025-11-24 20:16:06.925 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:16:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:07.143+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:07.238+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.246 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.247 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.345 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing inventories for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.460 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Updating ProviderTree inventory for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.462 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:16:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2909490860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:16:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.486 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing aggregate associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.515 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing trait associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, traits: HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 20:16:07 compute-0 nova_compute[257476]: 2025-11-24 20:16:07.540 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:16:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:16:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1245099460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:16:08 compute-0 nova_compute[257476]: 2025-11-24 20:16:08.059 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:16:08 compute-0 nova_compute[257476]: 2025-11-24 20:16:08.067 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:16:08 compute-0 nova_compute[257476]: 2025-11-24 20:16:08.088 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:16:08 compute-0 nova_compute[257476]: 2025-11-24 20:16:08.091 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:16:08 compute-0 nova_compute[257476]: 2025-11-24 20:16:08.092 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:16:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:08.138+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:08.263+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:08 compute-0 ceph-mon[75677]: pgmap v973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1245099460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:16:08 compute-0 podman[262746]: 2025-11-24 20:16:08.912221574 +0000 UTC m=+0.141032398 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.093 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.094 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.095 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.095 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.114 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.115 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:09 compute-0 nova_compute[257476]: 2025-11-24 20:16:09.116 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:16:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:09.122+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:09.278+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:16:09.369 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:16:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:16:09.370 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:16:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:16:09.370 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:16:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:10.162+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:10.271+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:10 compute-0 ceph-mon[75677]: pgmap v974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:11.201+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:11.275+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1492 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:12.160+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:12.227+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:12 compute-0 ceph-mon[75677]: pgmap v975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1492 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:13.128+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:13.267+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:14.090+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:14.289+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:14 compute-0 ceph-mon[75677]: pgmap v976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:14 compute-0 sshd-session[262773]: Invalid user admin from 27.79.44.141 port 35268
Nov 24 20:16:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:15.053+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:15.253+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:15 compute-0 sshd-session[262773]: Connection closed by invalid user admin 27.79.44.141 port 35268 [preauth]
Nov 24 20:16:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:16.013+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:16.256+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:16:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294314589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:16:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:16:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294314589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:16:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:16 compute-0 ceph-mon[75677]: pgmap v977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3294314589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:16:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3294314589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:16:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:17.044+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:17.257+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1497 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:18.031+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:18.269+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:18 compute-0 ceph-mon[75677]: pgmap v978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1497 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:19.047+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:19.274+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:20.060+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:20.246+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:20 compute-0 ceph-mon[75677]: pgmap v979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:21.061+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:21.278+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:22.076+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:22.300+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:22 compute-0 ceph-mon[75677]: pgmap v980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:23.032+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:23.269+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:23 compute-0 podman[262775]: 2025-11-24 20:16:23.884354001 +0000 UTC m=+0.097563711 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:16:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:24.021+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:24.312+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:16:24
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', 'default.rgw.meta']
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:16:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:16:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:24 compute-0 ceph-mon[75677]: pgmap v981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:25.050+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:25.265+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:26.053+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:26.282+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1502 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:26 compute-0 ceph-mon[75677]: pgmap v982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:26 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1502 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
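[Annotation] The SLOW_OPS age is the most useful number in this stretch: subtracting it from the message time dates the stall, and 20:16:26 minus 1502 s puts the oldest op at roughly 19:51:24, about 25 minutes earlier. The age then advances in lockstep with the clock (1507 at 20:16:31, 1512 at 20:16:36, 1517 at 20:16:41 below), the signature of an op making no progress rather than a queue that is merely slow. The arithmetic:

from datetime import datetime, timedelta

# Values from the health update above; the date comes from the ISO
# timestamps in the surrounding OSD lines.
reported_at = datetime(2025, 11, 24, 20, 16, 26)
print(reported_at - timedelta(seconds=1502))  # 2025-11-24 19:51:24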
Nov 24 20:16:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:27.016+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:27.313+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:27.998+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:28.267+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:28 compute-0 ceph-mon[75677]: pgmap v983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:29.025+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:29.269+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:30.021+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:30.267+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:30 compute-0 ceph-mon[75677]: pgmap v984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:31.050+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:31.279+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:31 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:32.017+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:32.291+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:32 compute-0 ceph-mon[75677]: pgmap v985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:32.994+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:33.340+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:33.999+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:34.347+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:16:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
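[Annotation] The autoscaler lines above are internally consistent: each printed "pg target" is capacity_ratio x bias x a PG budget of 300, and 300 is what three OSDs at the default mon_target_pg_per_osd of 100 would give (an inference; that config value is not in this log). The products reproduce every printed target, which is then quantized to a final PG count that here always matches the pool's current value, so no resizing is triggered:

# Reproducing the pg_autoscaler arithmetic from the lines above.
PG_BUDGET = 300  # inferred: ratio * bias * 300 matches every printed target

pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    ".rgw.root":          (2.5436283128215145e-07, 1.0),
    "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}

for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * PG_BUDGET)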
Nov 24 20:16:34 compute-0 podman[262796]: 2025-11-24 20:16:34.853761864 +0000 UTC m=+0.086069972 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
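[Annotation] The podman health_status event above embeds the container's config_data as a Python dict literal, so it parses cleanly with ast.literal_eval rather than ad-hoc string handling. The brace-matching helper below is mine and assumes, as holds for these lines, that no string value contains a brace:

import ast

def extract_config_data(event_line):
    """Pull the config_data={...} dict literal out of a podman event line."""
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(event_line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

# e.g. extract_config_data(line)["healthcheck"]["test"] -> '/openstack/healthcheck'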
Nov 24 20:16:34 compute-0 ceph-mon[75677]: pgmap v986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:35.044+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:35.333+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:36 compute-0 ceph-mon[75677]: pgmap v987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:36.008+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:36.350+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
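[Annotation] The mon's cache tuner line accounts for its budget almost exactly: 348127232 + 348127232 + 322961408 = 1019215872 of the 1020054731-byte cache_size, and kv_alloc is exactly 308 MiB, the same figure the RocksDB stats dump further down reports as block-cache capacity:

# Checking the _set_new_cache_sizes split from the mon line above.
cache_size = 1020054731
inc_alloc = full_alloc = 348127232
kv_alloc = 322961408

print(inc_alloc + full_alloc + kv_alloc)  # 1019215872, just under cache_size
print(kv_alloc / 2**20)                   # 308.0 -> the block cache's "308.00 MB"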
Nov 24 20:16:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:37.000+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:37.345+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:37.994+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:38 compute-0 ceph-mon[75677]: pgmap v988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:38.299+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:39.003+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:39 compute-0 sshd-session[262794]: Invalid user grid from 14.63.196.175 port 32938
Nov 24 20:16:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:39 compute-0 podman[262818]: 2025-11-24 20:16:39.201092298 +0000 UTC m=+0.144697156 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:16:39 compute-0 sshd-session[262794]: Received disconnect from 14.63.196.175 port 32938:11: Bye Bye [preauth]
Nov 24 20:16:39 compute-0 sshd-session[262794]: Disconnected from invalid user grid 14.63.196.175 port 32938 [preauth]
Nov 24 20:16:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:39.258+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:40.023+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:40 compute-0 ceph-mon[75677]: pgmap v989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:40 compute-0 sshd-session[262817]: Invalid user prueba from 185.156.73.233 port 41596
Nov 24 20:16:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:40.305+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:16:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:16:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:16:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:16:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:16:40 compute-0 sshd-session[262817]: Connection closed by invalid user prueba 185.156.73.233 port 41596 [preauth]
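[Annotation] Interleaved with the Ceph traffic, sshd is fending off routine credential probes: user "grid" from 14.63.196.175 and user "prueba" from 185.156.73.233, both dropped at preauth. A small tally over such lines (illustrative, not part of any tool in this log):

import re
from collections import Counter

INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def count_probes(lines):
    """Tally invalid-user SSH attempts by (source IP, username)."""
    hits = Counter()
    for line in lines:
        m = INVALID.search(line)
        if m:
            hits[(m.group(2), m.group(1))] += 1
    return hits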
Nov 24 20:16:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:40.991+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:41.265+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:16:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5452 writes, 27K keys, 5452 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5452 writes, 5452 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1838 writes, 9576 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 9.96 MB, 0.02 MB/s
                                           Interval WAL: 1838 writes, 1838 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     60.2      0.41              0.11        14    0.029       0      0       0.0       0.0
                                             L6      1/0    6.27 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.9    108.5     91.1      1.05              0.40        13    0.081     80K   6800       0.0       0.0
                                            Sum      1/0    6.27 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.9     78.1     82.4      1.46              0.51        27    0.054     80K   6800       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.3     78.8     77.6      0.77              0.25        14    0.055     49K   3605       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    108.5     91.1      1.05              0.40        13    0.081     80K   6800       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.5      0.40              0.11        13    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.11 GB read, 0.06 MB/s read, 1.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 308.00 MB usage: 9.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00022 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(641,9.08 MB,2.94673%) FilterBlock(28,239.98 KB,0.0760908%) IndexBlock(28,358.69 KB,0.113728%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
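[Annotation] The dump's headline rates check out: 0.03 GB ingested over the 1800 s uptime and 9.96 MB over the 600 s interval both round to the printed 0.02 MB/s, and the zero stall counters plus the tiny compaction volumes say the mon's RocksDB is essentially idle, which points the slow ops elsewhere. Quick check:

# Sanity-checking the RocksDB throughput figures from the dump above.
print(0.03 * 1024 / 1800)  # ~0.017 MB/s, printed as 0.02 MB/s
print(9.96 / 600)          # ~0.017 MB/s, printed as 0.02 MB/s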
Nov 24 20:16:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1517 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:41.989+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:42 compute-0 ceph-mon[75677]: pgmap v990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1517 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:42.244+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:42 compute-0 sudo[262846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:42 compute-0 sudo[262846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:42 compute-0 sudo[262846]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:43.020+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:43 compute-0 sudo[262871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:16:43 compute-0 sudo[262871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:43 compute-0 sudo[262871]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:43 compute-0 sudo[262896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:43 compute-0 sudo[262896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:43 compute-0 sudo[262896]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:43 compute-0 sudo[262921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:16:43 compute-0 sudo[262921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:43.268+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:43 compute-0 sudo[262921]: pam_unix(sudo:session): session closed for user root
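[Annotation] The sudo bursts are cephadm's ssh orchestration fingerprint: a no-op "sudo true" to probe access, "which python3" to locate an interpreter, then the staged cephadm binary run under a timeout (gather-facts here, ceph-volume at the end of this section). The probe chain is easy to mimic; this sketch is illustrative, not cephadm's actual code:

import subprocess

# The same probe order the sudo log shows: test sudo, then find python3.
subprocess.run(["sudo", "true"], check=True)
which = subprocess.run(["sudo", "which", "python3"],
                       capture_output=True, text=True, check=True)
print("interpreter:", which.stdout.strip())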
Nov 24 20:16:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:16:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:16:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:16:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:16:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:16:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:16:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 469fc6fa-ab41-45fa-ad65-43af3fa6193d does not exist
Nov 24 20:16:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev def8cb98-c071-4a25-8747-c233fc4a141f does not exist
Nov 24 20:16:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ad752874-9873-428a-99f3-1e388fd11aff does not exist
Nov 24 20:16:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:16:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:16:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:16:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:16:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:16:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
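[Annotation] The audit entries show the mgr driving the mon with structured JSON commands (config generate-minimal-conf, auth get, osd tree). The same interface is exposed by the librados Python binding; a minimal sketch assuming a reachable cluster, a local /etc/ceph/ceph.conf, and the default admin keyring:

import json
import rados

# Issue one of the mon commands the audit log shows being dispatched.
with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, errs)
    print(outbuf.decode())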
Nov 24 20:16:43 compute-0 sudo[262978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:43 compute-0 sudo[262978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:43 compute-0 sudo[262978]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:44.038+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:44 compute-0 ceph-mon[75677]: pgmap v991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:16:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:16:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:16:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:16:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:16:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:16:44 compute-0 sudo[263003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:16:44 compute-0 sudo[263003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:44 compute-0 sudo[263003]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:44 compute-0 sudo[263028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:44 compute-0 sudo[263028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:44 compute-0 sudo[263028]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:44.279+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:44 compute-0 sudo[263053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:16:44 compute-0 sudo[263053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
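
[editor's note] The sudo entry above is the orchestrator's actual OSD-creation attempt: as the ceph-admin user it runs the cluster's copied cephadm file, which launches ceph-volume lvm batch inside the Ceph image over three pre-created logical volumes. --no-auto disables the drive-grouping heuristics, --no-systemd leaves unit management to cephadm, the spec name travels in CEPH_VOLUME_OSDSPEC_AFFINITY, and --config-json - means the conf and keyring arrive on stdin. A sketch of the same invocation from Python; every flag and path is copied from the log line, while the stdin payload shown is a placeholder for the {"config", "keyring"} document the mgr actually streams (the shape of that document is an assumption):

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Placeholder payload: the orchestrator sends the minimal ceph.conf and
    # the client.bootstrap-osd keyring it fetched from the mon just above.
    config_json = json.dumps({"config": "# minimal ceph.conf ...",
                              "keyring": "# bootstrap-osd keyring ..."})

    subprocess.run(
        ["sudo", "/bin/python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        input=config_json.encode(), check=True)
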
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.766097167 +0000 UTC m=+0.061627506 container create 1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 20:16:44 compute-0 systemd[1]: Started libpod-conmon-1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842.scope.
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.732290789 +0000 UTC m=+0.027821198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:16:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.87124352 +0000 UTC m=+0.166773929 container init 1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.884339811 +0000 UTC m=+0.179870160 container start 1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.888537314 +0000 UTC m=+0.184067663 container attach 1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mccarthy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 20:16:44 compute-0 lucid_mccarthy[263134]: 167 167
Nov 24 20:16:44 compute-0 systemd[1]: libpod-1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842.scope: Deactivated successfully.
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.894234577 +0000 UTC m=+0.189764926 container died 1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a857acf8d26e3fc3a6c173cad344889639f41dd3d6166b12b27666af3e4b15ff-merged.mount: Deactivated successfully.
Nov 24 20:16:44 compute-0 podman[263118]: 2025-11-24 20:16:44.955780969 +0000 UTC m=+0.251311318 container remove 1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:16:44 compute-0 systemd[1]: libpod-conmon-1ae603215afd0059bd721ceef194c13206aef2da78a4b3087a8345ac673d8842.scope: Deactivated successfully.
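
[editor's note] The podman/systemd lines above bracket one complete short-lived helper container (lucid_mccarthy): create, init, start, attach, a single line of output, died, overlay unmount, remove, and both libpod scopes deactivated, all within roughly 200 ms. Its output "167 167" is evidently the uid and gid of the ceph user inside the image, which cephadm looks up before writing daemon files so host-side ownership matches the container. The same six lifecycle events recur for every helper below, so a small sketch for grouping them out of a journal dump; the regex assumes podman's event phrasing exactly as it appears in this log:

    import re
    from collections import defaultdict

    EVENT = re.compile(r"container (?P<ev>create|init|start|attach|died|remove)"
                       r" (?P<cid>[0-9a-f]{64})")

    def lifecycles(lines):
        """Group podman container events by container id, in log order."""
        seen = defaultdict(list)
        for line in lines:
            if (m := EVENT.search(line)):
                seen[m["cid"]].append(m["ev"])
        return seen

    # A fully torn-down helper shows:
    # ['create', 'init', 'start', 'attach', 'died', 'remove']
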
Nov 24 20:16:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:45.023+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:45 compute-0 podman[263158]: 2025-11-24 20:16:45.219484149 +0000 UTC m=+0.073388801 container create 24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:16:45 compute-0 systemd[1]: Started libpod-conmon-24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5.scope.
Nov 24 20:16:45 compute-0 podman[263158]: 2025-11-24 20:16:45.192531985 +0000 UTC m=+0.046436727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:16:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:45.297+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a6e31253ef3130b0806e5e361772bacf16ca78eb0eb95103a02aa3c5774255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a6e31253ef3130b0806e5e361772bacf16ca78eb0eb95103a02aa3c5774255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a6e31253ef3130b0806e5e361772bacf16ca78eb0eb95103a02aa3c5774255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a6e31253ef3130b0806e5e361772bacf16ca78eb0eb95103a02aa3c5774255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a6e31253ef3130b0806e5e361772bacf16ca78eb0eb95103a02aa3c5774255/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
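
[editor's note] The kernel notes above are informational: each bind mount into the new container touches an xfs created without the bigtime feature, so its inode timestamps max out at 2038-01-19 (0x7fffffff). Nothing needs fixing for this run. Whether a given xfs has the feature can be checked as below; this is a sketch assuming the root filesystem is the xfs in question, as on a stock RHEL 9 layout, and a recent xfsprogs whose xfs_info prints a bigtime= field:

    import re
    import subprocess

    # xfs_info wants a mount point (or device); "/" is assumed here.
    info = subprocess.run(["xfs_info", "/"],
                          capture_output=True, text=True, check=True).stdout
    m = re.search(r"bigtime=(\d)", info)
    print("timestamps OK past 2038" if m and m[1] == "1"
          else "timestamps limited to 2038 (no bigtime)")
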
Nov 24 20:16:45 compute-0 podman[263158]: 2025-11-24 20:16:45.364428401 +0000 UTC m=+0.218333103 container init 24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:16:45 compute-0 podman[263158]: 2025-11-24 20:16:45.374764188 +0000 UTC m=+0.228668880 container start 24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:16:45 compute-0 podman[263158]: 2025-11-24 20:16:45.378090657 +0000 UTC m=+0.231995579 container attach 24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:16:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:46.008+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:46 compute-0 ceph-mon[75677]: pgmap v992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:46.303+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:46 compute-0 blissful_kare[263174]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:16:46 compute-0 blissful_kare[263174]: --> relative data size: 1.0
Nov 24 20:16:46 compute-0 blissful_kare[263174]: --> All data devices are unavailable
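
[editor's note] The three lines from blissful_kare are the verdict of the lvm batch run prepared above: ceph-volume accepted the argument list (0 physical devices, 3 LVM), then filtered every LV out as unavailable, so no new OSD is created. That is the expected outcome when the LVs already carry BlueStore OSDs, which the lvm list output further down confirms via their ceph.osd_id tags. A hedged way to check the same condition directly with the LVM tools, assuming lvs is available and the tag name seen in this log:

    import subprocess

    def claimed_by_osd(lv_path):
        """Return the ceph.osd_id tag if the LV already belongs to an OSD."""
        out = subprocess.run(  # typically needs root
            ["lvs", "--noheadings", "-o", "lv_tags", lv_path],
            capture_output=True, text=True, check=True).stdout
        for tag in out.strip().split(","):
            if tag.startswith("ceph.osd_id="):
                return tag.split("=", 1)[1]
        return None

    for lv in ("/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
               "/dev/ceph_vg2/ceph_lv2"):
        print(lv, "-> osd", claimed_by_osd(lv))
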
Nov 24 20:16:46 compute-0 systemd[1]: libpod-24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5.scope: Deactivated successfully.
Nov 24 20:16:46 compute-0 systemd[1]: libpod-24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5.scope: Consumed 1.194s CPU time.
Nov 24 20:16:46 compute-0 podman[263158]: 2025-11-24 20:16:46.604255687 +0000 UTC m=+1.458160369 container died 24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-45a6e31253ef3130b0806e5e361772bacf16ca78eb0eb95103a02aa3c5774255-merged.mount: Deactivated successfully.
Nov 24 20:16:46 compute-0 podman[263158]: 2025-11-24 20:16:46.688871578 +0000 UTC m=+1.542776230 container remove 24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:16:46 compute-0 systemd[1]: libpod-conmon-24bed42bbda4217945b7049ce877aed53e0a53fe5ab0c9189dd78a44e6ad0ae5.scope: Deactivated successfully.
Nov 24 20:16:46 compute-0 sudo[263053]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1522 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
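
[editor's note] The mon now rolls both OSDs' reports into one health check: 20 slow ops (19 on osd.1 against default.rgw.log, 1 on osd.0 against vms), the oldest blocked for 1522 s, i.e. about 25 minutes. When SLOW_OPS persists this long, the usual next step is to inspect the ops the daemons actually hold. A hedged sketch via each OSD's admin socket, assuming the cephadm CLI is installed on the host; cephadm shell starts a container with the cluster config and the /var/run/ceph admin sockets mounted, and dump_ops_in_flight is a standard OSD admin-socket command:

    import json
    import subprocess

    def ops_in_flight(osd_id):
        """Ask an OSD's admin socket for its current in-flight ops."""
        out = subprocess.run(
            ["sudo", "cephadm", "shell", "--",
             "ceph", "daemon", f"osd.{osd_id}", "dump_ops_in_flight"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    for osd in (0, 1):
        print(f"osd.{osd}:", ops_in_flight(osd).get("num_ops"), "ops in flight")
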
Nov 24 20:16:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:46 compute-0 sudo[263218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:46 compute-0 sudo[263218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:46 compute-0 sudo[263218]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:46 compute-0 sudo[263243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:16:46 compute-0 sudo[263243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:46 compute-0 sudo[263243]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:46.961+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:47 compute-0 sudo[263268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:47 compute-0 sudo[263268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:47 compute-0 sudo[263268]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1522 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:47 compute-0 sudo[263293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:16:47 compute-0 sudo[263293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:47.270+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.670330239 +0000 UTC m=+0.069201939 container create 52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shirley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:16:47 compute-0 systemd[1]: Started libpod-conmon-52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0.scope.
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.642247995 +0000 UTC m=+0.041119745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:16:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.775472111 +0000 UTC m=+0.174343831 container init 52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.787543836 +0000 UTC m=+0.186415546 container start 52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.792882418 +0000 UTC m=+0.191754128 container attach 52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:16:47 compute-0 bold_shirley[263375]: 167 167
Nov 24 20:16:47 compute-0 systemd[1]: libpod-52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0.scope: Deactivated successfully.
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.79479501 +0000 UTC m=+0.193666690 container died 52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a06eecce42a8e4d6baf640dc45766b46d17bf365bf90b7d16b1648088b19dd-merged.mount: Deactivated successfully.
Nov 24 20:16:47 compute-0 podman[263359]: 2025-11-24 20:16:47.840543268 +0000 UTC m=+0.239414948 container remove 52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shirley, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:16:47 compute-0 systemd[1]: libpod-conmon-52b3815f2674e5a11750c18e7e2831a3186a0abb7f7433cb1fb0cab5b9f204f0.scope: Deactivated successfully.
Nov 24 20:16:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:47.989+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:48 compute-0 podman[263398]: 2025-11-24 20:16:48.044333489 +0000 UTC m=+0.055419249 container create 681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 20:16:48 compute-0 systemd[1]: Started libpod-conmon-681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900.scope.
Nov 24 20:16:48 compute-0 podman[263398]: 2025-11-24 20:16:48.018161047 +0000 UTC m=+0.029246827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:16:48 compute-0 ceph-mon[75677]: pgmap v993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300f5ecb6c69be3064406ab9653d599dce73aae4de243e8181c30f34bb241d32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300f5ecb6c69be3064406ab9653d599dce73aae4de243e8181c30f34bb241d32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300f5ecb6c69be3064406ab9653d599dce73aae4de243e8181c30f34bb241d32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300f5ecb6c69be3064406ab9653d599dce73aae4de243e8181c30f34bb241d32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:48 compute-0 podman[263398]: 2025-11-24 20:16:48.155860304 +0000 UTC m=+0.166946074 container init 681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:16:48 compute-0 podman[263398]: 2025-11-24 20:16:48.176311383 +0000 UTC m=+0.187397133 container start 681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:16:48 compute-0 podman[263398]: 2025-11-24 20:16:48.180393163 +0000 UTC m=+0.191478903 container attach 681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:16:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:48.285+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:48 compute-0 jovial_curie[263414]: {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:     "0": [
Nov 24 20:16:48 compute-0 jovial_curie[263414]:         {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "devices": [
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "/dev/loop3"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             ],
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_name": "ceph_lv0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_size": "21470642176",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "name": "ceph_lv0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "tags": {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cluster_name": "ceph",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.crush_device_class": "",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.encrypted": "0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osd_id": "0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.type": "block",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.vdo": "0"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             },
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "type": "block",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "vg_name": "ceph_vg0"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:         }
Nov 24 20:16:48 compute-0 jovial_curie[263414]:     ],
Nov 24 20:16:48 compute-0 jovial_curie[263414]:     "1": [
Nov 24 20:16:48 compute-0 jovial_curie[263414]:         {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "devices": [
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "/dev/loop4"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             ],
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_name": "ceph_lv1",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_size": "21470642176",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "name": "ceph_lv1",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "tags": {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cluster_name": "ceph",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.crush_device_class": "",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.encrypted": "0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osd_id": "1",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.type": "block",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.vdo": "0"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             },
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "type": "block",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "vg_name": "ceph_vg1"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:         }
Nov 24 20:16:48 compute-0 jovial_curie[263414]:     ],
Nov 24 20:16:48 compute-0 jovial_curie[263414]:     "2": [
Nov 24 20:16:48 compute-0 jovial_curie[263414]:         {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "devices": [
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "/dev/loop5"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             ],
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_name": "ceph_lv2",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_size": "21470642176",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "name": "ceph_lv2",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "tags": {
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.cluster_name": "ceph",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.crush_device_class": "",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.encrypted": "0",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osd_id": "2",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.type": "block",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:                 "ceph.vdo": "0"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             },
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "type": "block",
Nov 24 20:16:48 compute-0 jovial_curie[263414]:             "vg_name": "ceph_vg2"
Nov 24 20:16:48 compute-0 jovial_curie[263414]:         }
Nov 24 20:16:48 compute-0 jovial_curie[263414]:     ]
Nov 24 20:16:48 compute-0 jovial_curie[263414]: }
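
[editor's note] The JSON that jovial_curie printed is ceph-volume lvm list --format json: a map of OSD id to LV records, each carrying the cluster fsid, the OSD fsid, the backing loop device, and the default_drive_group spec affinity — in other words, all three candidate LVs are already provisioned as OSDs 0-2, which is why the batch run above had nothing to do. A small sketch reducing that document to an id-to-device summary; data stands for the parsed JSON above:

    def summarize(data):
        """Map osd_id -> (lv_path, backing devices, osd_fsid) from the
        output of `ceph-volume lvm list --format json`."""
        out = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    out[int(osd_id)] = (lv["lv_path"],
                                        ",".join(lv["devices"]),
                                        lv["tags"]["ceph.osd_fsid"])
        return out

    # e.g. summarize(data)[0] ->
    # ('/dev/ceph_vg0/ceph_lv0', '/dev/loop3',
    #  'ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e')
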
Nov 24 20:16:49 compute-0 systemd[1]: libpod-681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900.scope: Deactivated successfully.
Nov 24 20:16:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:49.002+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:49 compute-0 podman[263423]: 2025-11-24 20:16:49.065896396 +0000 UTC m=+0.043331115 container died 681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:16:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:49.280+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-300f5ecb6c69be3064406ab9653d599dce73aae4de243e8181c30f34bb241d32-merged.mount: Deactivated successfully.
Nov 24 20:16:49 compute-0 podman[263423]: 2025-11-24 20:16:49.398739362 +0000 UTC m=+0.376174001 container remove 681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:16:49 compute-0 systemd[1]: libpod-conmon-681536a8c47ee657acb6b26c0e99c62c27af8368e8ff9fd978f41b866367c900.scope: Deactivated successfully.
Nov 24 20:16:49 compute-0 sudo[263293]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:49 compute-0 sudo[263438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:49 compute-0 sudo[263438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:49 compute-0 sudo[263438]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:49 compute-0 sudo[263463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:16:49 compute-0 sudo[263463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:49 compute-0 sudo[263463]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:49 compute-0 sudo[263488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:49 compute-0 sudo[263488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:49 compute-0 sudo[263488]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:49 compute-0 sudo[263513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:16:49 compute-0 sudo[263513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
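
[editor's note] After the LVM inventory, cephadm takes a second pass with ceph-volume raw list --format json, which scans block devices for BlueStore labels directly and so also catches OSDs prepared without LVM; that run lands in the vigilant_tharp container below. A sketch issuing the same listing; paths and flags are copied from the log line, and since cephadm may prefix the JSON with log noise, the output is trimmed to the JSON body:

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    raw = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    start = raw.find("{")                      # trim any non-JSON preamble
    devices = json.loads(raw[start:]) if start >= 0 else {}
    print(sorted(devices) or "no raw bluestore devices found")
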
Nov 24 20:16:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:50.052+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.239946747 +0000 UTC m=+0.071799429 container create beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:16:50 compute-0 systemd[1]: Started libpod-conmon-beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29.scope.
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.209264133 +0000 UTC m=+0.041116875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:16:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:16:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:50.323+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.337377082 +0000 UTC m=+0.169229765 container init beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.344670008 +0000 UTC m=+0.176522660 container start beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 20:16:50 compute-0 vigilant_tharp[263597]: 167 167
Nov 24 20:16:50 compute-0 systemd[1]: libpod-beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29.scope: Deactivated successfully.
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.352478918 +0000 UTC m=+0.184331570 container attach beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.353389532 +0000 UTC m=+0.185242184 container died beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:16:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:50 compute-0 ceph-mon[75677]: pgmap v994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cf7b30177a14b63d22cbdffc8bc388dc2e25656b417bd9fb961d4209893885d-merged.mount: Deactivated successfully.
Nov 24 20:16:50 compute-0 podman[263581]: 2025-11-24 20:16:50.413478065 +0000 UTC m=+0.245330747 container remove beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:16:50 compute-0 systemd[1]: libpod-conmon-beb6d98407c623ad8b83c1845c9f766bd5101a8fa800e23246f6a4e10f76da29.scope: Deactivated successfully.
Nov 24 20:16:50 compute-0 podman[263622]: 2025-11-24 20:16:50.672493859 +0000 UTC m=+0.073114804 container create 2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:16:50 compute-0 systemd[1]: Started libpod-conmon-2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7.scope.
Nov 24 20:16:50 compute-0 podman[263622]: 2025-11-24 20:16:50.642663049 +0000 UTC m=+0.043284034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:16:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e662fc0d2196d10807bf33af792a29c38569c3420f5cc3ac104ecf93f9f62d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e662fc0d2196d10807bf33af792a29c38569c3420f5cc3ac104ecf93f9f62d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e662fc0d2196d10807bf33af792a29c38569c3420f5cc3ac104ecf93f9f62d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e662fc0d2196d10807bf33af792a29c38569c3420f5cc3ac104ecf93f9f62d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:16:50 compute-0 podman[263622]: 2025-11-24 20:16:50.774071967 +0000 UTC m=+0.174692952 container init 2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 20:16:50 compute-0 podman[263622]: 2025-11-24 20:16:50.791048913 +0000 UTC m=+0.191669848 container start 2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:16:50 compute-0 podman[263622]: 2025-11-24 20:16:50.798321907 +0000 UTC m=+0.198942852 container attach 2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:16:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:51.036+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:51.350+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1532 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:52.020+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:52 compute-0 strange_blackwell[263638]: {
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "osd_id": 2,
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "type": "bluestore"
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:     },
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "osd_id": 1,
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "type": "bluestore"
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:     },
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "osd_id": 0,
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:         "type": "bluestore"
Nov 24 20:16:52 compute-0 strange_blackwell[263638]:     }
Nov 24 20:16:52 compute-0 strange_blackwell[263638]: }
Nov 24 20:16:52 compute-0 systemd[1]: libpod-2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7.scope: Deactivated successfully.
Nov 24 20:16:52 compute-0 podman[263622]: 2025-11-24 20:16:52.064653285 +0000 UTC m=+1.465274270 container died 2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:16:52 compute-0 systemd[1]: libpod-2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7.scope: Consumed 1.274s CPU time.
Nov 24 20:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-38e662fc0d2196d10807bf33af792a29c38569c3420f5cc3ac104ecf93f9f62d-merged.mount: Deactivated successfully.
Nov 24 20:16:52 compute-0 podman[263622]: 2025-11-24 20:16:52.143602096 +0000 UTC m=+1.544223001 container remove 2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:16:52 compute-0 systemd[1]: libpod-conmon-2a8b117a9403a7193f5582600361874285eb7410d35e88cd099b1aa1bea0c6c7.scope: Deactivated successfully.
Nov 24 20:16:52 compute-0 sudo[263513]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:16:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:16:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:16:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:16:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 281601a1-039e-4fe5-ae04-53ff231cce08 does not exist
Nov 24 20:16:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9f78f1a9-e94c-414b-8d0d-247ba4e14668 does not exist
Nov 24 20:16:52 compute-0 sudo[263683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:16:52 compute-0 sudo[263683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:52 compute-0 sudo[263683]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:52 compute-0 sudo[263708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:16:52 compute-0 sudo[263708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:16:52 compute-0 sudo[263708]: pam_unix(sudo:session): session closed for user root
Nov 24 20:16:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:52.340+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:52 compute-0 ceph-mon[75677]: pgmap v995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1532 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:16:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:16:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:53.063+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:53.340+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:54.073+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:54.319+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:16:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:16:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:54 compute-0 ceph-mon[75677]: pgmap v996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:16:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:16:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:16:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:16:54 compute-0 podman[263733]: 2025-11-24 20:16:54.874364699 +0000 UTC m=+0.089103382 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:16:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:55.090+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:55.303+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:56.083+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:56.336+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:56 compute-0 ceph-mon[75677]: pgmap v997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:16:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:57.064+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:57.293+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1537 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:58.113+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:58.249+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:58 compute-0 ceph-mon[75677]: pgmap v998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1537 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:16:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:16:59.072+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:16:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:16:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:16:59.277+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:16:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:16:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:16:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:00.071+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:00.323+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:00 compute-0 ceph-mon[75677]: pgmap v999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:01.117+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:01.320+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:02.107+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:02.363+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:02 compute-0 ceph-mon[75677]: pgmap v1000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:03.155+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:03.322+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:04.177+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:04.305+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:04 compute-0 ceph-mon[75677]: pgmap v1001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:05.207+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:05.272+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:05 compute-0 podman[263752]: 2025-11-24 20:17:05.849079565 +0000 UTC m=+0.081941640 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 20:17:06 compute-0 nova_compute[257476]: 2025-11-24 20:17:06.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:06 compute-0 nova_compute[257476]: 2025-11-24 20:17:06.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:06 compute-0 nova_compute[257476]: 2025-11-24 20:17:06.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:06.237+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:06.257+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:06 compute-0 ceph-mon[75677]: pgmap v1002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1542 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.177 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.177 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.177 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.178 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.178 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:17:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:07.211+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:07.238+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1542 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:17:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99977821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.644 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.894 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.896 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.897 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.897 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.958 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.959 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:17:07 compute-0 nova_compute[257476]: 2025-11-24 20:17:07.974 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:17:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:08.231+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:08.243+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:17:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385735260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:17:08 compute-0 nova_compute[257476]: 2025-11-24 20:17:08.433 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:17:08 compute-0 nova_compute[257476]: 2025-11-24 20:17:08.439 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:17:08 compute-0 nova_compute[257476]: 2025-11-24 20:17:08.455 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:17:08 compute-0 nova_compute[257476]: 2025-11-24 20:17:08.457 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:17:08 compute-0 nova_compute[257476]: 2025-11-24 20:17:08.457 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:17:08 compute-0 ceph-mon[75677]: pgmap v1003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/99977821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:17:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2385735260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:17:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:09.218+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:09.282+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:17:09.370 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:17:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:17:09.370 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:17:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:17:09.370 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:17:09 compute-0 nova_compute[257476]: 2025-11-24 20:17:09.452 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:09 compute-0 nova_compute[257476]: 2025-11-24 20:17:09.453 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:09 compute-0 nova_compute[257476]: 2025-11-24 20:17:09.453 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:17:09 compute-0 nova_compute[257476]: 2025-11-24 20:17:09.453 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:17:09 compute-0 nova_compute[257476]: 2025-11-24 20:17:09.467 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:17:09 compute-0 nova_compute[257476]: 2025-11-24 20:17:09.467 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:09 compute-0 podman[263817]: 2025-11-24 20:17:09.986687409 +0000 UTC m=+0.213420064 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:17:10 compute-0 nova_compute[257476]: 2025-11-24 20:17:10.161 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:17:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:10.199+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:10.300+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:10 compute-0 ceph-mon[75677]: pgmap v1004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:11.225+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:11.309+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1552 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:12.234+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:12.344+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:12 compute-0 ceph-mon[75677]: pgmap v1005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1552 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:13.236+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:13.348+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:14.264+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:14.338+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:14 compute-0 ceph-mon[75677]: pgmap v1006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:15.283+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:15.338+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:16.294+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:16.309+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:17:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4250772938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:17:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:17:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4250772938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:17:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:16 compute-0 ceph-mon[75677]: pgmap v1007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/4250772938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:17:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/4250772938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:17:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:17.246+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:17.278+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1557 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:18.283+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:18.304+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:18 compute-0 ceph-mon[75677]: pgmap v1008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1557 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:19.282+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:19.287+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:20.263+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:20.325+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:21 compute-0 ceph-mon[75677]: pgmap v1009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:21.252+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:21.291+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:22 compute-0 ceph-mon[75677]: pgmap v1010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:22.252+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:22.262+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:23.272+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:23.294+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:24.246+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:24 compute-0 ceph-mon[75677]: pgmap v1011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:24.297+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:17:24
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'images', 'vms', 'backups']
Nov 24 20:17:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:17:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:25.217+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:25.318+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:25 compute-0 podman[263842]: 2025-11-24 20:17:25.861945509 +0000 UTC m=+0.082830688 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 20:17:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:26.192+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:26 compute-0 ceph-mon[75677]: pgmap v1012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:26.327+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:27.158+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:27.361+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:28.190+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:28 compute-0 ceph-mon[75677]: pgmap v1013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:28.390+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:29.226+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:29.355+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:30.218+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:30 compute-0 ceph-mon[75677]: pgmap v1014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:30.338+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:31.218+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:31.299+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:32.193+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:32.310+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:32 compute-0 ceph-mon[75677]: pgmap v1015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:33.213+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:33.308+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:34.216+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:34.318+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:34 compute-0 ceph-mon[75677]: pgmap v1016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:17:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:17:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:35.227+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:35.277+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:36.275+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:36.283+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:36 compute-0 ceph-mon[75677]: pgmap v1017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:36 compute-0 podman[263861]: 2025-11-24 20:17:36.86296678 +0000 UTC m=+0.087157542 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:17:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:37.290+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:37.323+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1577 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.418378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015457418470, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2535, "num_deletes": 507, "total_data_size": 2804681, "memory_usage": 2869424, "flush_reason": "Manual Compaction"}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015457440098, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2748091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25742, "largest_seqno": 28276, "table_properties": {"data_size": 2737643, "index_size": 5662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 31028, "raw_average_key_size": 21, "raw_value_size": 2711874, "raw_average_value_size": 1844, "num_data_blocks": 250, "num_entries": 1470, "num_filter_entries": 1470, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015297, "oldest_key_time": 1764015297, "file_creation_time": 1764015457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 21750 microseconds, and 11692 cpu microseconds.
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.440141) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2748091 bytes OK
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.440162) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.441913) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.441926) EVENT_LOG_v1 {"time_micros": 1764015457441921, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.441941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2792357, prev total WAL file size 2792357, number of live WAL files 2.
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.442799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2683KB)], [56(6421KB)]
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015457442849, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9323974, "oldest_snapshot_seqno": -1}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 7627 keys, 7678779 bytes, temperature: kUnknown
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015457506131, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7678779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7634268, "index_size": 24367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 203002, "raw_average_key_size": 26, "raw_value_size": 7500880, "raw_average_value_size": 983, "num_data_blocks": 954, "num_entries": 7627, "num_filter_entries": 7627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.506458) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7678779 bytes
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.507895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 6.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 8654, records dropped: 1027 output_compression: NoCompression
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.507924) EVENT_LOG_v1 {"time_micros": 1764015457507911, "job": 30, "event": "compaction_finished", "compaction_time_micros": 63367, "compaction_time_cpu_micros": 37929, "output_level": 6, "num_output_files": 1, "total_output_size": 7678779, "num_input_records": 8654, "num_output_records": 7627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015457509095, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015457511412, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.442700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.511466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.511475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.511479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.511483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:17:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:17:37.511487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:17:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:38.334+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:38.334+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:38 compute-0 ceph-mon[75677]: pgmap v1018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1577 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:39.328+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:39.355+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:40.319+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:40.363+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:17:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:17:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:17:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:17:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:17:40 compute-0 ceph-mon[75677]: pgmap v1019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:40 compute-0 podman[263881]: 2025-11-24 20:17:40.94473416 +0000 UTC m=+0.171935418 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:17:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:41.284+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:41.390+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:42.281+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:42.436+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:42 compute-0 ceph-mon[75677]: pgmap v1020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:43.326+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:43.465+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:44.359+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:44.432+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:44 compute-0 ceph-mon[75677]: pgmap v1021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:45.386+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:45.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:46.377+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:46.428+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:46 compute-0 ceph-mon[75677]: pgmap v1022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1582 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:47.427+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:47.453+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1582 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:48.444+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:48.444+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:48 compute-0 ceph-mon[75677]: pgmap v1023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:49.411+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:49.492+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:50.414+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:50.461+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:50 compute-0 ceph-mon[75677]: pgmap v1024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:51.433+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:51.501+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:52.420+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:52 compute-0 sudo[263908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:52 compute-0 sudo[263908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:52 compute-0 sudo[263908]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:52.470+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:52 compute-0 sudo[263933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:17:52 compute-0 sudo[263933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:52 compute-0 sudo[263933]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:52 compute-0 sudo[263958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:52 compute-0 sudo[263958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:52 compute-0 sudo[263958]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:52 compute-0 ceph-mon[75677]: pgmap v1025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:52 compute-0 sudo[263983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:17:52 compute-0 sudo[263983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:53 compute-0 sudo[263983]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:53 compute-0 sudo[264039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:53 compute-0 sudo[264039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:53 compute-0 sudo[264039]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:53.456+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:53 compute-0 sudo[264064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:17:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:53.503+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:53 compute-0 sudo[264064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:53 compute-0 sudo[264064]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:53 compute-0 sudo[264089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:53 compute-0 sudo[264089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:53 compute-0 sudo[264089]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:53 compute-0 sudo[264114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 24 20:17:53 compute-0 sudo[264114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:53 compute-0 sudo[264114]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:17:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 110f99e2-49d4-4302-b112-c62c2301319d does not exist
Nov 24 20:17:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ab9ca2a9-72f5-459d-94bd-52604c8ba844 does not exist
Nov 24 20:17:53 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 48b0ec20-2ca8-4b48-9e3d-a6ad66fedb4f does not exist
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:17:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:17:53 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:17:54 compute-0 sudo[264157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:54 compute-0 sudo[264157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:54 compute-0 sudo[264157]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:54 compute-0 sudo[264182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:17:54 compute-0 sudo[264182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:54 compute-0 sudo[264182]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:54 compute-0 sudo[264207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:54 compute-0 sudo[264207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:54 compute-0 sudo[264207]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:54 compute-0 sudo[264232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:17:54 compute-0 sudo[264232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:17:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:17:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:17:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:17:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:17:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:17:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:54.458+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:54.537+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:54 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.71077583 +0000 UTC m=+0.058889075 container create 8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:17:54 compute-0 systemd[1]: Started libpod-conmon-8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f.scope.
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.688531823 +0000 UTC m=+0.036645078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:17:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.807293447 +0000 UTC m=+0.155406742 container init 8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.816192993 +0000 UTC m=+0.164306248 container start 8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.81988022 +0000 UTC m=+0.167993475 container attach 8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:17:54 compute-0 peaceful_wing[264316]: 167 167
Nov 24 20:17:54 compute-0 systemd[1]: libpod-8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f.scope: Deactivated successfully.
Nov 24 20:17:54 compute-0 conmon[264316]: conmon 8c3792ad7dbf6a11d7e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f.scope/container/memory.events
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.824932654 +0000 UTC m=+0.173045899 container died 8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-365dfda97490ff93814103d184a7f372c233ae1ea1816a00d4ce49bb19869ae1-merged.mount: Deactivated successfully.
Nov 24 20:17:54 compute-0 podman[264300]: 2025-11-24 20:17:54.874241494 +0000 UTC m=+0.222354709 container remove 8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wing, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:17:54 compute-0 systemd[1]: libpod-conmon-8c3792ad7dbf6a11d7e48facf5c39b6b14fcbe123f38b46b5846d7849475ba1f.scope: Deactivated successfully.
Nov 24 20:17:54 compute-0 ceph-mon[75677]: pgmap v1026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:17:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:17:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:55 compute-0 podman[264341]: 2025-11-24 20:17:55.036341073 +0000 UTC m=+0.037561012 container create fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:17:55 compute-0 systemd[1]: Started libpod-conmon-fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e.scope.
Nov 24 20:17:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0b51dab7c5bcfdd8ad4ed7494a5047534405c4159e6d3383c02e19a4ad27b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0b51dab7c5bcfdd8ad4ed7494a5047534405c4159e6d3383c02e19a4ad27b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0b51dab7c5bcfdd8ad4ed7494a5047534405c4159e6d3383c02e19a4ad27b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0b51dab7c5bcfdd8ad4ed7494a5047534405c4159e6d3383c02e19a4ad27b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0b51dab7c5bcfdd8ad4ed7494a5047534405c4159e6d3383c02e19a4ad27b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:55 compute-0 podman[264341]: 2025-11-24 20:17:55.020610998 +0000 UTC m=+0.021830997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:17:55 compute-0 podman[264341]: 2025-11-24 20:17:55.117856914 +0000 UTC m=+0.119076893 container init fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:17:55 compute-0 podman[264341]: 2025-11-24 20:17:55.124790438 +0000 UTC m=+0.126010427 container start fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:17:55 compute-0 podman[264341]: 2025-11-24 20:17:55.128970618 +0000 UTC m=+0.130190617 container attach fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:17:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:55.429+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:55 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:55.520+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:55 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:56 compute-0 sweet_chandrasekhar[264357]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:17:56 compute-0 sweet_chandrasekhar[264357]: --> relative data size: 1.0
Nov 24 20:17:56 compute-0 sweet_chandrasekhar[264357]: --> All data devices are unavailable
Nov 24 20:17:56 compute-0 systemd[1]: libpod-fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e.scope: Deactivated successfully.
Nov 24 20:17:56 compute-0 systemd[1]: libpod-fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e.scope: Consumed 1.013s CPU time.
Nov 24 20:17:56 compute-0 podman[264341]: 2025-11-24 20:17:56.18276612 +0000 UTC m=+1.183986069 container died fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-de0b51dab7c5bcfdd8ad4ed7494a5047534405c4159e6d3383c02e19a4ad27b2-merged.mount: Deactivated successfully.
Nov 24 20:17:56 compute-0 podman[264341]: 2025-11-24 20:17:56.259140287 +0000 UTC m=+1.260360236 container remove fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:17:56 compute-0 systemd[1]: libpod-conmon-fb06dee3415dae680dbb1d01bb38e80f3cebfbe766f8024d5321200cfc91357e.scope: Deactivated successfully.
Nov 24 20:17:56 compute-0 podman[264387]: 2025-11-24 20:17:56.29640486 +0000 UTC m=+0.078925074 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:17:56 compute-0 sudo[264232]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:56 compute-0 sudo[264415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:56 compute-0 sudo[264415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:56 compute-0 sudo[264415]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:56.410+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:56 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:56 compute-0 sudo[264440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:17:56 compute-0 sudo[264440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:56 compute-0 sudo[264440]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:56.561+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:56 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:56 compute-0 sudo[264465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:56 compute-0 sudo[264465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:56 compute-0 sudo[264465]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:56 compute-0 sudo[264490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:17:56 compute-0 sudo[264490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:17:56 compute-0 ceph-mon[75677]: pgmap v1027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.110016103 +0000 UTC m=+0.054576541 container create 0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:17:57 compute-0 systemd[1]: Started libpod-conmon-0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba.scope.
Nov 24 20:17:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.089867552 +0000 UTC m=+0.034428010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.196345992 +0000 UTC m=+0.140906420 container init 0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.203862511 +0000 UTC m=+0.148422939 container start 0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.207772943 +0000 UTC m=+0.152333371 container attach 0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:17:57 compute-0 systemd[1]: libpod-0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba.scope: Deactivated successfully.
Nov 24 20:17:57 compute-0 angry_heisenberg[264572]: 167 167
Nov 24 20:17:57 compute-0 conmon[264572]: conmon 0700d04102fdc3bd5733 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba.scope/container/memory.events
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.210765252 +0000 UTC m=+0.155325670 container died 0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:17:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcd690ebe50acdbf4967ef3f672a5063f9421b4c21895b4b992b93dc4bd2e778-merged.mount: Deactivated successfully.
Nov 24 20:17:57 compute-0 podman[264556]: 2025-11-24 20:17:57.251675843 +0000 UTC m=+0.196236271 container remove 0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:17:57 compute-0 systemd[1]: libpod-conmon-0700d04102fdc3bd573394dddd464c15cbaf589ff94915e185b0e1787437d5ba.scope: Deactivated successfully.
Nov 24 20:17:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:57.366+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:57 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:57 compute-0 podman[264594]: 2025-11-24 20:17:57.486754577 +0000 UTC m=+0.076038998 container create 887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:17:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:57.513+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:57 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:57 compute-0 systemd[1]: Started libpod-conmon-887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa.scope.
Nov 24 20:17:57 compute-0 podman[264594]: 2025-11-24 20:17:57.456835057 +0000 UTC m=+0.046119528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:17:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70457c11fa7e4c331af4faabd03799f73be91549a9d37a7345607066f5500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70457c11fa7e4c331af4faabd03799f73be91549a9d37a7345607066f5500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70457c11fa7e4c331af4faabd03799f73be91549a9d37a7345607066f5500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70457c11fa7e4c331af4faabd03799f73be91549a9d37a7345607066f5500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:17:57 compute-0 podman[264594]: 2025-11-24 20:17:57.61380252 +0000 UTC m=+0.203087001 container init 887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:17:57 compute-0 podman[264594]: 2025-11-24 20:17:57.627314617 +0000 UTC m=+0.216599038 container start 887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:17:57 compute-0 podman[264594]: 2025-11-24 20:17:57.631772215 +0000 UTC m=+0.221056696 container attach 887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:17:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1597 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:58.364+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:58 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:58 compute-0 jovial_ellis[264611]: {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:     "0": [
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:         {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "devices": [
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "/dev/loop3"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             ],
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_name": "ceph_lv0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_size": "21470642176",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "name": "ceph_lv0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "tags": {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cluster_name": "ceph",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.crush_device_class": "",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.encrypted": "0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osd_id": "0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.type": "block",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.vdo": "0"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             },
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "type": "block",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "vg_name": "ceph_vg0"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:         }
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:     ],
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:     "1": [
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:         {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "devices": [
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "/dev/loop4"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             ],
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_name": "ceph_lv1",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_size": "21470642176",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "name": "ceph_lv1",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "tags": {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cluster_name": "ceph",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.crush_device_class": "",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.encrypted": "0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osd_id": "1",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.type": "block",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.vdo": "0"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             },
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "type": "block",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "vg_name": "ceph_vg1"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:         }
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:     ],
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:     "2": [
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:         {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "devices": [
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "/dev/loop5"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             ],
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_name": "ceph_lv2",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_size": "21470642176",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "name": "ceph_lv2",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "tags": {
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.cluster_name": "ceph",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.crush_device_class": "",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.encrypted": "0",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osd_id": "2",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.type": "block",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:                 "ceph.vdo": "0"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             },
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "type": "block",
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:             "vg_name": "ceph_vg2"
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:         }
Nov 24 20:17:58 compute-0 jovial_ellis[264611]:     ]
Nov 24 20:17:58 compute-0 jovial_ellis[264611]: }
Nov 24 20:17:58 compute-0 systemd[1]: libpod-887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa.scope: Deactivated successfully.
Nov 24 20:17:58 compute-0 podman[264594]: 2025-11-24 20:17:58.430729162 +0000 UTC m=+1.020013623 container died 887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d70457c11fa7e4c331af4faabd03799f73be91549a9d37a7345607066f5500-merged.mount: Deactivated successfully.
Nov 24 20:17:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:58.508+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:58 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:58 compute-0 podman[264594]: 2025-11-24 20:17:58.521725773 +0000 UTC m=+1.111010204 container remove 887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:17:58 compute-0 systemd[1]: libpod-conmon-887a3f18ba929e5b29fa7228783a0d0254168ae1098ffb1dddef28a23077eafa.scope: Deactivated successfully.
Nov 24 20:17:58 compute-0 sudo[264490]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:58 compute-0 sudo[264632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:58 compute-0 sudo[264632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:58 compute-0 sudo[264632]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:58 compute-0 sudo[264657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:17:58 compute-0 sudo[264657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:58 compute-0 sudo[264657]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:58 compute-0 sudo[264682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:17:58 compute-0 sudo[264682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:58 compute-0 sudo[264682]: pam_unix(sudo:session): session closed for user root
Nov 24 20:17:58 compute-0 sudo[264707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:17:58 compute-0 sudo[264707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:17:58 compute-0 ceph-mon[75677]: pgmap v1028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1597 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:17:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:17:59 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:17:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:17:59.394+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.395992577 +0000 UTC m=+0.060740473 container create 738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:17:59 compute-0 systemd[1]: Started libpod-conmon-738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233.scope.
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.372949979 +0000 UTC m=+0.037697855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:17:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:17:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:17:59.492+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:59 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:17:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.509291038 +0000 UTC m=+0.174038964 container init 738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_davinci, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.524835058 +0000 UTC m=+0.189582954 container start 738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:17:59 compute-0 happy_davinci[264788]: 167 167
Nov 24 20:17:59 compute-0 systemd[1]: libpod-738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233.scope: Deactivated successfully.
Nov 24 20:17:59 compute-0 conmon[264788]: conmon 738a47296990e7e0950b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233.scope/container/memory.events
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.540183333 +0000 UTC m=+0.204931279 container attach 738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.540701377 +0000 UTC m=+0.205449293 container died 738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_davinci, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a999e29099eafdd722e279fc6d4298898425581f940d71009a6e3df9acbe1e8-merged.mount: Deactivated successfully.
Nov 24 20:17:59 compute-0 podman[264772]: 2025-11-24 20:17:59.666413365 +0000 UTC m=+0.331161251 container remove 738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:17:59 compute-0 systemd[1]: libpod-conmon-738a47296990e7e0950b6d42f595ad3462ea8652c2febafca7a40de7d7945233.scope: Deactivated successfully.
Nov 24 20:17:59 compute-0 podman[264810]: 2025-11-24 20:17:59.953518792 +0000 UTC m=+0.086295048 container create 0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:18:00 compute-0 podman[264810]: 2025-11-24 20:17:59.911152844 +0000 UTC m=+0.043929110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:18:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:00 compute-0 systemd[1]: Started libpod-conmon-0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9.scope.
Nov 24 20:18:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e841c98da18b1a5ce124c14dacd4f8e56d231c6110cdbc8e148ab4defc8118f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e841c98da18b1a5ce124c14dacd4f8e56d231c6110cdbc8e148ab4defc8118f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e841c98da18b1a5ce124c14dacd4f8e56d231c6110cdbc8e148ab4defc8118f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e841c98da18b1a5ce124c14dacd4f8e56d231c6110cdbc8e148ab4defc8118f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:18:00 compute-0 podman[264810]: 2025-11-24 20:18:00.125259446 +0000 UTC m=+0.258035742 container init 0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_burnell, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:18:00 compute-0 podman[264810]: 2025-11-24 20:18:00.140270122 +0000 UTC m=+0.273046378 container start 0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:18:00 compute-0 podman[264810]: 2025-11-24 20:18:00.146491106 +0000 UTC m=+0.279267402 container attach 0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_burnell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:18:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:00.388+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:00 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:00.473+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:00 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:01 compute-0 ceph-mon[75677]: pgmap v1029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:01 compute-0 anacron[156334]: Job `cron.daily' started
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]: {
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "osd_id": 2,
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "type": "bluestore"
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:     },
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "osd_id": 1,
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "type": "bluestore"
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:     },
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "osd_id": 0,
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:         "type": "bluestore"
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]:     }
Nov 24 20:18:01 compute-0 eloquent_burnell[264827]: }
Nov 24 20:18:01 compute-0 anacron[156334]: Job `cron.daily' terminated
Nov 24 20:18:01 compute-0 systemd[1]: libpod-0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9.scope: Deactivated successfully.
Nov 24 20:18:01 compute-0 podman[264810]: 2025-11-24 20:18:01.319406023 +0000 UTC m=+1.452182309 container died 0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_burnell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 20:18:01 compute-0 systemd[1]: libpod-0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9.scope: Consumed 1.181s CPU time.
Nov 24 20:18:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:01.386+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:01 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:01 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:01.492+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e841c98da18b1a5ce124c14dacd4f8e56d231c6110cdbc8e148ab4defc8118f0-merged.mount: Deactivated successfully.
Nov 24 20:18:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:01 compute-0 podman[264810]: 2025-11-24 20:18:01.878901749 +0000 UTC m=+2.011678005 container remove 0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:18:01 compute-0 systemd[1]: libpod-conmon-0363ab53908aeeff16a135f5e314238bf022e812ef8bc2b036cfc6b0fa1bb7a9.scope: Deactivated successfully.
Nov 24 20:18:01 compute-0 sudo[264707]: pam_unix(sudo:session): session closed for user root
Nov 24 20:18:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:18:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:18:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:18:02 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:18:02 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6f53dc85-4029-4bc1-ae44-9a0733a4ddbc does not exist
Nov 24 20:18:02 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4dacf8aa-7aae-4ad1-8de0-030b8eec9ab3 does not exist
Nov 24 20:18:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:02 compute-0 ceph-mon[75677]: pgmap v1030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:18:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:18:02 compute-0 sudo[264875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:18:02 compute-0 sudo[264875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:18:02 compute-0 sudo[264875]: pam_unix(sudo:session): session closed for user root
Nov 24 20:18:02 compute-0 sudo[264900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:18:02 compute-0 sudo[264900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:18:02 compute-0 sudo[264900]: pam_unix(sudo:session): session closed for user root
Nov 24 20:18:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:02.385+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:02 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:02.486+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:02 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:03.356+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:03 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:03.454+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:03 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:04 compute-0 ceph-mon[75677]: pgmap v1031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:04.306+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:04 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:04.467+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:04 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:04 compute-0 sshd-session[264925]: Invalid user dev from 182.93.7.194 port 54182
Nov 24 20:18:05 compute-0 sshd-session[264925]: Received disconnect from 182.93.7.194 port 54182:11: Bye Bye [preauth]
Nov 24 20:18:05 compute-0 sshd-session[264925]: Disconnected from invalid user dev 182.93.7.194 port 54182 [preauth]
Nov 24 20:18:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:05.273+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:05 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:05.424+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:05 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:06.244+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:06 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:06 compute-0 ceph-mon[75677]: pgmap v1032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:06.397+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:06 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1602 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.183 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.184 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.184 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.185 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.185 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:18:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:07.214+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:07 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1602 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:07.447+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:07 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:18:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803476848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.673 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:18:07 compute-0 podman[264949]: 2025-11-24 20:18:07.893684137 +0000 UTC m=+0.115551980 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.932 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.933 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.934 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:18:07 compute-0 nova_compute[257476]: 2025-11-24 20:18:07.934 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.007 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.008 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.037 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:18:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:08.213+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:08 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:08 compute-0 ceph-mon[75677]: pgmap v1033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/803476848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:18:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:08.409+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:08 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:18:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558837185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.570 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.576 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.597 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.599 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:18:08 compute-0 nova_compute[257476]: 2025-11-24 20:18:08.599 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:18:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:09.202+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:09 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2558837185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:18:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:18:09.371 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:18:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:18:09.372 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:18:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:18:09.372 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:18:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:09.411+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:09 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:09 compute-0 nova_compute[257476]: 2025-11-24 20:18:09.599 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:09 compute-0 nova_compute[257476]: 2025-11-24 20:18:09.600 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:09 compute-0 nova_compute[257476]: 2025-11-24 20:18:09.600 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:09 compute-0 nova_compute[257476]: 2025-11-24 20:18:09.601 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:09 compute-0 nova_compute[257476]: 2025-11-24 20:18:09.601 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:18:10 compute-0 nova_compute[257476]: 2025-11-24 20:18:10.148 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:10 compute-0 nova_compute[257476]: 2025-11-24 20:18:10.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:10 compute-0 nova_compute[257476]: 2025-11-24 20:18:10.149 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:18:10 compute-0 nova_compute[257476]: 2025-11-24 20:18:10.149 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:18:10 compute-0 nova_compute[257476]: 2025-11-24 20:18:10.165 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:18:10 compute-0 nova_compute[257476]: 2025-11-24 20:18:10.165 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:18:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:10.248+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:10 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:10 compute-0 ceph-mon[75677]: pgmap v1034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:10.432+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:10 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:11.212+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:11 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:11.465+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:11 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:11 compute-0 podman[264992]: 2025-11-24 20:18:11.946431519 +0000 UTC m=+0.165933002 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 24 20:18:12 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:12.258+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:12 compute-0 ceph-mon[75677]: pgmap v1035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:12.441+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:12 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:13.288+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:13 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:13.478+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:13 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:14.307+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:14 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:14 compute-0 ceph-mon[75677]: pgmap v1036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:14.476+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:14 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:15.265+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:15 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:15.478+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:15 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:16.237+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:16 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:16 compute-0 ceph-mon[75677]: pgmap v1037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:18:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1209250497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:18:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:18:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1209250497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:18:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:16.522+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:16 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:17.275+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:17 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1617 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1209250497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:18:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1209250497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:18:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:17.502+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:17 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:18.310+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:18 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:18 compute-0 ceph-mon[75677]: pgmap v1038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:18 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1617 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:18.512+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:18 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:19.345+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:19 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:19 compute-0 ceph-mon[75677]: pgmap v1039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:19.473+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:19 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:20.353+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:20 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:20.454+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:20 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:21.351+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:21 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:21 compute-0 ceph-mon[75677]: pgmap v1040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:21.457+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:21 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:22.353+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:22 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:22.454+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:22 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:18:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5606 writes, 23K keys, 5606 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5606 writes, 861 syncs, 6.51 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 28 writes, 42 keys, 28 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s
                                           Interval WAL: 28 writes, 14 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
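In the DB Stats dump above, "writes per sync" is just cumulative WAL writes divided by syncs (5606 / 861 ≈ 6.51), and both stall counters are zero, so RocksDB itself is not throttling this OSD's writes. A small parsing sketch for that line, assuming the hanging-indent journald layout shown here:

import re

# Recompute the writes-per-sync ratio RocksDB reports (5606 / 861 ~= 6.51).
line = ("Cumulative WAL: 5606 writes, 861 syncs, 6.51 writes per sync, "
        "written: 0.02 GB, 0.01 MB/s")
m = re.search(r'(\d+) writes, (\d+) syncs, ([\d.]+) writes per sync', line)
writes, syncs, reported = int(m.group(1)), int(m.group(2)), float(m.group(3))
assert abs(writes / syncs - reported) < 0.01   # 6.511... vs the logged 6.51
print(f'{writes} writes / {syncs} syncs = {writes / syncs:.2f} per sync')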
Nov 24 20:18:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:23.337+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:23 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:23 compute-0 ceph-mon[75677]: pgmap v1041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:23.461+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:23 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:24.295+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:24 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:18:24
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'volumes', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 24 20:18:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:18:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:24.436+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:24 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:25.296+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:25 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:25 compute-0 ceph-mon[75677]: pgmap v1042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:25.455+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:25 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:26.274+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:26 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:26.503+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:26 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1622 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:26 compute-0 podman[265019]: 2025-11-24 20:18:26.856129871 +0000 UTC m=+0.085924952 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 24 20:18:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:27.248+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:27 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1622 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:27 compute-0 ceph-mon[75677]: pgmap v1043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:27.459+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:27 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:28.206+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:28 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:28.481+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:28 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:18:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6676 writes, 27K keys, 6676 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6676 writes, 1213 syncs, 5.50 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 20 writes, 30 keys, 20 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s
                                           Interval WAL: 20 writes, 10 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:18:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:29.180+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:29 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:29.444+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:29 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:29 compute-0 ceph-mon[75677]: pgmap v1044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:30.199+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:30 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:30.454+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:30 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:31.222+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:31 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:31.406+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:31 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:31 compute-0 ceph-mon[75677]: pgmap v1045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
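Across the health check updates in this stretch of the log, the blocked age climbs roughly in step with wall-clock time (1617 s at 20:18:17, 1622 s at 20:18:26, 1632 s at 20:18:31), which means the oldest ops on osd.0 and osd.1 are stuck rather than slowly draining. A sketch that extracts the (timestamp, op count, blocked age) triples from a journal excerpt, using only the stdlib:

import re

PAT = re.compile(r'^(\w+ \d+ [\d:]+) .*Health check update: (\d+) slow ops, '
                 r'oldest one blocked for (\d+) sec')

def slow_op_ages(lines):
    """Yield (syslog timestamp, slow op count, blocked seconds) per update."""
    for line in lines:
        m = PAT.match(line)
        if m:
            yield m.group(1), int(m.group(2)), int(m.group(3))

sample = ('Nov 24 20:18:31 compute-0 ceph-mon[75677]: log_channel(cluster) '
          'log [WRN] : Health check update: 20 slow ops, oldest one blocked '
          'for 1632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)')
print(list(slow_op_ages([sample])))   # [('Nov 24 20:18:31', 20, 1632)]

Plotting those triples against the log timestamps would show a near-1:1 slope, the signature of wedged ops rather than a slow-but-moving queue.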
Nov 24 20:18:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:32.245+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:32 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:32.437+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:32 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:33.285+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:33 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:33.393+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:33 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:33 compute-0 ceph-mon[75677]: pgmap v1046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:34.290+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:34 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:34.437+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:34 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:18:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5405 writes, 23K keys, 5405 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5405 writes, 772 syncs, 7.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 16 writes, 24 keys, 16 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s
                                           Interval WAL: 16 writes, 8 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:18:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:18:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
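Every pg_autoscaler line above follows one rule: pg target = (fraction of space used) × bias × 300, where the factor 300 is consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster — an inference from the logged numbers, not something the log states. The raw target is then quantized, staying at the pool's current pg_num when the computed value is far smaller. A check that reproduces three of the logged targets:

# Reproduce the pg_autoscaler targets: target = used_ratio * bias * 300.
# The factor 300 (mon_target_pg_per_osd=100 on 3 OSDs) is an assumption
# inferred from the numbers, not stated in the log itself.
TARGET_PGS = 300

pools = {
    '.mgr':               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    'default.rgw.log':    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
}
for name, (used, bias, logged_target) in pools.items():
    computed = used * bias * TARGET_PGS
    assert abs(computed - logged_target) < 1e-9, name
    print(f'{name}: {computed:.6g} (log says {logged_target:.6g})')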
Nov 24 20:18:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:35.240+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:35 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:35.474+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:35 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:35 compute-0 ceph-mon[75677]: pgmap v1047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:36.276+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:36 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:36.426+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:36 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:37.322+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:37 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:37.452+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:37 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1637 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:37 compute-0 ceph-mon[75677]: pgmap v1048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:38.306+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:38 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:38.485+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:38 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1637 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:38 compute-0 podman[265039]: 2025-11-24 20:18:38.863529292 +0000 UTC m=+0.089415351 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 20:18:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:39.264+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:39 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:39.436+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:39 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:39 compute-0 ceph-mon[75677]: pgmap v1049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 20:18:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:40.262+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:40 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:40.400+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:40 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:18:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:18:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:18:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:18:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:18:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:41.216+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:41 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:41.356+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:41 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:41 compute-0 ceph-mon[75677]: pgmap v1050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:42.224+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:42 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:42.329+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:42 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:42 compute-0 podman[265059]: 2025-11-24 20:18:42.926268765 +0000 UTC m=+0.152722835 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 20:18:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:43.194+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:43 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:43.350+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:43 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:43 compute-0 ceph-mon[75677]: pgmap v1051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:44.237+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:44 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:44.366+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:44 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:45.253+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:45 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:45.367+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:45 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:45 compute-0 ceph-mon[75677]: pgmap v1052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:46.234+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:46 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:46.351+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:46 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:47.250+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:47 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:47.379+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:47 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:47 compute-0 ceph-mon[75677]: pgmap v1053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:48.289+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:48 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:48.392+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:48 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:49.282+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:49 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:49.364+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:49 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:49 compute-0 ceph-mon[75677]: pgmap v1054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:50.253+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:50 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:50.355+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:50 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:50 compute-0 sshd-session[265087]: Connection closed by authenticating user root 27.79.44.141 port 46760 [preauth]
Nov 24 20:18:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:51.229+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:51 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:51.320+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:51 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:51 compute-0 ceph-mon[75677]: pgmap v1055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1652 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:52.214+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:52 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:52.300+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:52 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1652 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:53.243+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:53 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:53.315+0000 7f1a67169640 -1 osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:53 compute-0 ceph-osd[89640]: osd.1 121 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 24 20:18:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 24 20:18:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:53 compute-0 ceph-mon[75677]: pgmap v1056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:53 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 24 20:18:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:54.210+0000 7f2ca3ee7640 -1 osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:54 compute-0 ceph-osd[88624]: osd.0 121 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:54.308+0000 7f1a67169640 -1 osd.1 122 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:54 compute-0 ceph-osd[89640]: osd.1 122 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:18:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:18:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:18:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:18:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:18:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:18:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 24 20:18:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 24 20:18:54 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 24 20:18:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:54 compute-0 ceph-mon[75677]: osdmap e122: 3 total, 3 up, 3 in
Nov 24 20:18:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:55.171+0000 7f2ca3ee7640 -1 osd.0 123 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:55 compute-0 ceph-osd[88624]: osd.0 123 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:55.339+0000 7f1a67169640 -1 osd.1 123 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:55 compute-0 ceph-osd[89640]: osd.1 123 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 24 20:18:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:55 compute-0 ceph-mon[75677]: osdmap e123: 3 total, 3 up, 3 in
Nov 24 20:18:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:55 compute-0 ceph-mon[75677]: pgmap v1059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:18:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 24 20:18:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 24 20:18:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:56.176+0000 7f2ca3ee7640 -1 osd.0 124 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:56 compute-0 ceph-osd[88624]: osd.0 124 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:56.316+0000 7f1a67169640 -1 osd.1 124 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:56 compute-0 ceph-osd[89640]: osd.1 124 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:56 compute-0 ceph-mon[75677]: osdmap e124: 3 total, 3 up, 3 in
Nov 24 20:18:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:18:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:57.140+0000 7f2ca3ee7640 -1 osd.0 124 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:57 compute-0 ceph-osd[88624]: osd.0 124 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 22 op/s
Nov 24 20:18:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:57.309+0000 7f1a67169640 -1 osd.1 124 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:57 compute-0 ceph-osd[89640]: osd.1 124 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1657 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 24 20:18:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 24 20:18:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:57 compute-0 ceph-mon[75677]: pgmap v1061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 22 op/s
Nov 24 20:18:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 24 20:18:57 compute-0 podman[265089]: 2025-11-24 20:18:57.85580516 +0000 UTC m=+0.070889379 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:18:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:58.135+0000 7f2ca3ee7640 -1 osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:58 compute-0 ceph-osd[88624]: osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:58.269+0000 7f1a67169640 -1 osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:58 compute-0 ceph-osd[89640]: osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1657 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:18:58 compute-0 ceph-mon[75677]: osdmap e125: 3 total, 3 up, 3 in
Nov 24 20:18:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:18:59.158+0000 7f2ca3ee7640 -1 osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:59 compute-0 ceph-osd[88624]: osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:18:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.7 MiB/s wr, 49 op/s
Nov 24 20:18:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:18:59.279+0000 7f1a67169640 -1 osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:59 compute-0 ceph-osd[89640]: osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:18:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:18:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:18:59 compute-0 ceph-mon[75677]: pgmap v1063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.7 MiB/s wr, 49 op/s
Nov 24 20:19:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:00.184+0000 7f2ca3ee7640 -1 osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:00 compute-0 ceph-osd[88624]: osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:00.323+0000 7f1a67169640 -1 osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:00 compute-0 ceph-osd[89640]: osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:01.168+0000 7f2ca3ee7640 -1 osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:01 compute-0 ceph-osd[88624]: osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 6.2 MiB/s wr, 58 op/s
Nov 24 20:19:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:01.356+0000 7f1a67169640 -1 osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:01 compute-0 ceph-osd[89640]: osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:01 compute-0 ceph-mon[75677]: pgmap v1064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 6.2 MiB/s wr, 58 op/s
Nov 24 20:19:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 24 20:19:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 24 20:19:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 24 20:19:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:02.141+0000 7f2ca3ee7640 -1 osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:02 compute-0 ceph-osd[88624]: osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:02 compute-0 sudo[265108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:02 compute-0 sudo[265108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:02 compute-0 sudo[265108]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:02.400+0000 7f1a67169640 -1 osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:02 compute-0 ceph-osd[89640]: osd.1 125 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:02 compute-0 sudo[265133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:19:02 compute-0 sudo[265133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:02 compute-0 sudo[265133]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:02 compute-0 sudo[265158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:02 compute-0 sudo[265158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:02 compute-0 sudo[265158]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:02 compute-0 sudo[265183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 20:19:02 compute-0 sudo[265183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:02 compute-0 ceph-mon[75677]: osdmap e126: 3 total, 3 up, 3 in
Nov 24 20:19:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:02 compute-0 sudo[265183]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:19:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:19:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:03 compute-0 sudo[265228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:03 compute-0 sudo[265228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:03 compute-0 sudo[265228]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:03.113+0000 7f2ca3ee7640 -1 osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:03 compute-0 ceph-osd[88624]: osd.0 125 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:03 compute-0 sudo[265253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:19:03 compute-0 sudo[265253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:03 compute-0 sudo[265253]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.8 MiB/s wr, 32 op/s
Nov 24 20:19:03 compute-0 sudo[265278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:03 compute-0 sudo[265278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:03 compute-0 sudo[265278]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:03 compute-0 sudo[265303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:19:03 compute-0 sudo[265303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:03.436+0000 7f1a67169640 -1 osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:03 compute-0 ceph-osd[89640]: osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:04 compute-0 ceph-mon[75677]: pgmap v1066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.8 MiB/s wr, 32 op/s
Nov 24 20:19:04 compute-0 sudo[265303]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:04.066+0000 7f2ca3ee7640 -1 osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:04 compute-0 ceph-osd[88624]: osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:04 compute-0 sudo[265359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:04 compute-0 sudo[265359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:04 compute-0 sudo[265359]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:04 compute-0 sudo[265384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:19:04 compute-0 sudo[265384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:04 compute-0 sudo[265384]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:04 compute-0 sudo[265409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:04 compute-0 sudo[265409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:04 compute-0 sudo[265409]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:04.405+0000 7f1a67169640 -1 osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:04 compute-0 ceph-osd[89640]: osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:04 compute-0 sudo[265434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- inventory --format=json-pretty --filter-for-batch
Nov 24 20:19:04 compute-0 sudo[265434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:04 compute-0 podman[265500]: 2025-11-24 20:19:04.902758701 +0000 UTC m=+0.061868279 container create 1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:19:04 compute-0 systemd[1]: Started libpod-conmon-1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1.scope.
Nov 24 20:19:04 compute-0 podman[265500]: 2025-11-24 20:19:04.876285666 +0000 UTC m=+0.035395254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:05 compute-0 podman[265500]: 2025-11-24 20:19:05.014176712 +0000 UTC m=+0.173286340 container init 1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:19:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:05.020+0000 7f2ca3ee7640 -1 osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:05 compute-0 ceph-osd[88624]: osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:05 compute-0 podman[265500]: 2025-11-24 20:19:05.027523242 +0000 UTC m=+0.186632820 container start 1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:19:05 compute-0 podman[265500]: 2025-11-24 20:19:05.031857122 +0000 UTC m=+0.190966710 container attach 1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_easley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:19:05 compute-0 kind_easley[265516]: 167 167
Nov 24 20:19:05 compute-0 systemd[1]: libpod-1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1.scope: Deactivated successfully.
Nov 24 20:19:05 compute-0 podman[265500]: 2025-11-24 20:19:05.037763522 +0000 UTC m=+0.196873110 container died 1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f8e44eca4fd6ec528d4570f1142e0e735e29f6b26e3134cb7ddf1816004372-merged.mount: Deactivated successfully.
Nov 24 20:19:05 compute-0 podman[265500]: 2025-11-24 20:19:05.091098212 +0000 UTC m=+0.250207790 container remove 1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_easley, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:19:05 compute-0 systemd[1]: libpod-conmon-1cb91a9bae64ef9fa3785fc6ca6dab1bf19f186181a0781389ca28fd7054ccd1.scope: Deactivated successfully.
Nov 24 20:19:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.6 MiB/s wr, 30 op/s
Nov 24 20:19:05 compute-0 podman[265542]: 2025-11-24 20:19:05.327816188 +0000 UTC m=+0.071635698 container create e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:19:05 compute-0 ceph-osd[89640]: osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:05.358+0000 7f1a67169640 -1 osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:05 compute-0 systemd[1]: Started libpod-conmon-e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83.scope.
Nov 24 20:19:05 compute-0 podman[265542]: 2025-11-24 20:19:05.29768059 +0000 UTC m=+0.041500150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d49bfcf3c35941667d2133ef7743efcd76ad19f3ba36124e12c020033b5212/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d49bfcf3c35941667d2133ef7743efcd76ad19f3ba36124e12c020033b5212/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d49bfcf3c35941667d2133ef7743efcd76ad19f3ba36124e12c020033b5212/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d49bfcf3c35941667d2133ef7743efcd76ad19f3ba36124e12c020033b5212/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:05 compute-0 podman[265542]: 2025-11-24 20:19:05.43416949 +0000 UTC m=+0.177989020 container init e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:19:05 compute-0 podman[265542]: 2025-11-24 20:19:05.44593629 +0000 UTC m=+0.189755800 container start e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 20:19:05 compute-0 podman[265542]: 2025-11-24 20:19:05.450072625 +0000 UTC m=+0.193892135 container attach e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 20:19:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:06 compute-0 ceph-mon[75677]: pgmap v1067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.6 MiB/s wr, 30 op/s
Nov 24 20:19:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:06.035+0000 7f2ca3ee7640 -1 osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:06 compute-0 ceph-osd[88624]: osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:06 compute-0 ceph-osd[89640]: osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:06.341+0000 7f1a67169640 -1 osd.1 126 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1662 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:06.987+0000 7f2ca3ee7640 -1 osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:06 compute-0 ceph-osd[88624]: osd.0 126 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 24 20:19:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1662 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 24 20:19:07 compute-0 nova_compute[257476]: 2025-11-24 20:19:07.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Nov 24 20:19:07 compute-0 ceph-osd[89640]: osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:07.332+0000 7f1a67169640 -1 osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:07 compute-0 hungry_hopper[265559]: [
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:     {
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "available": false,
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "ceph_device": false,
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "lsm_data": {},
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "lvs": [],
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "path": "/dev/sr0",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "rejected_reasons": [
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "Insufficient space (<5GB)",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "Has a FileSystem"
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         ],
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         "sys_api": {
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "actuators": null,
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "device_nodes": "sr0",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "devname": "sr0",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "human_readable_size": "482.00 KB",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "id_bus": "ata",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "model": "QEMU DVD-ROM",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "nr_requests": "2",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "parent": "/dev/sr0",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "partitions": {},
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "path": "/dev/sr0",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "removable": "1",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "rev": "2.5+",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "ro": "0",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "rotational": "1",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "sas_address": "",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "sas_device_handle": "",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "scheduler_mode": "mq-deadline",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "sectors": 0,
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "sectorsize": "2048",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "size": 493568.0,
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "support_discard": "2048",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "type": "disk",
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:             "vendor": "QEMU"
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:         }
Nov 24 20:19:07 compute-0 hungry_hopper[265559]:     }
Nov 24 20:19:07 compute-0 hungry_hopper[265559]: ]
Nov 24 20:19:07 compute-0 systemd[1]: libpod-e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83.scope: Deactivated successfully.
Nov 24 20:19:07 compute-0 systemd[1]: libpod-e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83.scope: Consumed 2.085s CPU time.
Nov 24 20:19:07 compute-0 podman[265542]: 2025-11-24 20:19:07.420008841 +0000 UTC m=+2.163828351 container died e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-31d49bfcf3c35941667d2133ef7743efcd76ad19f3ba36124e12c020033b5212-merged.mount: Deactivated successfully.
Nov 24 20:19:07 compute-0 podman[265542]: 2025-11-24 20:19:07.498255376 +0000 UTC m=+2.242074856 container remove e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:19:07 compute-0 systemd[1]: libpod-conmon-e8535fe493b7f3e5071c5b07f6a9bbd4561f7b0a445a8b4b6c00990dccb42b83.scope: Deactivated successfully.
Nov 24 20:19:07 compute-0 sudo[265434]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1a8cd51a-77cb-49cd-827d-348ea35a7c68 does not exist
Nov 24 20:19:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e12b325b-1ba9-4197-90b6-e745f8ae46fa does not exist
Nov 24 20:19:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fa826d02-ad5e-4289-bbeb-108d8304b629 does not exist
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:19:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:19:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:19:07 compute-0 sudo[267740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:07 compute-0 sudo[267740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:07 compute-0 sudo[267740]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:07 compute-0 sudo[267765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:19:07 compute-0 sudo[267765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:07 compute-0 sudo[267765]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:07 compute-0 sudo[267790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:07 compute-0 sudo[267790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:07 compute-0 sudo[267790]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:08 compute-0 sudo[267815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:19:08 compute-0 sudo[267815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:08.030+0000 7f2ca3ee7640 -1 osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:08 compute-0 ceph-osd[88624]: osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:08 compute-0 ceph-mon[75677]: osdmap e127: 3 total, 3 up, 3 in
Nov 24 20:19:08 compute-0 ceph-mon[75677]: pgmap v1069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:19:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.176 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.176 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.177 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.177 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.178 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:19:08 compute-0 ceph-osd[89640]: osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:08.337+0000 7f1a67169640 -1 osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.51584147 +0000 UTC m=+0.073769871 container create ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:19:08 compute-0 systemd[1]: Started libpod-conmon-ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4.scope.
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.485268361 +0000 UTC m=+0.043196822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.629457177 +0000 UTC m=+0.187385608 container init ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:19:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:19:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355642198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.638873857 +0000 UTC m=+0.196802238 container start ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.644645484 +0000 UTC m=+0.202573935 container attach ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:19:08 compute-0 modest_panini[267917]: 167 167
Nov 24 20:19:08 compute-0 systemd[1]: libpod-ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4.scope: Deactivated successfully.
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.651772036 +0000 UTC m=+0.209700437 container died ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.655 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-53aa0ee877398fecdddbb7681b1f2e08c031af5dc82936d6976ba74a3e131935-merged.mount: Deactivated successfully.
Nov 24 20:19:08 compute-0 podman[267900]: 2025-11-24 20:19:08.69666271 +0000 UTC m=+0.254591101 container remove ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 24 20:19:08 compute-0 systemd[1]: libpod-conmon-ba72d7e25e70bbbc591782e13fb8188d96cd2e9a22efb721bbfaf50cf42251a4.scope: Deactivated successfully.
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.890 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.896 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.896 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.897 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:19:08 compute-0 podman[267943]: 2025-11-24 20:19:08.934803122 +0000 UTC m=+0.073050364 container create a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:19:08 compute-0 systemd[1]: Started libpod-conmon-a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde.scope.
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.989 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:19:08 compute-0 nova_compute[257476]: 2025-11-24 20:19:08.990 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:19:08 compute-0 podman[267943]: 2025-11-24 20:19:08.903234067 +0000 UTC m=+0.041481369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:09 compute-0 nova_compute[257476]: 2025-11-24 20:19:09.018 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2865201d0e7c0a2b8266ad57b0c03a5804e58f39c1adf3c38b2e16c91e53486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2865201d0e7c0a2b8266ad57b0c03a5804e58f39c1adf3c38b2e16c91e53486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2865201d0e7c0a2b8266ad57b0c03a5804e58f39c1adf3c38b2e16c91e53486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2865201d0e7c0a2b8266ad57b0c03a5804e58f39c1adf3c38b2e16c91e53486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2865201d0e7c0a2b8266ad57b0c03a5804e58f39c1adf3c38b2e16c91e53486/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
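The xfs lines above are the kernel noting that these overlay mounts use 32-bit inode timestamps, which run out at 0x7fffffff seconds after the Unix epoch. The cutoff date follows directly:

    from datetime import datetime, timezone

    # 0x7fffffff, the limit quoted in the xfs remount warnings above,
    # is the largest 32-bit signed epoch second.
    limit = 0x7fffffff                          # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00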
Nov 24 20:19:09 compute-0 podman[267943]: 2025-11-24 20:19:09.040707032 +0000 UTC m=+0.178954264 container init a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:19:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1355642198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:19:09 compute-0 podman[267943]: 2025-11-24 20:19:09.061657377 +0000 UTC m=+0.199904589 container start a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:19:09 compute-0 podman[267943]: 2025-11-24 20:19:09.066091919 +0000 UTC m=+0.204339121 container attach a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:19:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:09.068+0000 7f2ca3ee7640 -1 osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:09 compute-0 ceph-osd[88624]: osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:09 compute-0 podman[267957]: 2025-11-24 20:19:09.12688791 +0000 UTC m=+0.124384992 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
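The health_status=healthy event above comes from podman's built-in healthcheck (the 'healthcheck' entry in config_data, pointing at /openstack/healthcheck). The same status can be read back on demand; a sketch, with the caveat that the State sub-key is spelled "Health" or "Healthcheck" depending on the podman version:

    import json
    import subprocess

    # Read the current healthcheck state of the multipathd container.
    out = subprocess.run(["podman", "inspect", "multipathd"],
                         capture_output=True, check=True, text=True)
    state = json.loads(out.stdout)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))
    # e.g. "healthy 0", matching health_failing_streak=0 above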
Nov 24 20:19:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 33 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 383 B/s wr, 2 op/s
Nov 24 20:19:09 compute-0 ceph-osd[89640]: osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:09.364+0000 7f1a67169640 -1 osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:19:09.372 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:19:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:19:09.373 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:19:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:19:09.373 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:19:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:19:09 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160642215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:19:09 compute-0 nova_compute[257476]: 2025-11-24 20:19:09.454 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
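The two processutils lines bracket a 0.436s run of `ceph df`, which nova uses to size its RBD-backed storage. A rough equivalent of that call outside nova, under the assumption (true of recent Ceph releases) that the top-level JSON keys are "stats" and "pools":

    import json
    import subprocess

    # Same command the resource tracker just ran above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True,
                                   check=True, text=True).stdout)
    stats = df["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.0f} GiB free of '
          f'{stats["total_bytes"] / 2**30:.0f} GiB')
    for pool in df["pools"]:
        print(pool["name"])   # expect vms, default.rgw.log, ... here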
Nov 24 20:19:09 compute-0 nova_compute[257476]: 2025-11-24 20:19:09.461 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:19:09 compute-0 nova_compute[257476]: 2025-11-24 20:19:09.480 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
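Placement treats each inventory entry as (total - reserved) × allocation_ratio of consumable capacity, so the numbers in the line above work out to 32 VCPU, 7167 MB of RAM, and 53.1 GB of disk:

    # Inventory reported above; effective capacity per resource class
    # is (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 53.1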
Nov 24 20:19:09 compute-0 nova_compute[257476]: 2025-11-24 20:19:09.483 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:19:09 compute-0 nova_compute[257476]: 2025-11-24 20:19:09.483 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
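The lockutils triplet above (acquiring, acquired after waiting 0.001s, released after holding 0.586s) is a generic timed-lock pattern. A minimal stand-in with plain threading, not the oslo implementation itself:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        # Log wait and hold times around a named lock, mirroring the
        # oslo_concurrency.lockutils messages above.
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources"):
        time.sleep(0.1)   # stand-in for _update_available_resource()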
Nov 24 20:19:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:10 compute-0 ceph-mon[75677]: pgmap v1070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 33 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 383 B/s wr, 2 op/s
Nov 24 20:19:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:10 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1160642215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:19:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:10.076+0000 7f2ca3ee7640 -1 osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:10 compute-0 ceph-osd[88624]: osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:10 compute-0 clever_kalam[267960]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:19:10 compute-0 clever_kalam[267960]: --> relative data size: 1.0
Nov 24 20:19:10 compute-0 clever_kalam[267960]: --> All data devices are unavailable
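The clever_kalam report above is a ceph-volume batch pass over the three LVM data devices; "All data devices are unavailable" here means nothing is left to prepare, since each LV already carries ceph.* tags (visible in the lvm list output further down). lvs can confirm that directly; a sketch assuming an LVM new enough for --reportformat json:

    import json
    import subprocess

    # List LVs whose tags mark them as already-prepared Ceph OSD devices.
    out = subprocess.run(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
        capture_output=True, check=True, text=True)
    for lv in json.loads(out.stdout)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"], "already prepared")
    # expect ceph_vg0/ceph_lv0, ceph_vg1/ceph_lv1, ceph_vg2/ceph_lv2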
Nov 24 20:19:10 compute-0 systemd[1]: libpod-a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde.scope: Deactivated successfully.
Nov 24 20:19:10 compute-0 systemd[1]: libpod-a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde.scope: Consumed 1.110s CPU time.
Nov 24 20:19:10 compute-0 podman[268031]: 2025-11-24 20:19:10.293566965 +0000 UTC m=+0.043827758 container died a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2865201d0e7c0a2b8266ad57b0c03a5804e58f39c1adf3c38b2e16c91e53486-merged.mount: Deactivated successfully.
Nov 24 20:19:10 compute-0 podman[268031]: 2025-11-24 20:19:10.383314753 +0000 UTC m=+0.133575496 container remove a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:19:10 compute-0 ceph-osd[89640]: osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:10.390+0000 7f1a67169640 -1 osd.1 127 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:10 compute-0 systemd[1]: libpod-conmon-a296a954143dc92952a90229af04a863f50968407be0c59a7f8242c76cab7bde.scope: Deactivated successfully.
Nov 24 20:19:10 compute-0 sudo[267815]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:10 compute-0 sudo[268047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:10 compute-0 sudo[268047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:10 compute-0 sudo[268047]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:10 compute-0 sudo[268072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:19:10 compute-0 sudo[268072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:10 compute-0 sudo[268072]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:10 compute-0 sudo[268097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:10 compute-0 sudo[268097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:10 compute-0 sudo[268097]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:10 compute-0 sudo[268122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:19:10 compute-0 sudo[268122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 24 20:19:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 24 20:19:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:11.086+0000 7f2ca3ee7640 -1 osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:11 compute-0 ceph-osd[88624]: osd.0 127 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 24 20:19:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.380514028 +0000 UTC m=+0.067466601 container create 129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:19:11 compute-0 ceph-osd[89640]: osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:11.425+0000 7f1a67169640 -1 osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:11 compute-0 systemd[1]: Started libpod-conmon-129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c.scope.
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.353706935 +0000 UTC m=+0.040659578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:11 compute-0 nova_compute[257476]: 2025-11-24 20:19:11.483 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:11 compute-0 nova_compute[257476]: 2025-11-24 20:19:11.484 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:11 compute-0 nova_compute[257476]: 2025-11-24 20:19:11.484 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.486636314 +0000 UTC m=+0.173588977 container init 129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.499113512 +0000 UTC m=+0.186066115 container start 129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.502889198 +0000 UTC m=+0.189841801 container attach 129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moser, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:19:11 compute-0 cool_moser[268201]: 167 167
Nov 24 20:19:11 compute-0 systemd[1]: libpod-129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c.scope: Deactivated successfully.
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.509309302 +0000 UTC m=+0.196261865 container died 129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moser, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a08971c944e8252c23cf54fee88f4a0b6a77393ad4c97bc700c32a1d56fd873-merged.mount: Deactivated successfully.
Nov 24 20:19:11 compute-0 podman[268185]: 2025-11-24 20:19:11.563686289 +0000 UTC m=+0.250638882 container remove 129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:19:11 compute-0 systemd[1]: libpod-conmon-129ad7ba0eb671d4ca05a0e4aeefeda29adddf0e0d4019009ce6a1324edaa21c.scope: Deactivated successfully.
Nov 24 20:19:11 compute-0 podman[268224]: 2025-11-24 20:19:11.827615778 +0000 UTC m=+0.068414205 container create 69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:19:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1667 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:11 compute-0 systemd[1]: Started libpod-conmon-69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9.scope.
Nov 24 20:19:11 compute-0 podman[268224]: 2025-11-24 20:19:11.810080041 +0000 UTC m=+0.050878488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/218f6b89a82c9a4d4c94facbcb76706cacb488e9233cac143267b8f101c68477/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/218f6b89a82c9a4d4c94facbcb76706cacb488e9233cac143267b8f101c68477/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/218f6b89a82c9a4d4c94facbcb76706cacb488e9233cac143267b8f101c68477/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/218f6b89a82c9a4d4c94facbcb76706cacb488e9233cac143267b8f101c68477/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:11 compute-0 podman[268224]: 2025-11-24 20:19:11.93124423 +0000 UTC m=+0.172042757 container init 69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:19:11 compute-0 podman[268224]: 2025-11-24 20:19:11.940491835 +0000 UTC m=+0.181290302 container start 69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lehmann, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:19:11 compute-0 podman[268224]: 2025-11-24 20:19:11.945726359 +0000 UTC m=+0.186524796 container attach 69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lehmann, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 24 20:19:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:12.061+0000 7f2ca3ee7640 -1 osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:12 compute-0 ceph-osd[88624]: osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:12 compute-0 ceph-mon[75677]: osdmap e128: 3 total, 3 up, 3 in
Nov 24 20:19:12 compute-0 ceph-mon[75677]: pgmap v1072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Nov 24 20:19:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1667 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
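The SLOW_OPS line above is a cluster health check, not just a log message, so it can be polled in structured form. A sketch, assuming the "checks" layout that recent Ceph releases emit for health JSON:

    import json
    import subprocess

    out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                         capture_output=True, check=True, text=True)
    for name, check in json.loads(out.stdout).get("checks", {}).items():
        print(name, check["summary"]["message"])
    # e.g. SLOW_OPS 20 slow ops, oldest one blocked for 1667 sec, ...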
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.147 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.148 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.196 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.196 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.196 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.206 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:19:12 compute-0 nova_compute[257476]: 2025-11-24 20:19:12.207 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:19:12 compute-0 ceph-osd[89640]: osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:12.386+0000 7f1a67169640 -1 osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]: {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:     "0": [
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:         {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "devices": [
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "/dev/loop3"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             ],
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_name": "ceph_lv0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_size": "21470642176",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "name": "ceph_lv0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "tags": {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cluster_name": "ceph",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.crush_device_class": "",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.encrypted": "0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osd_id": "0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.type": "block",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.vdo": "0"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             },
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "type": "block",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "vg_name": "ceph_vg0"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:         }
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:     ],
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:     "1": [
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:         {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "devices": [
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "/dev/loop4"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             ],
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_name": "ceph_lv1",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_size": "21470642176",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "name": "ceph_lv1",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "tags": {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cluster_name": "ceph",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.crush_device_class": "",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.encrypted": "0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osd_id": "1",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.type": "block",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.vdo": "0"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             },
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "type": "block",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "vg_name": "ceph_vg1"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:         }
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:     ],
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:     "2": [
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:         {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "devices": [
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "/dev/loop5"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             ],
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_name": "ceph_lv2",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_size": "21470642176",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "name": "ceph_lv2",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "tags": {
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.cluster_name": "ceph",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.crush_device_class": "",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.encrypted": "0",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osd_id": "2",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.type": "block",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:                 "ceph.vdo": "0"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             },
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "type": "block",
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:             "vg_name": "ceph_vg2"
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:         }
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]:     ]
Nov 24 20:19:12 compute-0 wizardly_lehmann[268240]: }
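The wizardly_lehmann blob above is `ceph-volume lvm list --format json`: a map of OSD id to a list of LV records. Assuming the blob has been captured to a file (cephadm itself reads it from the container's stdout), an OSD-to-device summary falls out directly:

    import json

    # lvm_list.json: hypothetical capture of the JSON printed above.
    lvm_list = json.load(open("lvm_list.json"))
    for osd_id, records in sorted(lvm_list.items()):
        for rec in records:
            print(f"osd.{osd_id}: {rec['lv_path']} on "
                  f"{','.join(rec['devices'])}, "
                  f"fsid {rec['tags']['ceph.osd_fsid']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3, fsid ca6a1aee-...

The flat lv_tags string carries the same data as the tags object and splits back into it with dict(kv.split("=", 1) for kv in rec["lv_tags"].split(",")).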
Nov 24 20:19:12 compute-0 systemd[1]: libpod-69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9.scope: Deactivated successfully.
Nov 24 20:19:12 compute-0 podman[268249]: 2025-11-24 20:19:12.819369954 +0000 UTC m=+0.051276668 container died 69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-218f6b89a82c9a4d4c94facbcb76706cacb488e9233cac143267b8f101c68477-merged.mount: Deactivated successfully.
Nov 24 20:19:12 compute-0 podman[268249]: 2025-11-24 20:19:12.896414728 +0000 UTC m=+0.128321422 container remove 69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lehmann, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:19:12 compute-0 systemd[1]: libpod-conmon-69b23389a4270eb8ef1a3c8a6adfbcac5fe92a4234e62c3e6ea64f4045fa60e9.scope: Deactivated successfully.
Nov 24 20:19:12 compute-0 sudo[268122]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:13.017+0000 7f2ca3ee7640 -1 osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:13 compute-0 ceph-osd[88624]: osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:13 compute-0 sudo[268264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:13 compute-0 sudo[268264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:13 compute-0 sudo[268264]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:13 compute-0 sudo[268290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:19:13 compute-0 sudo[268290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:13 compute-0 sudo[268290]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Nov 24 20:19:13 compute-0 sudo[268334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:13 compute-0 podman[268288]: 2025-11-24 20:19:13.357264766 +0000 UTC m=+0.231763539 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:19:13 compute-0 sudo[268334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:13 compute-0 sudo[268334]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:13 compute-0 ceph-osd[89640]: osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:13.388+0000 7f1a67169640 -1 osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:13 compute-0 sudo[268365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
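This sudo line shows the shape of every cephadm call in this stretch of the log: the fsid-pinned copy of cephadm wraps ceph-volume in the pinned container image, and everything after `--` is handed to ceph-volume unchanged. Rebuilt as a subprocess call, using only the values from the line above:

    import subprocess

    fsid = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    image = ("quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f"
             "1336506074267a4b47c1bd914a00fec0")
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm.31206ab20142c8051b6384b731"
               "ef7ef7af2407447fac35b7291e90720452ed8d")
    cmd = ["sudo", "/bin/python3", cephadm,
           "--image", image, "--timeout", "895",
           "ceph-volume", "--fsid", fsid,
           "--", "raw", "list", "--format", "json"]
    raw = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout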
Nov 24 20:19:13 compute-0 sudo[268365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:13 compute-0 podman[268431]: 2025-11-24 20:19:13.922350436 +0000 UTC m=+0.077971000 container create 52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 24 20:19:13 compute-0 systemd[1]: Started libpod-conmon-52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93.scope.
Nov 24 20:19:13 compute-0 podman[268431]: 2025-11-24 20:19:13.89232682 +0000 UTC m=+0.047947454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:14 compute-0 podman[268431]: 2025-11-24 20:19:14.039289967 +0000 UTC m=+0.194910591 container init 52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:19:14 compute-0 podman[268431]: 2025-11-24 20:19:14.050154784 +0000 UTC m=+0.205775348 container start 52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:19:14 compute-0 podman[268431]: 2025-11-24 20:19:14.055018268 +0000 UTC m=+0.210638892 container attach 52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:19:14 compute-0 pensive_cartwright[268447]: 167 167
Nov 24 20:19:14 compute-0 systemd[1]: libpod-52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93.scope: Deactivated successfully.
Nov 24 20:19:14 compute-0 podman[268431]: 2025-11-24 20:19:14.060167879 +0000 UTC m=+0.215788443 container died 52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:19:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:14.059+0000 7f2ca3ee7640 -1 osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:14 compute-0 ceph-osd[88624]: osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-77221ec0cdf5c746d2cfceaff8d65cd2c21fda76a2b57b3aa3271d1ff6239cec-merged.mount: Deactivated successfully.
Nov 24 20:19:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:14 compute-0 ceph-mon[75677]: pgmap v1073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Nov 24 20:19:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:14 compute-0 podman[268431]: 2025-11-24 20:19:14.121861672 +0000 UTC m=+0.277482216 container remove 52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:19:14 compute-0 systemd[1]: libpod-conmon-52c8c674ed18263830044ce229ad2fa5d49a766096a4aa1664caaa19089c8b93.scope: Deactivated successfully.
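The podman lines above trace one complete lifecycle for the short-lived cephadm helper container 52c8c674ed18...: create, image pull, init, start, attach, died, remove, bracketed by the matching libpod/conmon scopes. A sketch of recovering that sequence from journal text like this (the event names come from the lines above; the parsing itself is illustrative):

    import re

    # Podman logs lifecycle events as "... container <event> <64-hex id> ...".
    EVENT_RE = re.compile(r"container (\w+) ([0-9a-f]{64})")

    def lifecycles(lines):
        events = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events.setdefault(m.group(2), []).append(m.group(1))
        return events

    # For the helper container above this yields:
    # ['create', 'init', 'start', 'attach', 'died', 'remove']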
Nov 24 20:19:14 compute-0 ceph-osd[89640]: osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:14.339+0000 7f1a67169640 -1 osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:14 compute-0 podman[268474]: 2025-11-24 20:19:14.388734466 +0000 UTC m=+0.071285628 container create 520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:19:14 compute-0 systemd[1]: Started libpod-conmon-520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6.scope.
Nov 24 20:19:14 compute-0 podman[268474]: 2025-11-24 20:19:14.362144989 +0000 UTC m=+0.044696221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:19:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15fa3c0517c5dfa397402c98d3e2e1ad64cef7e5b9e36d6c4254cb02f4e7ceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15fa3c0517c5dfa397402c98d3e2e1ad64cef7e5b9e36d6c4254cb02f4e7ceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15fa3c0517c5dfa397402c98d3e2e1ad64cef7e5b9e36d6c4254cb02f4e7ceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15fa3c0517c5dfa397402c98d3e2e1ad64cef7e5b9e36d6c4254cb02f4e7ceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:19:14 compute-0 podman[268474]: 2025-11-24 20:19:14.498288869 +0000 UTC m=+0.180840101 container init 520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:19:14 compute-0 podman[268474]: 2025-11-24 20:19:14.514818501 +0000 UTC m=+0.197369663 container start 520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:19:14 compute-0 podman[268474]: 2025-11-24 20:19:14.519423769 +0000 UTC m=+0.201974951 container attach 520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:19:15 compute-0 ceph-osd[88624]: osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:15.087+0000 7f2ca3ee7640 -1 osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 55 op/s
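The pgmap lines repeated through this window all share one shape: "pgmap vN: <pg-state counts>; <data/used/avail>; <client throughput>". A small parser sketch for the state-count section, tailored to the lines in this log:

    import re

    # Matches e.g. "305 pgs: 2 active+clean+laggy, 303 active+clean;"
    PGMAP_RE = re.compile(r"(\d+) pgs: ([^;]+);")

    def pg_states(line: str) -> dict:
        m = PGMAP_RE.search(line)
        if not m:
            return {}
        states = {}
        for chunk in m.group(2).split(","):
            count, state = chunk.strip().split(" ", 1)
            states[state] = int(count)
        return states

    # For the line above:
    # {'active+clean+laggy': 2, 'active+clean': 303}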
Nov 24 20:19:15 compute-0 ceph-osd[89640]: osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:15.308+0000 7f1a67169640 -1 osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:15 compute-0 musing_hoover[268490]: {
Nov 24 20:19:15 compute-0 musing_hoover[268490]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "osd_id": 2,
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "type": "bluestore"
Nov 24 20:19:15 compute-0 musing_hoover[268490]:     },
Nov 24 20:19:15 compute-0 musing_hoover[268490]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "osd_id": 1,
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "type": "bluestore"
Nov 24 20:19:15 compute-0 musing_hoover[268490]:     },
Nov 24 20:19:15 compute-0 musing_hoover[268490]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "osd_id": 0,
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:19:15 compute-0 musing_hoover[268490]:         "type": "bluestore"
Nov 24 20:19:15 compute-0 musing_hoover[268490]:     }
Nov 24 20:19:15 compute-0 musing_hoover[268490]: }
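The JSON block printed by musing_hoover is the result of the "ceph-volume ... raw list --format json" call dispatched via sudo at 20:19:13: a map keyed by OSD uuid, one entry per bluestore device. A minimal sketch of reducing it to an osd_id -> device table (field names taken directly from the output above):

    import json

    def osd_devices(raw_list: str) -> dict:
        """Map osd_id -> device from `ceph-volume raw list --format json`."""
        return {
            osd["osd_id"]: osd["device"]
            for osd in json.loads(raw_list).values()
        }

    # For the output above:
    # {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  0: '/dev/mapper/ceph_vg0-ceph_lv0'}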
Nov 24 20:19:15 compute-0 systemd[1]: libpod-520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6.scope: Deactivated successfully.
Nov 24 20:19:15 compute-0 systemd[1]: libpod-520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6.scope: Consumed 1.237s CPU time.
Nov 24 20:19:15 compute-0 podman[268474]: 2025-11-24 20:19:15.739545556 +0000 UTC m=+1.422096738 container died 520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hoover, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e15fa3c0517c5dfa397402c98d3e2e1ad64cef7e5b9e36d6c4254cb02f4e7ceb-merged.mount: Deactivated successfully.
Nov 24 20:19:15 compute-0 podman[268474]: 2025-11-24 20:19:15.805457727 +0000 UTC m=+1.488008879 container remove 520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:19:15 compute-0 systemd[1]: libpod-conmon-520f28ae95704ba25d5061c090c580273060909fee6478777d9bed0141e655e6.scope: Deactivated successfully.
Nov 24 20:19:15 compute-0 sudo[268365]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:19:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:19:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 74a9ecae-7a27-45d9-b50d-7236c4d50b9f does not exist
Nov 24 20:19:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5049be49-609e-4312-9a0f-462e4f9b3396 does not exist
Nov 24 20:19:15 compute-0 sudo[268534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:19:15 compute-0 sudo[268534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:15 compute-0 sudo[268534]: pam_unix(sudo:session): session closed for user root
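The paired sudo sessions running /bin/true above are consistent with cephadm probing that ceph-admin can still escalate to root without a password before it dispatches real work. A sketch of the same probe (sudo's -n flag is real and means non-interactive; the wrapper is illustrative):

    import subprocess

    # "sudo -n true" fails immediately instead of prompting when a
    # password would be required, so exit status 0 means passwordless
    # sudo is working for the invoking account.
    ok = subprocess.run(["sudo", "-n", "true"]).returncode == 0
    print("passwordless sudo:", ok)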
Nov 24 20:19:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:16.056+0000 7f2ca3ee7640 -1 osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:16 compute-0 ceph-osd[88624]: osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:16 compute-0 sudo[268559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:19:16 compute-0 sudo[268559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:19:16 compute-0 sudo[268559]: pam_unix(sudo:session): session closed for user root
Nov 24 20:19:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:16 compute-0 ceph-mon[75677]: pgmap v1074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Nov 24 20:19:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:19:16 compute-0 ceph-osd[89640]: osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:16.334+0000 7f1a67169640 -1 osd.1 128 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:19:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3868223140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:19:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:19:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3868223140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:19:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1672 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
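Here the mon folds the per-OSD reports into the cluster-level SLOW_OPS health check: 20 slow ops total, matching the 1 on osd.0 (pool vms) plus the 19 on osd.1 (pool default.rgw.log). A sketch of reading that check programmatically; "ceph health detail -f json" is a real command, but treat the exact key layout ("checks" / "summary" / "message") as an assumption from recent Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "health", "detail", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    slow = json.loads(out).get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "20 slow ops, oldest one blocked for 1672 sec, ..."
        print(slow["summary"]["message"])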
Nov 24 20:19:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 24 20:19:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 24 20:19:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 24 20:19:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:17.069+0000 7f2ca3ee7640 -1 osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:17 compute-0 ceph-osd[88624]: osd.0 128 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3868223140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:19:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3868223140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:19:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1672 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:17 compute-0 ceph-mon[75677]: osdmap e129: 3 total, 3 up, 3 in
Nov 24 20:19:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.1 KiB/s wr, 60 op/s
Nov 24 20:19:17 compute-0 ceph-osd[89640]: osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:17.324+0000 7f1a67169640 -1 osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:18.053+0000 7f2ca3ee7640 -1 osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:18 compute-0 ceph-osd[88624]: osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:18 compute-0 ceph-mon[75677]: pgmap v1076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.1 KiB/s wr, 60 op/s
Nov 24 20:19:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:18 compute-0 ceph-osd[89640]: osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:18.323+0000 7f1a67169640 -1 osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:19.068+0000 7f2ca3ee7640 -1 osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:19 compute-0 ceph-osd[88624]: osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 24 20:19:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:19 compute-0 ceph-osd[89640]: osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:19.371+0000 7f1a67169640 -1 osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:20.033+0000 7f2ca3ee7640 -1 osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:20 compute-0 ceph-osd[88624]: osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:20 compute-0 ceph-mon[75677]: pgmap v1077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Nov 24 20:19:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:20 compute-0 ceph-osd[89640]: osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:20.361+0000 7f1a67169640 -1 osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:21.062+0000 7f2ca3ee7640 -1 osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:21 compute-0 ceph-osd[88624]: osd.0 129 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Nov 24 20:19:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:21 compute-0 ceph-osd[89640]: osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:21.380+0000 7f1a67169640 -1 osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1682 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 24 20:19:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 24 20:19:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 24 20:19:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:22.016+0000 7f2ca3ee7640 -1 osd.0 130 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:22 compute-0 ceph-osd[88624]: osd.0 130 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:22 compute-0 ceph-mon[75677]: pgmap v1078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Nov 24 20:19:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1682 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:22 compute-0 ceph-mon[75677]: osdmap e130: 3 total, 3 up, 3 in
Nov 24 20:19:22 compute-0 ceph-osd[89640]: osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:22.395+0000 7f1a67169640 -1 osd.1 129 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:23.043+0000 7f2ca3ee7640 -1 osd.0 130 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:23 compute-0 ceph-osd[88624]: osd.0 130 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 6 op/s
Nov 24 20:19:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 24 20:19:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:23 compute-0 ceph-mon[75677]: pgmap v1080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 6 op/s
Nov 24 20:19:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 24 20:19:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 24 20:19:23 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:23.419+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:24.042+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:24 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:24 compute-0 ceph-mon[75677]: osdmap e131: 3 total, 3 up, 3 in
Nov 24 20:19:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:19:24
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes']
Nov 24 20:19:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
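The balancer pass above ran in upmap mode with max misplaced 0.05, walked all eleven pools, and prepared 0/10 changes, i.e. the PG distribution needed no new upmaps. A toy illustration of the gate implied by that threshold (the 0.05 comes from the log; the function is illustrative, not mgr code):

    MAX_MISPLACED = 0.05  # "max misplaced 0.050000" from the log above

    def may_optimize(misplaced_pgs: int, total_pgs: int) -> bool:
        """Hold off on a new plan while too many PGs are in flight."""
        return (misplaced_pgs / total_pgs) <= MAX_MISPLACED

    # With the 305 PGs in this cluster, up to 15 may be misplaced
    # before the balancer waits (305 * 0.05 = 15.25).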
Nov 24 20:19:24 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:24.456+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:25.063+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:25 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 127 B/s wr, 4 op/s
Nov 24 20:19:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:25 compute-0 ceph-mon[75677]: pgmap v1082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 127 B/s wr, 4 op/s
Nov 24 20:19:25 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:25.504+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:26.107+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:26 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:26 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:26.489+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:27.131+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:27 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Nov 24 20:19:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1687 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:27 compute-0 ceph-mon[75677]: pgmap v1083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Nov 24 20:19:27 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:27.497+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:28.155+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:28 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:28 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1687 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:28 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:28.506+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:28 compute-0 podman[268584]: 2025-11-24 20:19:28.881048118 +0000 UTC m=+0.104010368 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:19:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:29.115+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:29 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Nov 24 20:19:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:29 compute-0 ceph-mon[75677]: pgmap v1084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Nov 24 20:19:29 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:29.484+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:30.162+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:30 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:30 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:30.490+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:31.152+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:31 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 24 op/s
Nov 24 20:19:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:31 compute-0 ceph-mon[75677]: pgmap v1085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 24 op/s
Nov 24 20:19:31 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:31.512+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:32.135+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:32 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:32 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:32.546+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:33.088+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:33 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 22 op/s
Nov 24 20:19:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:33 compute-0 ceph-mon[75677]: pgmap v1086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 22 op/s
Nov 24 20:19:33 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:33.552+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:34.038+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:34 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:34 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:34.531+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:19:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:19:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:35.040+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:35 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Nov 24 20:19:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:35 compute-0 ceph-mon[75677]: pgmap v1087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Nov 24 20:19:35 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:35.522+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:36.032+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:36 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:36 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:36.475+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1692 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:37.072+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:37 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Nov 24 20:19:37 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:37.468+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1692 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:37 compute-0 ceph-mon[75677]: pgmap v1088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Nov 24 20:19:38 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:38.059+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:38 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:38.466+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:39.091+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:39 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:39 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:39.428+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:39 compute-0 ceph-mon[75677]: pgmap v1089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:39 compute-0 podman[268604]: 2025-11-24 20:19:39.874005048 +0000 UTC m=+0.088746019 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:19:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:40.087+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:40 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:19:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:19:40 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:40.441+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:19:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:19:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:19:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:41.109+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:41 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:41 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:41.463+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:41 compute-0 ceph-mon[75677]: pgmap v1090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1702 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:42.080+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:42 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:42 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:42.420+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1702 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:43.031+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:43 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:43.418+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:43 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:43 compute-0 ceph-mon[75677]: pgmap v1091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:43 compute-0 podman[268624]: 2025-11-24 20:19:43.951087591 +0000 UTC m=+0.178710137 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 20:19:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:44.068+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:44 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:44.452+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:44 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:45.080+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:45 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:45.480+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:45 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:45 compute-0 ceph-mon[75677]: pgmap v1092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:46.032+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:46 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:46.450+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:46 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:47.081+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:47 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:47.435+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:47 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1707 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:47 compute-0 ceph-mon[75677]: pgmap v1093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:48.051+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:48 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:48.479+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:48 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:48 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1707 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:49.060+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:49 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:49.463+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:49 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:49 compute-0 ceph-mon[75677]: pgmap v1094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:50.058+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:50 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:50.456+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:50 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:51.012+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:51 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:51.425+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:51 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:51 compute-0 ceph-mon[75677]: pgmap v1095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:51.991+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:51 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:52.428+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:52 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:53.010+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:53 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:53.419+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:53 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:53 compute-0 ceph-mon[75677]: pgmap v1096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:53.974+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:53 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:19:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:19:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:19:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:19:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:19:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:19:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:54.428+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:54 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:54.949+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:54 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:55.415+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:55 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:55 compute-0 ceph-mon[75677]: pgmap v1097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:55.942+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:55 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:56.438+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:56 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1712 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.883061) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015596883117, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2162, "num_deletes": 259, "total_data_size": 2706738, "memory_usage": 2753400, "flush_reason": "Manual Compaction"}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015596904034, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2620666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28277, "largest_seqno": 30438, "table_properties": {"data_size": 2611174, "index_size": 5474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 25341, "raw_average_key_size": 21, "raw_value_size": 2589639, "raw_average_value_size": 2242, "num_data_blocks": 242, "num_entries": 1155, "num_filter_entries": 1155, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015458, "oldest_key_time": 1764015458, "file_creation_time": 1764015596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21043 microseconds, and 11258 cpu microseconds.
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.904104) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2620666 bytes OK
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.904132) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.905879) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.905899) EVENT_LOG_v1 {"time_micros": 1764015596905893, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.905924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2696865, prev total WAL file size 2696865, number of live WAL files 2.
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.907292) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303039' seq:72057594037927935, type:22 .. '6C6F676D0031323631' seq:0, type:0; will stop at (end)
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2559KB)], [59(7498KB)]
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015596907374, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10299445, "oldest_snapshot_seqno": -1}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 8251 keys, 10104590 bytes, temperature: kUnknown
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015596950101, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10104590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10053766, "index_size": 29080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 218950, "raw_average_key_size": 26, "raw_value_size": 9906997, "raw_average_value_size": 1200, "num_data_blocks": 1150, "num_entries": 8251, "num_filter_entries": 8251, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.950510) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10104590 bytes
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.952165) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 240.4 rd, 235.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(7.8) write-amplify(3.9) OK, records in: 8782, records dropped: 531 output_compression: NoCompression
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.952205) EVENT_LOG_v1 {"time_micros": 1764015596952186, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42846, "compaction_time_cpu_micros": 21914, "output_level": 6, "num_output_files": 1, "total_output_size": 10104590, "num_input_records": 8782, "num_output_records": 8251, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015596953512, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015596957174, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.907143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.957256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.957264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.957267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.957270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:19:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:19:56.957273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:19:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:56.963+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:56 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:57.424+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:57 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:57 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1712 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:19:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:57 compute-0 ceph-mon[75677]: pgmap v1098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:57.964+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:57 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:58.443+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:58 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:58.915+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:58 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:19:59.483+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:59 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:19:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:59 compute-0 podman[268650]: 2025-11-24 20:19:59.859270639 +0000 UTC m=+0.085391249 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 20:19:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:19:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:19:59 compute-0 ceph-mon[75677]: pgmap v1099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:19:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:19:59.924+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:59 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:19:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:00.457+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:00 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:00.935+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:00 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:01.456+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:01 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1717 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:01 compute-0 ceph-mon[75677]: pgmap v1100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:01 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1717 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:01.941+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:01 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:02.429+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:02 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:02.962+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:02 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:03.467+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:03 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:03.922+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:03 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:03 compute-0 ceph-mon[75677]: pgmap v1101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:04.468+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:04 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:04.896+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:04 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:05.497+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:05 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:05.935+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:05 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:05 compute-0 ceph-mon[75677]: pgmap v1102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:06.496+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:06 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:06.947+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:06 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:06 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:06 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:07 compute-0 nova_compute[257476]: 2025-11-24 20:20:07.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:07.474+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:07 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:07.957+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:07 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:07 compute-0 ceph-mon[75677]: pgmap v1103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:08.447+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:08 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:09.004+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:09 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.181 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.181 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.182 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.182 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.183 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:20:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:20:09.373 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:20:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:20:09.373 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:20:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:20:09.374 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:20:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:09.408+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:09 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:20:09 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/813564338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.724 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.953 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.955 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5180MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.955 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:20:09 compute-0 nova_compute[257476]: 2025-11-24 20:20:09.955 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:20:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:09 compute-0 ceph-mon[75677]: pgmap v1104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/813564338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.008 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.009 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.033 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:20:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:10.048+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:10 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:10.441+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:10 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:20:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458220027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.531 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.540 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.568 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.571 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:20:10 compute-0 nova_compute[257476]: 2025-11-24 20:20:10.572 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
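[annotation] The block above is one full pass of nova's update_available_resource periodic task: it takes the compute_resources lock, shells out to `ceph df --format=json` for the RBD-backed disk capacity, and pushes the resulting inventory to placement. A minimal sketch of that capacity probe, assuming the same client id and conf path shown in the logged CMD; the helper name and the printed field are illustrative, not nova's code:

    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        # Same invocation as the logged CMD:
        #   ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf])
        return json.loads(out)

    stats = ceph_df()
    # "stats" carries cluster-wide byte totals in recent Ceph releases
    # (total_bytes / total_used_bytes / total_avail_bytes).
    print(stats["stats"]["total_avail_bytes"] / 2**30, "GiB available")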
Nov 24 20:20:10 compute-0 podman[268712]: 2025-11-24 20:20:10.872783639 +0000 UTC m=+0.096747192 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 20:20:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:11 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1458220027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:20:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:11.006+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:11 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:11.404+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:11 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:11 compute-0 nova_compute[257476]: 2025-11-24 20:20:11.573 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:11 compute-0 nova_compute[257476]: 2025-11-24 20:20:11.573 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:11 compute-0 nova_compute[257476]: 2025-11-24 20:20:11.573 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:11.965+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:11 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:12 compute-0 ceph-mon[75677]: pgmap v1105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
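[annotation] The repeating SLOW_OPS health check above (20 slow ops across osd.0 and osd.1, oldest blocked for ~1727 s) is what `ceph health detail` reports while those ops stay queued. A sketch that pulls the same check out programmatically, reusing the client.openstack credentials seen in the audit log (purely illustrative):

    import json
    import subprocess

    health = json.loads(subprocess.check_output(
        ["ceph", "health", "detail", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "20 slow ops, oldest one blocked for 1727 sec, ..."
        print(slow["summary"]["message"])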
Nov 24 20:20:12 compute-0 nova_compute[257476]: 2025-11-24 20:20:12.146 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:12 compute-0 nova_compute[257476]: 2025-11-24 20:20:12.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:12 compute-0 nova_compute[257476]: 2025-11-24 20:20:12.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:12 compute-0 nova_compute[257476]: 2025-11-24 20:20:12.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:20:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:12.369+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:12 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:13.006+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:13 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:13.323+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:13 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:13.962+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:13 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:14 compute-0 ceph-mon[75677]: pgmap v1106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.039514) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015614039760, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 466, "num_deletes": 251, "total_data_size": 311944, "memory_usage": 321848, "flush_reason": "Manual Compaction"}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015614044576, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 307452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30439, "largest_seqno": 30904, "table_properties": {"data_size": 304794, "index_size": 630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7046, "raw_average_key_size": 19, "raw_value_size": 299333, "raw_average_value_size": 833, "num_data_blocks": 28, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015597, "oldest_key_time": 1764015597, "file_creation_time": 1764015614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 5254 microseconds, and 2550 cpu microseconds.
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.044767) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 307452 bytes OK
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.044841) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.047301) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.047324) EVENT_LOG_v1 {"time_micros": 1764015614047316, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.047345) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 309055, prev total WAL file size 309055, number of live WAL files 2.
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.048328) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(300KB)], [62(9867KB)]
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015614048366, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10412042, "oldest_snapshot_seqno": -1}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 8101 keys, 8981476 bytes, temperature: kUnknown
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015614093973, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8981476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8932792, "index_size": 27324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 216777, "raw_average_key_size": 26, "raw_value_size": 8789633, "raw_average_value_size": 1085, "num_data_blocks": 1069, "num_entries": 8101, "num_filter_entries": 8101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.094358) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8981476 bytes
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.095743) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.7 rd, 196.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.6 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(63.1) write-amplify(29.2) OK, records in: 8610, records dropped: 509 output_compression: NoCompression
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.095773) EVENT_LOG_v1 {"time_micros": 1764015614095759, "job": 34, "event": "compaction_finished", "compaction_time_micros": 45722, "compaction_time_cpu_micros": 21676, "output_level": 6, "num_output_files": 1, "total_output_size": 8981476, "num_input_records": 8610, "num_output_records": 8101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015614096055, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015614099436, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.048229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.099571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.099582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.099714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.099718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:20:14 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:20:14.099722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
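[annotation] The mon's store.db activity above is one complete rocksdb flush-plus-manual-compaction cycle: JOB 33 flushes the memtable to L0 table #64, JOB 34 compacts it with #62 into L6 table #65, then both inputs are deleted. The machine-readable part is the EVENT_LOG_v1 JSON embedded in those lines; a small parser for it, with the log path given as an assumed example:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        # Yields the JSON payload of every EVENT_LOG_v1 record.
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # Example: summarise compaction_finished events (fields as in JOB 34 above).
    with open("/var/log/ceph/ceph-mon.compute-0.log") as fh:  # path illustrative
        for ev in rocksdb_events(fh):
            if ev.get("event") == "compaction_finished":
                print(ev["job"], ev["total_output_size"], ev["compaction_time_micros"])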
Nov 24 20:20:14 compute-0 nova_compute[257476]: 2025-11-24 20:20:14.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:20:14 compute-0 nova_compute[257476]: 2025-11-24 20:20:14.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:20:14 compute-0 nova_compute[257476]: 2025-11-24 20:20:14.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:20:14 compute-0 nova_compute[257476]: 2025-11-24 20:20:14.171 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:20:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:14.293+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:14 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:14 compute-0 podman[268732]: 2025-11-24 20:20:14.910931092 +0000 UTC m=+0.136032266 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
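[annotation] The health_status entries above (multipathd earlier, ovn_controller here, both healthy with a zero failing streak) come from podman's built-in healthcheck timers running the mounted /openstack/healthcheck script. The same state can be read back with `podman inspect`; a sketch, with the container name taken from the log line:

    import subprocess

    def container_health(name="ovn_controller"):
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name])
        return out.decode().strip()  # e.g. "healthy"

    print(container_health())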
Nov 24 20:20:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:15.008+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:15 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:15.305+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:15 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:16.011+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:16 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:16 compute-0 ceph-mon[75677]: pgmap v1107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:16 compute-0 sudo[268760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:16 compute-0 sudo[268760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:16 compute-0 sudo[268760]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:16 compute-0 sudo[268785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:20:16 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:16 compute-0 sudo[268785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:16.308+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:16 compute-0 sudo[268785]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:20:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2162938407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:20:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:20:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2162938407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
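[annotation] The paired "df" / "osd pool get-quota" dispatches above are the usual capacity probe a Cinder-style client makes against its pool: cluster free space plus any per-pool quota. A CLI equivalent of the second command, with the pool name copied from the audit entry (illustrative):

    import json
    import subprocess

    def pool_quota(pool="volumes"):
        out = subprocess.check_output(
            ["ceph", "osd", "pool", "get-quota", pool, "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        q = json.loads(out)
        # 0 means "no quota set" for both fields.
        return q.get("quota_max_bytes"), q.get("quota_max_objects")

    print(pool_quota())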
Nov 24 20:20:16 compute-0 sudo[268810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:16 compute-0 sudo[268810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:16 compute-0 sudo[268810]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:16 compute-0 sudo[268835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:20:16 compute-0 sudo[268835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:16.977+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:16 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2162938407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2162938407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:17 compute-0 sudo[268835]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:20:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4efab669-82cb-427f-9201-b4f7899ced2c does not exist
Nov 24 20:20:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ea81ad0c-7cd6-474f-a4a0-fc2369a92fe2 does not exist
Nov 24 20:20:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 328f4b33-2134-401a-8002-5a3f174026ee does not exist
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:20:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:20:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
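[annotation] The burst of mgr-issued mon_commands above is cephadm's OSD-deploy preamble: drop the per-host osd_memory_target override, fetch the admin and bootstrap-osd keyrings, check for destroyed OSDs in the tree, and render a minimal ceph.conf for the new daemon. Direct CLI equivalents of the first and last steps, run as an admin client (illustrative):

    import subprocess

    subprocess.run(
        ["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"],
        check=True)
    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"])
    print(minimal_conf.decode())  # [global] section with fsid and mon_host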
Nov 24 20:20:17 compute-0 sudo[268892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:17 compute-0 sudo[268892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:17 compute-0 sudo[268892]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:17.329+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:17 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:17 compute-0 sudo[268917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:20:17 compute-0 sudo[268917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:17 compute-0 sudo[268917]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:17 compute-0 sudo[268942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:17 compute-0 sudo[268942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:17 compute-0 sudo[268942]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:17 compute-0 sudo[268967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
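[annotation] The sudo line above is cephadm preparing the three pre-created LVs as OSDs: it wraps ceph-volume in the ceph container image and feeds the config over stdin (--config-json -). Stripped of the container plumbing, the inner call reduces to the following, where running ceph-volume directly on the host is an assumption for illustration and the LV paths are copied from the log:

    import subprocess

    devs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
            "/dev/ceph_vg2/ceph_lv2"]
    # --no-systemd because cephadm manages the systemd units itself.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *devs,
         "--yes", "--no-systemd"],
        check=True)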
Nov 24 20:20:17 compute-0 sudo[268967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:17.963+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:17 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:20:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:20:18 compute-0 ceph-mon[75677]: pgmap v1108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.071898764 +0000 UTC m=+0.075275508 container create 28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:20:18 compute-0 systemd[1]: Started libpod-conmon-28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f.scope.
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.041861049 +0000 UTC m=+0.045237613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:20:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.182555279 +0000 UTC m=+0.185931833 container init 28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keller, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.195766122 +0000 UTC m=+0.199142666 container start 28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.200574411 +0000 UTC m=+0.203950945 container attach 28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 20:20:18 compute-0 sweet_keller[269050]: 167 167
Nov 24 20:20:18 compute-0 systemd[1]: libpod-28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f.scope: Deactivated successfully.
Nov 24 20:20:18 compute-0 conmon[269050]: conmon 28b6b663b0972c4a3cd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f.scope/container/memory.events
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.208037971 +0000 UTC m=+0.211414525 container died 28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ba5e5559f70ee29acf82add6dee200e50924679a4dcbb8ca8ecf1a831013dd2-merged.mount: Deactivated successfully.
Nov 24 20:20:18 compute-0 podman[269034]: 2025-11-24 20:20:18.259288934 +0000 UTC m=+0.262665448 container remove 28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:20:18 compute-0 systemd[1]: libpod-conmon-28b6b663b0972c4a3cd8f4c2ba3169cf0a945fa763a5c2c789adcb90f5c80a8f.scope: Deactivated successfully.
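[annotation] The sweet_keller sequence above (create, init, start, attach, "167 167", died, remove, all within roughly 200 ms) is a throwaway container cephadm runs against the ceph image; the "167 167" output looks like the ceph uid/gid probe cephadm performs before laying down OSD data directories. A rough equivalent, with the stat entrypoint and probed path being assumptions about cephadm's internals:

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMG,
         "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # expected "167 167" (ceph uid/gid)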
Nov 24 20:20:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:18.368+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:18 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:18 compute-0 podman[269073]: 2025-11-24 20:20:18.500395553 +0000 UTC m=+0.075419802 container create f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:20:18 compute-0 podman[269073]: 2025-11-24 20:20:18.470847161 +0000 UTC m=+0.045871460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:20:18 compute-0 systemd[1]: Started libpod-conmon-f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d.scope.
Nov 24 20:20:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070a965b387af7ced1552e72223b2a43f067294e38eefb2fc615bdbdf6d3665f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070a965b387af7ced1552e72223b2a43f067294e38eefb2fc615bdbdf6d3665f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070a965b387af7ced1552e72223b2a43f067294e38eefb2fc615bdbdf6d3665f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070a965b387af7ced1552e72223b2a43f067294e38eefb2fc615bdbdf6d3665f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070a965b387af7ced1552e72223b2a43f067294e38eefb2fc615bdbdf6d3665f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:18 compute-0 podman[269073]: 2025-11-24 20:20:18.626898402 +0000 UTC m=+0.201922711 container init f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_davinci, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:20:18 compute-0 podman[269073]: 2025-11-24 20:20:18.641536584 +0000 UTC m=+0.216560813 container start f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 20:20:18 compute-0 podman[269073]: 2025-11-24 20:20:18.646012754 +0000 UTC m=+0.221037063 container attach f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 20:20:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:18.922+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:18 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:19.360+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:19 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:19 compute-0 goofy_davinci[269090]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:20:19 compute-0 goofy_davinci[269090]: --> relative data size: 1.0
Nov 24 20:20:19 compute-0 goofy_davinci[269090]: --> All data devices are unavailable
Nov 24 20:20:19 compute-0 systemd[1]: libpod-f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d.scope: Deactivated successfully.
Nov 24 20:20:19 compute-0 systemd[1]: libpod-f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d.scope: Consumed 1.264s CPU time.
Nov 24 20:20:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:19.971+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:19 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:20 compute-0 podman[269119]: 2025-11-24 20:20:20.00722129 +0000 UTC m=+0.044085272 container died f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-070a965b387af7ced1552e72223b2a43f067294e38eefb2fc615bdbdf6d3665f-merged.mount: Deactivated successfully.
Nov 24 20:20:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:20 compute-0 ceph-mon[75677]: pgmap v1109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:20 compute-0 podman[269119]: 2025-11-24 20:20:20.091165749 +0000 UTC m=+0.128029701 container remove f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:20:20 compute-0 systemd[1]: libpod-conmon-f83c2c2bd60ec04e12acf03b9a760b5a354ade3d73e4d1e49704097948e7e81d.scope: Deactivated successfully.
Nov 24 20:20:20 compute-0 sudo[268967]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:20 compute-0 sudo[269134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:20 compute-0 sudo[269134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:20 compute-0 sudo[269134]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:20.347+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:20 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:20 compute-0 sudo[269159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:20:20 compute-0 sudo[269159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:20 compute-0 sudo[269159]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:20 compute-0 sudo[269184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:20 compute-0 sudo[269184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:20 compute-0 sudo[269184]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:20 compute-0 sudo[269209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:20:20 compute-0 sudo[269209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:20.925+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:20 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.100384355 +0000 UTC m=+0.074703772 container create 435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:20:21 compute-0 systemd[1]: Started libpod-conmon-435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a.scope.
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.070495264 +0000 UTC m=+0.044814721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:20:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.20542658 +0000 UTC m=+0.179746027 container init 435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dijkstra, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.218632613 +0000 UTC m=+0.192952030 container start 435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dijkstra, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.223326349 +0000 UTC m=+0.197645776 container attach 435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dijkstra, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:20:21 compute-0 bold_dijkstra[269292]: 167 167
Nov 24 20:20:21 compute-0 systemd[1]: libpod-435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a.scope: Deactivated successfully.
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.229946716 +0000 UTC m=+0.204266133 container died 435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6f1d28e48a45a60ecbe135a019072fa1adf6dc44c859173c09c9e6642cdb04a-merged.mount: Deactivated successfully.
Nov 24 20:20:21 compute-0 podman[269275]: 2025-11-24 20:20:21.280039208 +0000 UTC m=+0.254358625 container remove 435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dijkstra, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:20:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:21 compute-0 systemd[1]: libpod-conmon-435ad3e25db4fa9ef805e9ac1f1f332eeb7d9547084feb491cb2d1a904c1c37a.scope: Deactivated successfully.
Nov 24 20:20:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:21.312+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:21 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:21 compute-0 podman[269315]: 2025-11-24 20:20:21.535088071 +0000 UTC m=+0.076773988 container create 78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:20:21 compute-0 podman[269315]: 2025-11-24 20:20:21.505670993 +0000 UTC m=+0.047356970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:20:21 compute-0 systemd[1]: Started libpod-conmon-78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b.scope.
Nov 24 20:20:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b6e3a334066d18736ad75fbb89d9daad8d41d87c03f11bfea5084e47fd4b3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b6e3a334066d18736ad75fbb89d9daad8d41d87c03f11bfea5084e47fd4b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b6e3a334066d18736ad75fbb89d9daad8d41d87c03f11bfea5084e47fd4b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b6e3a334066d18736ad75fbb89d9daad8d41d87c03f11bfea5084e47fd4b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:21 compute-0 podman[269315]: 2025-11-24 20:20:21.660033748 +0000 UTC m=+0.201719705 container init 78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:20:21 compute-0 podman[269315]: 2025-11-24 20:20:21.675662037 +0000 UTC m=+0.217347964 container start 78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:20:21 compute-0 podman[269315]: 2025-11-24 20:20:21.68024619 +0000 UTC m=+0.221932167 container attach 78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:20:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1737 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:21.907+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:21 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:22 compute-0 ceph-mon[75677]: pgmap v1110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:22 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1737 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:22.302+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:22 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]: {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:     "0": [
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:         {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "devices": [
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "/dev/loop3"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             ],
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_name": "ceph_lv0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_size": "21470642176",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "name": "ceph_lv0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "tags": {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cluster_name": "ceph",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.crush_device_class": "",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.encrypted": "0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osd_id": "0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.type": "block",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.vdo": "0"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             },
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "type": "block",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "vg_name": "ceph_vg0"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:         }
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:     ],
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:     "1": [
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:         {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "devices": [
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "/dev/loop4"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             ],
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_name": "ceph_lv1",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_size": "21470642176",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "name": "ceph_lv1",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "tags": {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cluster_name": "ceph",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.crush_device_class": "",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.encrypted": "0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osd_id": "1",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.type": "block",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.vdo": "0"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             },
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "type": "block",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "vg_name": "ceph_vg1"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:         }
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:     ],
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:     "2": [
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:         {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "devices": [
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "/dev/loop5"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             ],
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_name": "ceph_lv2",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_size": "21470642176",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "name": "ceph_lv2",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "tags": {
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.cluster_name": "ceph",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.crush_device_class": "",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.encrypted": "0",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osd_id": "2",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.type": "block",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:                 "ceph.vdo": "0"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             },
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "type": "block",
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:             "vg_name": "ceph_vg2"
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:         }
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]:     ]
Nov 24 20:20:22 compute-0 amazing_nightingale[269331]: }
Nov 24 20:20:22 compute-0 systemd[1]: libpod-78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b.scope: Deactivated successfully.
Nov 24 20:20:22 compute-0 podman[269315]: 2025-11-24 20:20:22.491887064 +0000 UTC m=+1.033573011 container died 78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-47b6e3a334066d18736ad75fbb89d9daad8d41d87c03f11bfea5084e47fd4b3c-merged.mount: Deactivated successfully.
Nov 24 20:20:22 compute-0 podman[269315]: 2025-11-24 20:20:22.570996483 +0000 UTC m=+1.112682400 container remove 78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nightingale, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:20:22 compute-0 systemd[1]: libpod-conmon-78bd1c1aab72f1846c1aa12f3fa314483a2bee5e0583e037f97d5b34dcff7e5b.scope: Deactivated successfully.
Nov 24 20:20:22 compute-0 sudo[269209]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:22 compute-0 sudo[269351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:22 compute-0 sudo[269351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:22 compute-0 sudo[269351]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:22 compute-0 sudo[269376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:20:22 compute-0 sudo[269376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:22 compute-0 sudo[269376]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:22.918+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:22 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:22 compute-0 sudo[269401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:22 compute-0 sudo[269401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:22 compute-0 sudo[269401]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:23 compute-0 sudo[269426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:20:23 compute-0 sudo[269426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:23 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:23.334+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.514020257 +0000 UTC m=+0.059942258 container create e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:20:23 compute-0 systemd[1]: Started libpod-conmon-e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd.scope.
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.48653337 +0000 UTC m=+0.032455431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:20:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.617783766 +0000 UTC m=+0.163705827 container init e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.631850383 +0000 UTC m=+0.177772404 container start e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.63624749 +0000 UTC m=+0.182169551 container attach e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:20:23 compute-0 charming_heyrovsky[269510]: 167 167
Nov 24 20:20:23 compute-0 systemd[1]: libpod-e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd.scope: Deactivated successfully.
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.640223508 +0000 UTC m=+0.186145519 container died e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f70713d85ba27c7e83b5db850b03d0c5c22e2b3d7e1188fc14db5f5868beedc7-merged.mount: Deactivated successfully.
Nov 24 20:20:23 compute-0 podman[269494]: 2025-11-24 20:20:23.687066682 +0000 UTC m=+0.232988693 container remove e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:20:23 compute-0 systemd[1]: libpod-conmon-e92d85c97f38e6e392b0494e147121f6ea2346ca41ba5cf68bb538d71877a2dd.scope: Deactivated successfully.
Nov 24 20:20:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:23.918+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:23 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:23 compute-0 podman[269532]: 2025-11-24 20:20:23.948426694 +0000 UTC m=+0.062864066 container create 68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:20:24 compute-0 systemd[1]: Started libpod-conmon-68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2.scope.
Nov 24 20:20:24 compute-0 podman[269532]: 2025-11-24 20:20:23.92702287 +0000 UTC m=+0.041460242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:20:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72f6d9881588c80b759072a917767975ec85e3ceffa7247d0ca812fde03f598/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72f6d9881588c80b759072a917767975ec85e3ceffa7247d0ca812fde03f598/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72f6d9881588c80b759072a917767975ec85e3ceffa7247d0ca812fde03f598/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72f6d9881588c80b759072a917767975ec85e3ceffa7247d0ca812fde03f598/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:20:24 compute-0 podman[269532]: 2025-11-24 20:20:24.06812999 +0000 UTC m=+0.182567432 container init 68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:20:24 compute-0 podman[269532]: 2025-11-24 20:20:24.074389009 +0000 UTC m=+0.188826351 container start 68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:20:24 compute-0 podman[269532]: 2025-11-24 20:20:24.078663563 +0000 UTC m=+0.193100995 container attach 68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:20:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:24 compute-0 ceph-mon[75677]: pgmap v1111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:24.290+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:24 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:20:24
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'volumes']
Nov 24 20:20:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:20:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:24.915+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:24 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:25 compute-0 serene_darwin[269549]: {
Nov 24 20:20:25 compute-0 serene_darwin[269549]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "osd_id": 2,
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "type": "bluestore"
Nov 24 20:20:25 compute-0 serene_darwin[269549]:     },
Nov 24 20:20:25 compute-0 serene_darwin[269549]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "osd_id": 1,
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "type": "bluestore"
Nov 24 20:20:25 compute-0 serene_darwin[269549]:     },
Nov 24 20:20:25 compute-0 serene_darwin[269549]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "osd_id": 0,
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:20:25 compute-0 serene_darwin[269549]:         "type": "bluestore"
Nov 24 20:20:25 compute-0 serene_darwin[269549]:     }
Nov 24 20:20:25 compute-0 serene_darwin[269549]: }
Nov 24 20:20:25 compute-0 systemd[1]: libpod-68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2.scope: Deactivated successfully.
Nov 24 20:20:25 compute-0 podman[269532]: 2025-11-24 20:20:25.193025116 +0000 UTC m=+1.307462468 container died 68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:20:25 compute-0 systemd[1]: libpod-68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2.scope: Consumed 1.127s CPU time.
Nov 24 20:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f72f6d9881588c80b759072a917767975ec85e3ceffa7247d0ca812fde03f598-merged.mount: Deactivated successfully.
Nov 24 20:20:25 compute-0 podman[269532]: 2025-11-24 20:20:25.271072127 +0000 UTC m=+1.385509479 container remove 68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:20:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:25.286+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:25 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:25 compute-0 systemd[1]: libpod-conmon-68b5b6cf53e7efa469e92ab70cbf03ccbea3296aea224dd3dfd7851c56e2f7a2.scope: Deactivated successfully.
Nov 24 20:20:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:25 compute-0 sudo[269426]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:20:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:20:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:20:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:20:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1f775506-9326-425e-830a-d35ef182de01 does not exist
Nov 24 20:20:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 532323f2-1127-4011-bdd0-8584c64ca22a does not exist
Nov 24 20:20:25 compute-0 sudo[269595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:20:25 compute-0 sudo[269595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:25 compute-0 sudo[269595]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:25 compute-0 sudo[269620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:20:25 compute-0 sudo[269620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:20:25 compute-0 sudo[269620]: pam_unix(sudo:session): session closed for user root
Nov 24 20:20:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:25.881+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:25 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:26.285+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:26 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:26 compute-0 ceph-mon[75677]: pgmap v1112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:20:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:20:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:26.880+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:26 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1742 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:27.304+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:27 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1742 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:27.855+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:27 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:28.329+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:28 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:28 compute-0 ceph-mon[75677]: pgmap v1113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:28.898+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:28 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:29.283+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:29 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:29 compute-0 ceph-mon[75677]: pgmap v1114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:29.861+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:29 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:30.244+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:30 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:30.847+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:30 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:30 compute-0 podman[269645]: 2025-11-24 20:20:30.85956093 +0000 UTC m=+0.085855381 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 20:20:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:31.205+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:31 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:31 compute-0 ceph-mon[75677]: pgmap v1115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:31.864+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:31 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1752 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:32.232+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:32 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1752 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:32 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:32.848+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:33.249+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:33 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:33 compute-0 ceph-mon[75677]: pgmap v1116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:33.874+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:33 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:34.274+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:34 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:20:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:20:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:34.828+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:34 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:35.295+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:35 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:35 compute-0 ceph-mon[75677]: pgmap v1117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:35.822+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:35 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:36.280+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:36 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:36.794+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:36 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:37.243+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:37 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1757 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:37 compute-0 ceph-mon[75677]: pgmap v1118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:37 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:37.827+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:38.210+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:38 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:38 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1757 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:38.841+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:38 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:39.210+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:39 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:39 compute-0 ceph-mon[75677]: pgmap v1119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:39.849+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:39 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:40.251+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:40 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:20:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:20:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:20:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:20:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:20:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:40.856+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:40 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:41.221+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:41 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:41 compute-0 ceph-mon[75677]: pgmap v1120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:41 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:41.816+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:41 compute-0 podman[269665]: 2025-11-24 20:20:41.878888124 +0000 UTC m=+0.104590643 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:20:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:42.262+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:42 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:42.849+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:42 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:43 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:43.305+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:43 compute-0 ceph-mon[75677]: pgmap v1121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:43.881+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:43 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:44 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:44.342+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:44.912+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:44 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:45 compute-0 rsyslogd[1003]: imjournal: 5311 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 20:20:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:45 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:45.351+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:45 compute-0 ceph-mon[75677]: pgmap v1122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:45.885+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:45 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:45 compute-0 podman[269686]: 2025-11-24 20:20:45.945071146 +0000 UTC m=+0.162010061 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:20:46 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:46.306+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1762 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:46.914+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:46 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:47 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:47.259+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1762 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:47 compute-0 ceph-mon[75677]: pgmap v1123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:47.904+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:47 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:48 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:48.262+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:48.909+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:48 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:49 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:49.230+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:49 compute-0 ceph-mon[75677]: pgmap v1124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:49.940+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:49 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:50 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:50.276+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:50.939+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:50 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:51 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:51.270+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:51 compute-0 ceph-mon[75677]: pgmap v1125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1772 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:51.918+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:51 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:52 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:52.296+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1772 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:52.873+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:52 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:53 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:53.310+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:53 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:53 compute-0 ceph-mon[75677]: pgmap v1126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:53.833+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:53 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:54 compute-0 sshd-session[269711]: Invalid user free from 182.93.7.194 port 49408
Nov 24 20:20:54 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:54.360+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:20:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:20:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:20:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:20:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:20:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:20:54 compute-0 sshd-session[269711]: Received disconnect from 182.93.7.194 port 49408:11: Bye Bye [preauth]
Nov 24 20:20:54 compute-0 sshd-session[269711]: Disconnected from invalid user free 182.93.7.194 port 49408 [preauth]
Nov 24 20:20:54 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:54.830+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:54 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:55 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:55.327+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:55 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:55 compute-0 ceph-mon[75677]: pgmap v1127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:55.785+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:55 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:56 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:56.376+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:56 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:56.748+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:56 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:20:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:57 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:57.354+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1777 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:57 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:57 compute-0 ceph-mon[75677]: pgmap v1128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:57.749+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:57 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:58 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:58.334+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:58 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:58 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1777 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:20:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:58.764+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:58 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:59 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:20:59.362+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:20:59 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:20:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:20:59 compute-0 ceph-mon[75677]: pgmap v1129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:20:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:20:59.737+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:59 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:20:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:00 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:00.375+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:00.688+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:00 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:00 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:01 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:01.419+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:01.648+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:01 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:01 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:01 compute-0 ceph-mon[75677]: pgmap v1130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:01 compute-0 podman[269713]: 2025-11-24 20:21:01.869479106 +0000 UTC m=+0.086118777 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:21:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:21:02 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:02.380+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:02.638+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:02 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:02 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:03 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:03.409+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:03.663+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:03 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:03 compute-0 ceph-mon[75677]: pgmap v1131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:03 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:04 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:04.455+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:04.666+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:04 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:04 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:05 compute-0 nova_compute[257476]: 2025-11-24 20:21:05.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:05 compute-0 nova_compute[257476]: 2025-11-24 20:21:05.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 20:21:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:05 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:05.431+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:05.711+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:05 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:05 compute-0 ceph-mon[75677]: pgmap v1132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:05 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:06 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:06.424+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:06.756+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:06 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1782 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 20:21:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:07 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:07.446+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:07.743+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:07 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:07 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:07 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1782 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:07 compute-0 ceph-mon[75677]: pgmap v1133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:08 compute-0 nova_compute[257476]: 2025-11-24 20:21:08.166 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:08 compute-0 nova_compute[257476]: 2025-11-24 20:21:08.167 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:08 compute-0 nova_compute[257476]: 2025-11-24 20:21:08.167 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 20:21:08 compute-0 nova_compute[257476]: 2025-11-24 20:21:08.182 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 20:21:08 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:08.492+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:08.753+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:08 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:08 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:21:09.373 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:21:09.374 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:21:09.374 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:09 compute-0 ceph-osd[89640]: osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:09.471+0000 7f1a67169640 -1 osd.1 131 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:09.774+0000 7f2ca3ee7640 -1 osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:09 compute-0 ceph-osd[88624]: osd.0 131 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 24 20:21:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:09 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:09 compute-0 ceph-mon[75677]: pgmap v1134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 24 20:21:09 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 24 20:21:10 compute-0 nova_compute[257476]: 2025-11-24 20:21:10.166 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:10 compute-0 ceph-osd[89640]: osd.1 132 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:10.497+0000 7f1a67169640 -1 osd.1 132 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:10.750+0000 7f2ca3ee7640 -1 osd.0 132 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:10 compute-0 ceph-osd[88624]: osd.0 132 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:10 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:10 compute-0 ceph-mon[75677]: osdmap e132: 3 total, 3 up, 3 in
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.180 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.181 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.181 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.181 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.182 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Nov 24 20:21:11 compute-0 ceph-osd[89640]: osd.1 132 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:11.514+0000 7f1a67169640 -1 osd.1 132 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:21:11 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522797890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.639 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:11.800+0000 7f2ca3ee7640 -1 osd.0 132 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:11 compute-0 ceph-osd[88624]: osd.0 132 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 24 20:21:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 24 20:21:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:11 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:11 compute-0 ceph-mon[75677]: pgmap v1136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Nov 24 20:21:11 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3522797890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.869 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.872 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5144MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.872 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:11 compute-0 nova_compute[257476]: 2025-11-24 20:21:11.873 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1792 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.093 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.094 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.202 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing inventories for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.296 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Updating ProviderTree inventory for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.297 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.316 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing aggregate associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.338 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing trait associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, traits: HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.360 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:12 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:12.503+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:12.778+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:12 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:21:12 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198917618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.820 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:12 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:12 compute-0 ceph-mon[75677]: osdmap e133: 3 total, 3 up, 3 in
Nov 24 20:21:12 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1792 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:12 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4198917618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.827 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.848 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.850 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.851 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:12 compute-0 nova_compute[257476]: 2025-11-24 20:21:12.851 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:12 compute-0 podman[269774]: 2025-11-24 20:21:12.856456355 +0000 UTC m=+0.084500225 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:21:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 27 op/s
Nov 24 20:21:13 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:13.542+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:13.777+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:13 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:13 compute-0 ceph-mon[75677]: pgmap v1138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 27 op/s
Nov 24 20:21:13 compute-0 nova_compute[257476]: 2025-11-24 20:21:13.861 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:13 compute-0 nova_compute[257476]: 2025-11-24 20:21:13.861 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:13 compute-0 nova_compute[257476]: 2025-11-24 20:21:13.878 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:13 compute-0 nova_compute[257476]: 2025-11-24 20:21:13.879 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:13 compute-0 nova_compute[257476]: 2025-11-24 20:21:13.879 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:21:14 compute-0 nova_compute[257476]: 2025-11-24 20:21:14.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:14 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:14.519+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:14.812+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:14 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:15 compute-0 nova_compute[257476]: 2025-11-24 20:21:15.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:15 compute-0 nova_compute[257476]: 2025-11-24 20:21:15.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:21:15 compute-0 nova_compute[257476]: 2025-11-24 20:21:15.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:21:15 compute-0 nova_compute[257476]: 2025-11-24 20:21:15.166 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:21:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.6 MiB/s wr, 42 op/s
Nov 24 20:21:15 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:15.530+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:15.777+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:15 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:15 compute-0 ceph-mon[75677]: pgmap v1139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.6 MiB/s wr, 42 op/s
Nov 24 20:21:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:21:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525907258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:21:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:21:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525907258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:21:16 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:16.511+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:16.789+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:16 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2525907258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:21:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2525907258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:21:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:16 compute-0 podman[269796]: 2025-11-24 20:21:16.939632091 +0000 UTC m=+0.171421503 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:21:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 24 20:21:17 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:17.491+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:17.807+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:17 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:17 compute-0 ceph-mon[75677]: pgmap v1140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 24 20:21:18 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:18.448+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:18.769+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:18 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:19 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:19 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s
Nov 24 20:21:19 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:19.447+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:19.764+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:19 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:20 compute-0 ceph-mon[75677]: pgmap v1141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s
Nov 24 20:21:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:20 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:20.427+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:20.788+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:20 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Nov 24 20:21:21 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:21.423+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:21.814+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:21 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:22 compute-0 ceph-mon[75677]: pgmap v1142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Nov 24 20:21:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:22 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:22.445+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:22.839+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:22 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.8 MiB/s wr, 13 op/s
Nov 24 20:21:23 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:23.458+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:23.808+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:23 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:24 compute-0 ceph-mon[75677]: pgmap v1143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.8 MiB/s wr, 13 op/s
Nov 24 20:21:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:21:24
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups']
Nov 24 20:21:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:21:24 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:24.485+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:24.848+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:24 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Nov 24 20:21:25 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:25.464+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:25 compute-0 sudo[269822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:25 compute-0 sudo[269822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:25 compute-0 sudo[269822]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:25 compute-0 sudo[269847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:21:25 compute-0 sudo[269847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:25 compute-0 sudo[269847]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:25.812+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:25 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:25 compute-0 sudo[269872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:25 compute-0 sudo[269872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:25 compute-0 sudo[269872]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:25 compute-0 sudo[269897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:21:25 compute-0 sudo[269897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:26 compute-0 ceph-mon[75677]: pgmap v1144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Nov 24 20:21:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:26 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:26.435+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:26 compute-0 sudo[269897]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:21:26 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 725619b9-4a80-4a28-8e62-fec25c2ede1b does not exist
Nov 24 20:21:26 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7c4a00b8-21d3-42fb-ab4c-6456279435ae does not exist
Nov 24 20:21:26 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b38a8184-02db-4138-950f-7b90a43e4485 does not exist
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:21:26 compute-0 sudo[269953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:26 compute-0 sudo[269953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:26 compute-0 sudo[269953]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:26.786+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:26 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:26 compute-0 sudo[269978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:21:26 compute-0 sudo[269978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:26 compute-0 sudo[269978]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1802 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:26 compute-0 sudo[270003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:26 compute-0 sudo[270003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:26 compute-0 sudo[270003]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:27 compute-0 sudo[270028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:21:27 compute-0 sudo[270028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:21:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:21:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:21:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:21:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:21:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:21:27 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1802 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Nov 24 20:21:27 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:27.435+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.575047145 +0000 UTC m=+0.068271678 container create 9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_carson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:21:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:21:27.626 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:21:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:21:27.628 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:21:27 compute-0 systemd[1]: Started libpod-conmon-9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f.scope.
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.546836686 +0000 UTC m=+0.040061279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:21:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.692810165 +0000 UTC m=+0.186034758 container init 9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_carson, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.700901163 +0000 UTC m=+0.194125696 container start 9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.704665835 +0000 UTC m=+0.197890428 container attach 9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_carson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:21:27 compute-0 quirky_carson[270110]: 167 167
Nov 24 20:21:27 compute-0 systemd[1]: libpod-9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f.scope: Deactivated successfully.
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.709368761 +0000 UTC m=+0.202593264 container died 9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_carson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48c69d5c6db50adbe4058700db9bda9a4db76d7876479c15f7ec6c673a2c81a-merged.mount: Deactivated successfully.
Nov 24 20:21:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:27.756+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:27 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:27 compute-0 podman[270093]: 2025-11-24 20:21:27.765646136 +0000 UTC m=+0.258870669 container remove 9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:21:27 compute-0 systemd[1]: libpod-conmon-9d18d96e41f9773431a30021cc5b06bf910ca74a2bbaa233b4f1b970ac5d7c6f.scope: Deactivated successfully.
Nov 24 20:21:28 compute-0 podman[270133]: 2025-11-24 20:21:28.007366673 +0000 UTC m=+0.056274335 container create d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:21:28 compute-0 systemd[1]: Started libpod-conmon-d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc.scope.
Nov 24 20:21:28 compute-0 podman[270133]: 2025-11-24 20:21:27.99015329 +0000 UTC m=+0.039060952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:21:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749225f61a839c3395af4a8873e25fae849360a3ff4ccfc5a644b1e45be127e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749225f61a839c3395af4a8873e25fae849360a3ff4ccfc5a644b1e45be127e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749225f61a839c3395af4a8873e25fae849360a3ff4ccfc5a644b1e45be127e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749225f61a839c3395af4a8873e25fae849360a3ff4ccfc5a644b1e45be127e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749225f61a839c3395af4a8873e25fae849360a3ff4ccfc5a644b1e45be127e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:28 compute-0 podman[270133]: 2025-11-24 20:21:28.123818777 +0000 UTC m=+0.172726449 container init d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:21:28 compute-0 podman[270133]: 2025-11-24 20:21:28.143399935 +0000 UTC m=+0.192307637 container start d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:21:28 compute-0 podman[270133]: 2025-11-24 20:21:28.149870399 +0000 UTC m=+0.198778091 container attach d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:21:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:28 compute-0 ceph-mon[75677]: pgmap v1145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Nov 24 20:21:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:28 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:28.395+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:28.769+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:28 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:29 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:29.364+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:29 compute-0 agitated_kepler[270150]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:21:29 compute-0 agitated_kepler[270150]: --> relative data size: 1.0
Nov 24 20:21:29 compute-0 agitated_kepler[270150]: --> All data devices are unavailable
Nov 24 20:21:29 compute-0 systemd[1]: libpod-d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc.scope: Deactivated successfully.
Nov 24 20:21:29 compute-0 systemd[1]: libpod-d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc.scope: Consumed 1.247s CPU time.
Nov 24 20:21:29 compute-0 podman[270179]: 2025-11-24 20:21:29.504324008 +0000 UTC m=+0.044599142 container died d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-749225f61a839c3395af4a8873e25fae849360a3ff4ccfc5a644b1e45be127e0-merged.mount: Deactivated successfully.
Nov 24 20:21:29 compute-0 podman[270179]: 2025-11-24 20:21:29.577780885 +0000 UTC m=+0.118055949 container remove d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:21:29 compute-0 systemd[1]: libpod-conmon-d7be016959278be654f82e800c01185af849235e13a2d68b2ca10b074b00d2cc.scope: Deactivated successfully.
Nov 24 20:21:29 compute-0 sudo[270028]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:29 compute-0 sudo[270195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:29 compute-0 sudo[270195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:29 compute-0 sudo[270195]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:29.809+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:29 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:29 compute-0 sudo[270220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:21:29 compute-0 sudo[270220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:29 compute-0 sudo[270220]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:29 compute-0 sudo[270245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:29 compute-0 sudo[270245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:29 compute-0 sudo[270245]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:30 compute-0 sudo[270270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:21:30 compute-0 sudo[270270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:30 compute-0 ceph-mon[75677]: pgmap v1146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:30 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:30.347+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.496498385 +0000 UTC m=+0.039362080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.617458701 +0000 UTC m=+0.160322346 container create 63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:21:30 compute-0 systemd[1]: Started libpod-conmon-63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888.scope.
Nov 24 20:21:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.725231863 +0000 UTC m=+0.268095558 container init 63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.739338603 +0000 UTC m=+0.282202248 container start 63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.742851707 +0000 UTC m=+0.285715392 container attach 63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:21:30 compute-0 strange_napier[270352]: 167 167
Nov 24 20:21:30 compute-0 systemd[1]: libpod-63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888.scope: Deactivated successfully.
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.748479548 +0000 UTC m=+0.291343193 container died 63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:21:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:30.773+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:30 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-298e6c6f358ef032cee314b14400520bde528b770fe3a00d25aa0bcd2cba0487-merged.mount: Deactivated successfully.
Nov 24 20:21:30 compute-0 podman[270336]: 2025-11-24 20:21:30.802907384 +0000 UTC m=+0.345771019 container remove 63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:21:30 compute-0 systemd[1]: libpod-conmon-63cea3182e3965ed8ab2ab9b389fb0bfd490bc4df0311659b9a8966e65de5888.scope: Deactivated successfully.
Nov 24 20:21:31 compute-0 podman[270377]: 2025-11-24 20:21:31.065008839 +0000 UTC m=+0.070139349 container create 087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pike, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:21:31 compute-0 systemd[1]: Started libpod-conmon-087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538.scope.
Nov 24 20:21:31 compute-0 podman[270377]: 2025-11-24 20:21:31.036952253 +0000 UTC m=+0.042082793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:21:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e0b9295b923b76c84a87d49694a209bf814a1610c929374fe07d153c131dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e0b9295b923b76c84a87d49694a209bf814a1610c929374fe07d153c131dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e0b9295b923b76c84a87d49694a209bf814a1610c929374fe07d153c131dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e0b9295b923b76c84a87d49694a209bf814a1610c929374fe07d153c131dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:31 compute-0 podman[270377]: 2025-11-24 20:21:31.190412605 +0000 UTC m=+0.195543105 container init 087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:21:31 compute-0 podman[270377]: 2025-11-24 20:21:31.203297451 +0000 UTC m=+0.208427981 container start 087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pike, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:21:31 compute-0 podman[270377]: 2025-11-24 20:21:31.207123174 +0000 UTC m=+0.212253674 container attach 087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:21:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:31 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:31.325+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:31.767+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:31 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:32 compute-0 elastic_pike[270393]: {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:     "0": [
Nov 24 20:21:32 compute-0 elastic_pike[270393]:         {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "devices": [
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "/dev/loop3"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             ],
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_name": "ceph_lv0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_size": "21470642176",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "name": "ceph_lv0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "tags": {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cluster_name": "ceph",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.crush_device_class": "",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.encrypted": "0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osd_id": "0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.type": "block",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.vdo": "0"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             },
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "type": "block",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "vg_name": "ceph_vg0"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:         }
Nov 24 20:21:32 compute-0 elastic_pike[270393]:     ],
Nov 24 20:21:32 compute-0 elastic_pike[270393]:     "1": [
Nov 24 20:21:32 compute-0 elastic_pike[270393]:         {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "devices": [
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "/dev/loop4"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             ],
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_name": "ceph_lv1",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_size": "21470642176",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "name": "ceph_lv1",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "tags": {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cluster_name": "ceph",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.crush_device_class": "",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.encrypted": "0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osd_id": "1",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.type": "block",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.vdo": "0"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             },
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "type": "block",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "vg_name": "ceph_vg1"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:         }
Nov 24 20:21:32 compute-0 elastic_pike[270393]:     ],
Nov 24 20:21:32 compute-0 elastic_pike[270393]:     "2": [
Nov 24 20:21:32 compute-0 elastic_pike[270393]:         {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "devices": [
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "/dev/loop5"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             ],
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_name": "ceph_lv2",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_size": "21470642176",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "name": "ceph_lv2",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "tags": {
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.cluster_name": "ceph",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.crush_device_class": "",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.encrypted": "0",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osd_id": "2",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.type": "block",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:                 "ceph.vdo": "0"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             },
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "type": "block",
Nov 24 20:21:32 compute-0 elastic_pike[270393]:             "vg_name": "ceph_vg2"
Nov 24 20:21:32 compute-0 elastic_pike[270393]:         }
Nov 24 20:21:32 compute-0 elastic_pike[270393]:     ]
Nov 24 20:21:32 compute-0 elastic_pike[270393]: }
Nov 24 20:21:32 compute-0 systemd[1]: libpod-087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538.scope: Deactivated successfully.
Nov 24 20:21:32 compute-0 podman[270377]: 2025-11-24 20:21:32.037415624 +0000 UTC m=+1.042546144 container died 087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45e0b9295b923b76c84a87d49694a209bf814a1610c929374fe07d153c131dd-merged.mount: Deactivated successfully.
Nov 24 20:21:32 compute-0 podman[270377]: 2025-11-24 20:21:32.122751511 +0000 UTC m=+1.127882041 container remove 087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:21:32 compute-0 systemd[1]: libpod-conmon-087d30bb7d31196199299867e3f6b974b82c79e0f0b7e50c9b6312a8f6523538.scope: Deactivated successfully.
Nov 24 20:21:32 compute-0 sudo[270270]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:32 compute-0 ceph-mon[75677]: pgmap v1147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:32 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:32 compute-0 podman[270403]: 2025-11-24 20:21:32.228281591 +0000 UTC m=+0.133024721 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 20:21:32 compute-0 sudo[270433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:32 compute-0 sudo[270433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:32 compute-0 sudo[270433]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:32.375+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:32 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:32 compute-0 sudo[270461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:21:32 compute-0 sudo[270461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:32 compute-0 sudo[270461]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:32 compute-0 sudo[270486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:32 compute-0 sudo[270486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:32 compute-0 sudo[270486]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:32 compute-0 sudo[270511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:21:32 compute-0 sudo[270511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:32.808+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:32 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.071097229 +0000 UTC m=+0.066293255 container create b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heyrovsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:21:33 compute-0 systemd[1]: Started libpod-conmon-b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39.scope.
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.043883667 +0000 UTC m=+0.039079743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:21:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.179812935 +0000 UTC m=+0.175009001 container init b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heyrovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.190516104 +0000 UTC m=+0.185712120 container start b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.194969593 +0000 UTC m=+0.190165609 container attach b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:21:33 compute-0 agitated_heyrovsky[270594]: 167 167
Nov 24 20:21:33 compute-0 systemd[1]: libpod-b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39.scope: Deactivated successfully.
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.199774522 +0000 UTC m=+0.194970548 container died b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:21:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba243c93d5f7c44edcb7146002de08914673483b9d4cb10de60f236c2a668d0e-merged.mount: Deactivated successfully.
Nov 24 20:21:33 compute-0 podman[270578]: 2025-11-24 20:21:33.254241849 +0000 UTC m=+0.249437875 container remove b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:21:33 compute-0 systemd[1]: libpod-conmon-b7712fb75f949a8c9e436d38591fc0b2f805395f2876e417a934a0da9b462a39.scope: Deactivated successfully.
Nov 24 20:21:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:33 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:33.354+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:33 compute-0 podman[270620]: 2025-11-24 20:21:33.521961005 +0000 UTC m=+0.066616224 container create 2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:21:33 compute-0 systemd[1]: Started libpod-conmon-2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11.scope.
Nov 24 20:21:33 compute-0 podman[270620]: 2025-11-24 20:21:33.495027041 +0000 UTC m=+0.039682300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:21:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b23784f12fe1e42b569ab541507d8525f7b396de53dc7646305973b0207485/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b23784f12fe1e42b569ab541507d8525f7b396de53dc7646305973b0207485/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b23784f12fe1e42b569ab541507d8525f7b396de53dc7646305973b0207485/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b23784f12fe1e42b569ab541507d8525f7b396de53dc7646305973b0207485/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:21:33 compute-0 podman[270620]: 2025-11-24 20:21:33.63283197 +0000 UTC m=+0.177487229 container init 2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:21:33 compute-0 podman[270620]: 2025-11-24 20:21:33.648068319 +0000 UTC m=+0.192723528 container start 2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:21:33 compute-0 podman[270620]: 2025-11-24 20:21:33.652279143 +0000 UTC m=+0.196934422 container attach 2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:21:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:33.784+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:33 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:34 compute-0 ceph-mon[75677]: pgmap v1148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:34.350+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:34 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:34 compute-0 romantic_hermann[270638]: {
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "osd_id": 2,
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "type": "bluestore"
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:     },
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "osd_id": 1,
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "type": "bluestore"
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:     },
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "osd_id": 0,
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:         "type": "bluestore"
Nov 24 20:21:34 compute-0 romantic_hermann[270638]:     }
Nov 24 20:21:34 compute-0 romantic_hermann[270638]: }
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:21:34 compute-0 systemd[1]: libpod-2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11.scope: Deactivated successfully.
Nov 24 20:21:34 compute-0 systemd[1]: libpod-2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11.scope: Consumed 1.070s CPU time.
Nov 24 20:21:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:34.773+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:34 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:34 compute-0 podman[270672]: 2025-11-24 20:21:34.775663332 +0000 UTC m=+0.044962681 container died 2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7b23784f12fe1e42b569ab541507d8525f7b396de53dc7646305973b0207485-merged.mount: Deactivated successfully.
Nov 24 20:21:34 compute-0 podman[270672]: 2025-11-24 20:21:34.850868077 +0000 UTC m=+0.120167376 container remove 2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hermann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:21:34 compute-0 systemd[1]: libpod-conmon-2c2598eae38c8cd7e21c678328b1e0a87bdb0e646aab0062afe2cf2dfb070e11.scope: Deactivated successfully.
Nov 24 20:21:34 compute-0 sudo[270511]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:21:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:21:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:21:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0e866fd2-2f1e-488a-9c2a-490ef3c91980 does not exist
Nov 24 20:21:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7582b753-6893-4e94-8f40-6639e726fec6 does not exist
Nov 24 20:21:35 compute-0 sudo[270685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:21:35 compute-0 sudo[270685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:35 compute-0 sudo[270685]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:35 compute-0 sudo[270710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:21:35 compute-0 sudo[270710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:21:35 compute-0 sudo[270710]: pam_unix(sudo:session): session closed for user root
Nov 24 20:21:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:21:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:21:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:35.386+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:35 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:35 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:21:35.631 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:21:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:35.759+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:35 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:36 compute-0 ceph-mon[75677]: pgmap v1149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:36.352+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:36 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:36.792+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:36 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:37 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:37.339+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:37 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:37.789+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:37 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:38 compute-0 ceph-mon[75677]: pgmap v1150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:38.355+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:38 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:38.829+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:38 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:39.402+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:39 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:39.783+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:39 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:40 compute-0 ceph-mon[75677]: pgmap v1151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:40.408+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:40 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:21:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:21:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:21:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:21:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:21:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:40.826+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:40 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:41.405+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:41 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:41.788+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:41 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:42 compute-0 ceph-mon[75677]: pgmap v1152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:42 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:42.441+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:42 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:42.822+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:42 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:43 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:43.451+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:43 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:43.775+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:43 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:43 compute-0 podman[270735]: 2025-11-24 20:21:43.881496603 +0000 UTC m=+0.101149093 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:21:44 compute-0 nova_compute[257476]: 2025-11-24 20:21:44.116 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:21:44 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:44 compute-0 ceph-mon[75677]: pgmap v1153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:44.424+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:44 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:44.825+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:44 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.188 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.189 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.206 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.322 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.323 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.334 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.335 257491 INFO nova.compute.claims [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:21:45 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:45.413+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:45 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.495 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:45.793+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:45 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:21:45 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90134173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.969 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:45 compute-0 nova_compute[257476]: 2025-11-24 20:21:45.978 257491 DEBUG nova.compute.provider_tree [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.002 257491 DEBUG nova.scheduler.client.report [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.022 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.023 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.071 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.085 257491 INFO nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.100 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.175 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.177 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.178 257491 INFO nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Creating image(s)
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.215 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.248 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.271 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.276 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:46 compute-0 nova_compute[257476]: 2025-11-24 20:21:46.277 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:46 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:46 compute-0 ceph-mon[75677]: pgmap v1154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:46 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/90134173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:46.443+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:46 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:46.815+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:46 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:47 compute-0 nova_compute[257476]: 2025-11-24 20:21:47.317 257491 DEBUG nova.virt.libvirt.imagebackend [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Image locations are: [{'url': 'rbd://05e060a3-406b-57f0-89d2-ec35f5b09305/images/7b556eea-44a0-401c-a3e5-213a835e1fc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://05e060a3-406b-57f0-89d2-ec35f5b09305/images/7b556eea-44a0-401c-a3e5-213a835e1fc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 24 20:21:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:47 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:47 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:47.455+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:47 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:47.851+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:47 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:47 compute-0 podman[270832]: 2025-11-24 20:21:47.912032477 +0000 UTC m=+0.135096887 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:21:48 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:48 compute-0 ceph-mon[75677]: pgmap v1155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:48.506+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:48 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:48.888+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:48 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:48 compute-0 nova_compute[257476]: 2025-11-24 20:21:48.890 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:48 compute-0 nova_compute[257476]: 2025-11-24 20:21:48.980 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.part --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:48 compute-0 nova_compute[257476]: 2025-11-24 20:21:48.982 257491 DEBUG nova.virt.images [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] 7b556eea-44a0-401c-a3e5-213a835e1fc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 20:21:48 compute-0 nova_compute[257476]: 2025-11-24 20:21:48.988 257491 DEBUG nova.privsep.utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 20:21:48 compute-0 nova_compute[257476]: 2025-11-24 20:21:48.989 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.part /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.276 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.part /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.converted" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.287 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.380 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.382 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.412 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:49 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:49 compute-0 ceph-mon[75677]: pgmap v1156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.418 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.498 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Acquiring lock "43bc955c-77ee-42d8-98e2-84163217d1aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.499 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Lock "43bc955c-77ee-42d8-98e2-84163217d1aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.517 257491 DEBUG nova.compute.manager [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:21:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:49.545+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:49 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.594 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.595 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.603 257491 DEBUG nova.virt.hardware [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.604 257491 INFO nova.compute.claims [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:21:49 compute-0 nova_compute[257476]: 2025-11-24 20:21:49.716 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:49.912+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:49 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:21:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238066929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.155 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.164 257491 DEBUG nova.compute.provider_tree [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.217 257491 ERROR nova.scheduler.client.report [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [req-ada4290f-e4a4-41ca-a6f6-8c69a6a1fd1a] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 36172ea5-11d9-49c4-91b9-fe09a4a54b66.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-ada4290f-e4a4-41ca-a6f6-8c69a6a1fd1a"}]}
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.238 257491 DEBUG nova.scheduler.client.report [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Refreshing inventories for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.269 257491 DEBUG nova.scheduler.client.report [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updating ProviderTree inventory for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.270 257491 DEBUG nova.compute.provider_tree [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.315 257491 DEBUG nova.scheduler.client.report [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Refreshing aggregate associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.342 257491 DEBUG nova.scheduler.client.report [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Refreshing trait associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, traits: HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.420 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 24 20:21:50 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:50 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4238066929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 24 20:21:50 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 24 20:21:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:50.510+0000 7f1a67169640 -1 osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:50 compute-0 ceph-osd[89640]: osd.1 133 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:21:50.864+0000 7f2ca3ee7640 -1 osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:50 compute-0 ceph-osd[88624]: osd.0 133 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:21:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
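The SLOW_OPS warnings from osd.0 and osd.1 above can be triaged from the host with standard Ceph CLI calls. A sketch under the assumption that the admin keyring and the OSD admin sockets are reachable locally:

    # Sketch: triage the SLOW_OPS warnings seen above. Both commands are
    # standard Ceph CLI; running them requires local admin access (an
    # assumption on this host).
    import json
    import subprocess

    # Cluster-wide view of which daemons currently hold slow ops.
    print(subprocess.check_output(['ceph', 'health', 'detail'], text=True))

    # Per-daemon view of the blocked requests via the OSD admin socket.
    ops = json.loads(subprocess.check_output(
        ['ceph', 'daemon', 'osd.1', 'dump_ops_in_flight'], text=True))
    for op in ops.get('ops', []):
        print(op.get('age'), op.get('description'))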
Nov 24 20:21:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:21:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826665769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.916 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
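The "ceph df --format=json" call that just returned is how this host samples pool capacity; the log sequence suggests its totals feed the DISK_GB inventory reported next. A sketch of the same call with the flags copied from the log, reading the standard top-level "stats" keys of the ceph df JSON:

    # Sketch: run the same `ceph df` the log shows and pull out the
    # cluster totals. Flags are copied verbatim from the log line.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'], text=True)
    df = json.loads(out)

    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'{total / 2**30:.0f} GiB total, {avail / 2**30:.0f} GiB avail')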
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.923 257491 DEBUG nova.compute.provider_tree [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.990 257491 DEBUG nova.scheduler.client.report [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updated inventory for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.991 257491 DEBUG nova.compute.provider_tree [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updating resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 20:21:50 compute-0 nova_compute[257476]: 2025-11-24 20:21:50.991 257491 DEBUG nova.compute.provider_tree [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
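The inventory dicts above determine what the scheduler may place here: Placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. Worked out for this provider's numbers:

    # Sketch: how Placement-style capacity falls out of the inventory
    # logged above. Formula: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable')
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2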
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.024 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.025 257491 DEBUG nova.compute.manager [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.090 257491 DEBUG nova.compute.manager [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.091 257491 DEBUG nova.network.neutron [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.121 257491 INFO nova.virt.libvirt.driver [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.142 257491 DEBUG nova.compute.manager [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.293 257491 DEBUG nova.compute.manager [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.295 257491 DEBUG nova.virt.libvirt.driver [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.295 257491 INFO nova.virt.libvirt.driver [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Creating image(s)
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.324 257491 DEBUG nova.storage.rbd_utils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] rbd image 43bc955c-77ee-42d8-98e2-84163217d1aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.360 257491 DEBUG nova.storage.rbd_utils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] rbd image 43bc955c-77ee-42d8-98e2-84163217d1aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.396 257491 DEBUG nova.storage.rbd_utils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] rbd image 43bc955c-77ee-42d8-98e2-84163217d1aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.401 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 24 20:21:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 24 20:21:51 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:51 compute-0 ceph-mon[75677]: osdmap e134: 3 total, 3 up, 3 in
Nov 24 20:21:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:51 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3826665769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:21:51 compute-0 ceph-mon[75677]: pgmap v1158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 24 20:21:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 24 20:21:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:51.464+0000 7f1a67169640 -1 osd.1 134 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:51 compute-0 ceph-osd[89640]: osd.1 134 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.490 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
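The prlimit wrapper in the command above caps the qemu-img process at 1 GiB of address space (--as=1073741824) and 30 seconds of CPU (--cpu=30), so a crafted image cannot wedge the compute host while being inspected. A sketch rebuilding the guarded call from the log:

    # Sketch: the same guarded `qemu-img info` call, rebuilt verbatim
    # from the log line above; the output is plain qemu-img JSON.
    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909'
    out = subprocess.check_output(
        ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
         '--as=1073741824', '--cpu=30', '--',
         'env', 'LC_ALL=C', 'LANG=C',
         'qemu-img', 'info', base, '--force-share', '--output=json'],
        text=True)
    info = json.loads(out)
    print(info['format'], info['virtual-size'])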
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.492 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.494 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.494 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
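The acquire/release lines above are oslo.concurrency's named-lock pattern, here serializing base-image fetches per image hash. The application-side equivalents are the lockutils context manager and decorator; a minimal sketch with the lock names reused from the log and placeholder bodies:

    # Sketch: the oslo.concurrency named-lock pattern behind the
    # "Acquiring lock ... / released" lines above. Lock names come from
    # the log; the guarded work is a placeholder.
    from oslo_concurrency import lockutils

    with lockutils.lock('218f8903fd6674ce56e8c19056c812cf16f46909'):
        pass  # fetch/verify the cached base image exactly once

    @lockutils.synchronized('compute_resources')
    def claim_resources():
        pass  # serialized against other resource-tracker updates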
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.534 257491 DEBUG nova.storage.rbd_utils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] rbd image 43bc955c-77ee-42d8-98e2-84163217d1aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.542 257491 DEBUG oslo_concurrency.processutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 43bc955c-77ee-42d8-98e2-84163217d1aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.596 257491 WARNING oslo_policy.policy [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.597 257491 WARNING oslo_policy.policy [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.602 257491 DEBUG nova.policy [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdcce01fe61847e0972b7d8925fc4984', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c56e6d5c1eae48bfa49e12800a76eaa4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.727 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.792 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] resizing rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
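The import-then-resize sequence above (rbd import of the small base image, then growing it to the flavor's 1 GiB root disk) can also be expressed through the python-rbd bindings. A hedged sketch; pool, image name and target size are taken from the log lines, everything else is the standard rados/rbd API:

    # Sketch: the resize step from the log via python-rbd. Pool, image
    # name and target size (flavor root_gb=1 -> 1073741824 bytes) come
    # from the log lines above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk') as img:
            img.resize(1 * 1024 ** 3)  # grow the imported base to 1 GiB
    finally:
        ioctx.close()
        cluster.shutdown()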
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.882 257491 DEBUG nova.objects.instance [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lazy-loading 'migration_context' on Instance uuid a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.909 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.910 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Ensure instance console log exists: /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.910 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.910 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.911 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.912 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '7b556eea-44a0-401c-a3e5-213a835e1fc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.916 257491 WARNING nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.919 257491 DEBUG nova.virt.libvirt.host [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 20:21:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 20 slow ops, oldest one blocked for 1832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.920 257491 DEBUG nova.virt.libvirt.host [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 20:21:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.922 257491 DEBUG nova.virt.libvirt.host [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.923 257491 DEBUG nova.virt.libvirt.host [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
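The two "Searching host ... CPU controller" probes above reflect the split cgroup hierarchies: v1 exposes one mount per controller, while v2 lists enabled controllers in a single file, which is the check that succeeds here. A minimal sketch of that detection:

    # Sketch: the cgroups v2 probe that succeeds above. On a v2 host,
    # /sys/fs/cgroup/cgroup.controllers is a space-separated list of
    # enabled controllers.
    from pathlib import Path

    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    if controllers.exists():
        have_cpu = 'cpu' in controllers.read_text().split()
        print('cgroups v2 cpu controller:', have_cpu)
    else:
        # v1 fallback: a dedicated mount per controller.
        print('cgroups v1 cpu controller:', Path('/sys/fs/cgroup/cpu').exists())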
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.923 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.923 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T20:21:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='67120476-40a0-42ea-948d-218bf9a62474',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.924 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.924 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.924 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.924 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.925 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.925 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.925 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.925 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.925 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.926 257491 DEBUG nova.virt.hardware [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
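The topology walk above is easy to reproduce: with no flavor or image constraints the limits default to 65536 per dimension, and the candidates are the sockets/cores/threads combinations that exactly fit the vCPU count, which for 1 vCPU leaves only 1:1:1. A toy enumeration under that assumption (an illustration, not nova's exact _get_possible_cpu_topologies code path):

    # Toy sketch of the topology search logged above: enumerate the
    # sockets*cores*threads combinations that exactly fit the vCPU count.
    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # no dimension of an exact-fit product can exceed vcpus
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]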
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.929 257491 DEBUG nova.privsep.utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 20:21:51 compute-0 nova_compute[257476]: 2025-11-24 20:21:51.929 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:21:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649132145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.316 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.351 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.357 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:52 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:21:52 compute-0 ceph-mon[75677]: osdmap e135: 3 total, 3 up, 3 in
Nov 24 20:21:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:52 compute-0 ceph-mon[75677]: Health check update: 20 slow ops, oldest one blocked for 1832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:52 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2649132145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:21:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:52.480+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:52 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:21:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238778778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.763 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.766 257491 DEBUG nova.objects.instance [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lazy-loading 'pci_devices' on Instance uuid a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.794 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] End _get_guest_xml xml=<domain type="kvm">
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <uuid>a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f</uuid>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <name>instance-00000001</name>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <memory>131072</memory>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <vcpu>1</vcpu>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <metadata>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:name>tempest-AutoAllocateNetworkTest-server-1852662230</nova:name>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:creationTime>2025-11-24 20:21:51</nova:creationTime>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:flavor name="m1.nano">
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:memory>128</nova:memory>
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:disk>1</nova:disk>
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:swap>0</nova:swap>
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:vcpus>1</nova:vcpus>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       </nova:flavor>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:owner>
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:user uuid="03dfa2b5407c443a974308a641006734">tempest-AutoAllocateNetworkTest-1112163353-project-member</nova:user>
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <nova:project uuid="e9aa686e6bd349388ffa9f482970d883">tempest-AutoAllocateNetworkTest-1112163353</nova:project>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       </nova:owner>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:root type="image" uuid="7b556eea-44a0-401c-a3e5-213a835e1fc5"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <nova:ports/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </nova:instance>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </metadata>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <sysinfo type="smbios">
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <system>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <entry name="manufacturer">RDO</entry>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <entry name="product">OpenStack Compute</entry>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <entry name="serial">a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f</entry>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <entry name="uuid">a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f</entry>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <entry name="family">Virtual Machine</entry>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </system>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </sysinfo>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <os>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <boot dev="hd"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <smbios mode="sysinfo"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </os>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <features>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <acpi/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <apic/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <vmcoreinfo/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </features>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <clock offset="utc">
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <timer name="hpet" present="no"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </clock>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <cpu mode="host-model" match="exact">
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <disk type="network" device="disk">
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk">
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       </source>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <target dev="vda" bus="virtio"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <disk type="network" device="cdrom">
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk.config">
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       </source>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:21:52 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <target dev="sda" bus="sata"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <serial type="pty">
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <log file="/var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/console.log" append="off"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </serial>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <video>
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <model type="virtio"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </video>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <input type="tablet" bus="usb"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <rng model="virtio">
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <backend model="random">/dev/urandom</backend>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <controller type="usb" index="0"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     <memballoon model="virtio">
Nov 24 20:21:52 compute-0 nova_compute[257476]:       <stats period="10"/>
Nov 24 20:21:52 compute-0 nova_compute[257476]:     </memballoon>
Nov 24 20:21:52 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:21:52 compute-0 nova_compute[257476]: </domain>
Nov 24 20:21:52 compute-0 nova_compute[257476]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
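The domain XML dumped above is what nova hands to libvirt next. Reduced to raw libvirt-python calls, the flow is define-then-create; a hedged sketch that assumes the XML above is held in a string named xml and omits nova's secret and volume plumbing:

    # Sketch: what happens to the XML above, as bare libvirt-python
    # calls. `xml` is assumed to hold the <domain> document; error
    # handling is omitted.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.createWithFlags(0)      # boot it (equivalent to dom.create())
        print(dom.name(), 'running:', dom.isActive() == 1)
    finally:
        conn.close()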
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.839 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.840 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.841 257491 INFO nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Using config drive
Nov 24 20:21:52 compute-0 nova_compute[257476]: 2025-11-24 20:21:52.874 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Nov 24 20:21:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:53 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/238778778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:21:53 compute-0 ceph-mon[75677]: pgmap v1160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Nov 24 20:21:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:53.485+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:53 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:53 compute-0 nova_compute[257476]: 2025-11-24 20:21:53.678 257491 INFO nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Creating config drive at /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config
Nov 24 20:21:53 compute-0 nova_compute[257476]: 2025-11-24 20:21:53.686 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp71jobre2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:53 compute-0 nova_compute[257476]: 2025-11-24 20:21:53.833 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp71jobre2" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
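The config drive built above is a plain ISO 9660 image whose volume label, config-2, is what cloud-init probes for inside the guest. A sketch rebuilding the mkisofs invocation from the log; all flags are verbatim, only the staging directory (a nova temp dir in the log) is a placeholder:

    # Sketch: the config-drive build from the log, as a subprocess call.
    # The "config-2" volume label is the cloud-init convention; the
    # source directory is a placeholder for nova's temporary metadata tree.
    import subprocess

    subprocess.check_call([
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/metadata-staging',  # placeholder for the tmp dir in the log
    ])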
Nov 24 20:21:53 compute-0 nova_compute[257476]: 2025-11-24 20:21:53.875 257491 DEBUG nova.storage.rbd_utils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] rbd image a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:21:53 compute-0 nova_compute[257476]: 2025-11-24 20:21:53.881 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:21:54 compute-0 nova_compute[257476]: 2025-11-24 20:21:54.066 257491 DEBUG oslo_concurrency.processutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:21:54 compute-0 nova_compute[257476]: 2025-11-24 20:21:54.068 257491 INFO nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Deleting local config drive /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f/disk.config because it was imported into RBD.
Nov 24 20:21:54 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 20:21:54 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 20:21:54 compute-0 systemd-machined[218733]: New machine qemu-1-instance-00000001.
Nov 24 20:21:54 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 24 20:21:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:21:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:21:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:21:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:21:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:21:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:21:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:54.526+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:54 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:54 compute-0 nova_compute[257476]: 2025-11-24 20:21:54.561 257491 DEBUG nova.network.neutron [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Successfully created port: fbcb6a22-b0b3-4e46-8e69-66b38826f649 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.008 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015714.9898841, a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.009 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] VM Resumed (Lifecycle Event)
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.013 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.013 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.018 257491 INFO nova.virt.libvirt.driver [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Instance spawned successfully.
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.019 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.052 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.062 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.066 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.067 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.068 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.068 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.069 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.070 257491 DEBUG nova.virt.libvirt.driver [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
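The six "Found default" lines above show `_register_undefined_instance_details` pinning hypervisor defaults (sata CD-ROM, virtio disk/video/VIF, usb input, usbtablet pointer) for image properties the instance never set. A minimal sketch of that pattern follows; the dict and helper names are illustrative stand-ins, not nova's actual code, which records the chosen defaults on the instance so later rebuilds and migrations keep the same device models.

```python
# Sketch of the defaults-registration pattern seen above: for each image
# property the instance does not define, record the hypervisor default.
# HYPERVISOR_DEFAULTS mirrors the values logged by this driver; the helper
# itself is illustrative (the real one is
# nova/virt/libvirt/driver.py:_register_undefined_instance_details).

HYPERVISOR_DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_undefined_details(system_metadata: dict) -> dict:
    """Fill image_* keys the instance left undefined with host defaults."""
    for prop, default in HYPERVISOR_DEFAULTS.items():
        key = f"image_{prop}"
        if key not in system_metadata:
            system_metadata[key] = default  # "Found default for ... of ..."
    return system_metadata

# An instance that pinned its disk bus keeps it; everything else is filled in.
print(register_undefined_details({"image_hw_disk_bus": "scsi"}))
```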
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.099 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.100 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015714.990961, a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.100 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] VM Started (Lifecycle Event)
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.128 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.133 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.143 257491 INFO nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Took 8.97 seconds to spawn the instance on the hypervisor.
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.145 257491 DEBUG nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.155 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] During sync_power_state the instance has a pending task (spawning). Skip.
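Both the "Resumed" and "Started" lifecycle events above drove `handle_lifecycle_event` into a power-state comparison: the database still says power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), yet both syncs end in "pending task (spawning). Skip." A simplified sketch of that guard, using the power-state integers from `nova.compute.power_state`; the function is an illustration of the behavior logged here, not nova's code.

```python
# Simplified sketch of the guard behind the two "Skip" lines above: while a
# task such as "spawning" is still in flight, lifecycle-driven power-state
# syncs are ignored so they cannot race with the build that owns the
# instance. 0=NOSTATE and 1=RUNNING match nova.compute.power_state.

NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return f"pending task ({task_state}); skip"
    if db_power_state != vm_power_state:
        return f"update DB power_state {db_power_state} -> {vm_power_state}"
    return "already in sync"

print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # -> skip, as logged
print(sync_power_state(NOSTATE, RUNNING, None))        # -> update 0 -> 1
```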
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.204 257491 INFO nova.compute.manager [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Took 9.93 seconds to build instance.
Nov 24 20:21:55 compute-0 nova_compute[257476]: 2025-11-24 20:21:55.227 257491 DEBUG oslo_concurrency.lockutils [None req-b4c09f15-b190-4d61-9ea8-a28122d1551a 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
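The lockutils line above closes the bracket opened at the start of the build: one named lock per instance UUID serialized `_locked_do_build_and_run_instance` for 10.039s. A simplified stand-in for that pattern using `threading.Lock` is below; oslo_concurrency provides the real named-lock machinery, so this only shows the shape. It also explains why the terminate request at 20:21:57 further down had to wait on the same UUID lock.

```python
# Simplified stand-in for the oslo_concurrency.lockutils pattern in these
# logs: one named lock per instance UUID serializes build and terminate.
import threading
import time
from contextlib import contextmanager

_locks: dict[str, threading.Lock] = {}

@contextmanager
def instance_lock(uuid: str):
    lock = _locks.setdefault(uuid, threading.Lock())
    t0 = time.monotonic()
    lock.acquire()
    t1 = time.monotonic()
    print(f'Lock "{uuid}" acquired :: waited {t1 - t0:.3f}s')
    try:
        yield
    finally:
        lock.release()
        print(f'Lock "{uuid}" released :: held {time.monotonic() - t1:.3f}s')

with instance_lock("a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f"):
    pass  # build_and_run_instance (or terminate_instance) would run here
```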
Nov 24 20:21:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 56 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 33 op/s
Nov 24 20:21:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:55 compute-0 ceph-mon[75677]: pgmap v1161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 56 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 33 op/s
Nov 24 20:21:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:55.520+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:55 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
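osd.1 has been repeating the same report for a while: 19 slow ops, oldest a watch-ping against `default.rgw.log`, with the monitor's SLOW_OPS health check tracking an op blocked for over 1800 seconds. A small triage sketch follows; the `--id openstack` / conf path are taken from the nova lines in this same log and assume that client may read health, so treat the invocation as an example rather than a prescribed procedure.

```python
# Triage sketch for the SLOW_OPS stream above: pull cluster health detail
# as JSON and print each active health check. Requires ceph CLI access from
# this node; credential flags are copied from the nova-issued commands in
# this log.
import json
import subprocess

def ceph(*args):
    out = subprocess.run(
        ["ceph", *args, "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    return json.loads(out)

health = ceph("health", "detail")
print(health.get("status"))  # HEALTH_WARN while SLOW_OPS is outstanding
for name, check in health.get("checks", {}).items():
    print(name, check["summary"]["message"])

# Per-op detail lives on the OSD admin socket (run on the OSD host):
#   ceph daemon osd.1 dump_ops_in_flight
```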
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.293 257491 DEBUG nova.network.neutron [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Successfully updated port: fbcb6a22-b0b3-4e46-8e69-66b38826f649 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.308 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Acquiring lock "refresh_cache-43bc955c-77ee-42d8-98e2-84163217d1aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.308 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Acquired lock "refresh_cache-43bc955c-77ee-42d8-98e2-84163217d1aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.308 257491 DEBUG nova.network.neutron [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:21:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:56.536+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:56 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.802 257491 DEBUG nova.network.neutron [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.897 257491 DEBUG nova.compute.manager [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Received event network-changed-fbcb6a22-b0b3-4e46-8e69-66b38826f649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.898 257491 DEBUG nova.compute.manager [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Refreshing instance network info cache due to event network-changed-fbcb6a22-b0b3-4e46-8e69-66b38826f649. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 20:21:56 compute-0 nova_compute[257476]: 2025-11-24 20:21:56.898 257491 DEBUG oslo_concurrency.lockutils [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] Acquiring lock "refresh_cache-43bc955c-77ee-42d8-98e2-84163217d1aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:21:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 1832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:21:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 101 op/s
Nov 24 20:21:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:57 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 1832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:21:57 compute-0 ceph-mon[75677]: pgmap v1162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 101 op/s
Nov 24 20:21:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:57.559+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:57 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.953 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.954 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.954 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.955 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.955 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.957 257491 INFO nova.compute.manager [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Terminating instance
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.959 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "refresh_cache-a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.959 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquired lock "refresh_cache-a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:21:57 compute-0 nova_compute[257476]: 2025-11-24 20:21:57.960 257491 DEBUG nova.network.neutron [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:21:58 compute-0 nova_compute[257476]: 2025-11-24 20:21:58.108 257491 DEBUG nova.network.neutron [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:21:58 compute-0 nova_compute[257476]: 2025-11-24 20:21:58.171 257491 DEBUG nova.network.neutron [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Updating instance_info_cache with network_info: [{"id": "fbcb6a22-b0b3-4e46-8e69-66b38826f649", "address": "fa:16:3e:bd:83:ee", "network": {"id": "4a485bc3-226d-456b-9867-555937265557", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-477509760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e6d5c1eae48bfa49e12800a76eaa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbcb6a22-b0", "ovs_interfaceid": "fbcb6a22-b0b3-4e46-8e69-66b38826f649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:21:58 compute-0 nova_compute[257476]: 2025-11-24 20:21:58.220 257491 DEBUG oslo_concurrency.lockutils [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Releasing lock "refresh_cache-43bc955c-77ee-42d8-98e2-84163217d1aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:21:58 compute-0 nova_compute[257476]: 2025-11-24 20:21:58.220 257491 DEBUG nova.compute.manager [None req-ba3f03dd-66f2-4990-aa3b-a93f29ccb4b1 fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Instance network_info: |[{"id": "fbcb6a22-b0b3-4e46-8e69-66b38826f649", "address": "fa:16:3e:bd:83:ee", "network": {"id": "4a485bc3-226d-456b-9867-555937265557", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-477509760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e6d5c1eae48bfa49e12800a76eaa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbcb6a22-b0", "ovs_interfaceid": "fbcb6a22-b0b3-4e46-8e69-66b38826f649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 20:21:58 compute-0 nova_compute[257476]: 2025-11-24 20:21:58.221 257491 DEBUG oslo_concurrency.lockutils [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] Acquired lock "refresh_cache-43bc955c-77ee-42d8-98e2-84163217d1aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:21:58 compute-0 nova_compute[257476]: 2025-11-24 20:21:58.221 257491 DEBUG nova.network.neutron [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Refreshing network info cache for port fbcb6a22-b0b3-4e46-8e69-66b38826f649 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
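The `Updating instance_info_cache` line above carries the full network_info payload as JSON, so the fields the rest of this log keeps referring to (MAC, fixed IP, MTU, OVS bridge) are plain dict lookups away. The sketch below parses an abridged copy of that exact payload; fields not used here are omitted.

```python
# Parse the network_info JSON logged above (abridged to the fields used
# elsewhere in this log: port id, MAC, fixed IPs, MTU, bridge).
import json

network_info = json.loads('''[{"id": "fbcb6a22-b0b3-4e46-8e69-66b38826f649",
  "address": "fa:16:3e:bd:83:ee",
  "network": {"bridge": "br-int",
    "subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4}]}],
    "meta": {"mtu": 1442, "tunneled": true}},
  "devname": "tapfbcb6a22-b0", "vnic_type": "normal", "active": false}]''')

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"] for ip in subnet["ips"]]
    print(vif["id"], vif["address"], ips,
          "mtu", vif["network"]["meta"]["mtu"],
          "bridge", vif["network"]["bridge"])
```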
Nov 24 20:21:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:58.527+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:58 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:59 compute-0 nova_compute[257476]: 2025-11-24 20:21:59.176 257491 DEBUG nova.network.neutron [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:21:59 compute-0 nova_compute[257476]: 2025-11-24 20:21:59.195 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Releasing lock "refresh_cache-a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:21:59 compute-0 nova_compute[257476]: 2025-11-24 20:21:59.196 257491 DEBUG nova.compute.manager [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 20:21:59 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 24 20:21:59 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 5.039s CPU time.
Nov 24 20:21:59 compute-0 systemd-machined[218733]: Machine qemu-1-instance-00000001 terminated.
Nov 24 20:21:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 98 op/s
Nov 24 20:21:59 compute-0 nova_compute[257476]: 2025-11-24 20:21:59.423 257491 INFO nova.virt.libvirt.driver [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Instance destroyed successfully.
Nov 24 20:21:59 compute-0 nova_compute[257476]: 2025-11-24 20:21:59.423 257491 DEBUG nova.objects.instance [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lazy-loading 'resources' on Instance uuid a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:21:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:21:59.551+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:59 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:21:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 20:21:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:21:59 compute-0 ceph-mon[75677]: pgmap v1163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 98 op/s
Nov 24 20:22:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:00.512+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:00 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:00 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.717 257491 INFO nova.virt.libvirt.driver [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Deleting instance files /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_del
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.718 257491 INFO nova.virt.libvirt.driver [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Deletion of /var/lib/nova/instances/a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f_del complete
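Note the `_del` suffix in the two deletion lines above: the instance directory is renamed to `<uuid>_del` before removal, so an interrupted cleanup leaves an obviously dead directory rather than a half-deleted live one. A sketch of that rename-then-delete step, with the path taken from this log; the helper is illustrative, not nova's implementation.

```python
# Sketch of the rename-then-delete step behind the two lines above.
import os
import shutil

def delete_instance_files(base="/var/lib/nova/instances",
                          uuid="a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f"):
    target = os.path.join(base, uuid)
    marker = target + "_del"          # the path seen in the log lines above
    if os.path.exists(target):
        os.rename(target, marker)     # atomic on the same filesystem
    if os.path.exists(marker):
        shutil.rmtree(marker, ignore_errors=True)
        print(f"Deletion of {marker} complete")
```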
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.854 257491 DEBUG nova.virt.libvirt.host [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.855 257491 INFO nova.virt.libvirt.host [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] UEFI support detected
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.859 257491 INFO nova.compute.manager [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Took 1.66 seconds to destroy the instance on the hypervisor.
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.860 257491 DEBUG oslo.service.loopingcall [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.860 257491 DEBUG nova.compute.manager [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 20:22:00 compute-0 nova_compute[257476]: 2025-11-24 20:22:00.861 257491 DEBUG nova.network.neutron [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 20:22:01 compute-0 nova_compute[257476]: 2025-11-24 20:22:01.095 257491 DEBUG nova.network.neutron [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:22:01 compute-0 nova_compute[257476]: 2025-11-24 20:22:01.108 257491 DEBUG nova.network.neutron [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:22:01 compute-0 nova_compute[257476]: 2025-11-24 20:22:01.122 257491 INFO nova.compute.manager [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Took 0.26 seconds to deallocate network for instance.
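The loopingcall line above shows the manager waiting on `_deallocate_network_with_retries` rather than calling `deallocate_for_instance()` directly: the deallocation is wrapped so transient Neutron failures are retried before the terminate gives up. A heavily simplified stand-in for that shape is below; the real wrapper runs under oslo.service's loopingcall helpers, and the retry counts and delay here are illustrative only.

```python
# Simplified stand-in for a retried network deallocation: bounded attempts
# with a fixed delay, re-raising on the final failure. Not nova's code.
import time

def deallocate_with_retries(deallocate, attempts=3, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            return deallocate()
        except Exception as exc:  # nova narrows this to network client errors
            if attempt == attempts:
                raise
            print(f"deallocate failed ({exc}); retry {attempt}/{attempts}")
            time.sleep(delay)
```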
Nov 24 20:22:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 88 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 24 20:22:01 compute-0 nova_compute[257476]: 2025-11-24 20:22:01.347 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:01 compute-0 nova_compute[257476]: 2025-11-24 20:22:01.348 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:01 compute-0 nova_compute[257476]: 2025-11-24 20:22:01.440 257491 DEBUG oslo_concurrency.processutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:01.505+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:01 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:01 compute-0 ceph-mon[75677]: pgmap v1164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 88 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 24 20:22:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 1842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 24 20:22:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 24 20:22:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 24 20:22:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:22:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918667269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:02 compute-0 nova_compute[257476]: 2025-11-24 20:22:02.002 257491 DEBUG oslo_concurrency.processutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
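The CMD line above is the resource tracker shelling out to `ceph df` and parsing its JSON to size the RBD-backed disk inventory; the same call works standalone. The sketch below runs the exact command from the log and reads the cluster-wide totals (key names follow `ceph df --format=json` output).

```python
# Run the exact command logged above and report cluster capacity, the same
# numbers the pgmap lines summarize as "60 GiB / 60 GiB avail".
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout
stats = json.loads(out)["stats"]
gib = 1024 ** 3
print(f'{stats["total_avail_bytes"] / gib:.0f} GiB avail '
      f'of {stats["total_bytes"] / gib:.0f} GiB '
      f'({stats["total_used_bytes"] / gib:.1f} GiB used)')
```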
Nov 24 20:22:02 compute-0 nova_compute[257476]: 2025-11-24 20:22:02.018 257491 DEBUG nova.compute.provider_tree [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:22:02 compute-0 nova_compute[257476]: 2025-11-24 20:22:02.037 257491 DEBUG nova.scheduler.client.report [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
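The inventory dict above is everything placement needs to size this host: usable capacity per resource class is (total - reserved) × allocation_ratio. Applied to the logged values, that yields 32 schedulable VCPUs, 7167 MB of RAM, and about 52 GB of disk, as the short worked example below shows.

```python
# Capacity as placement computes it from the inventory logged above:
# (total - reserved) * allocation_ratio per resource class.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")  # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2
```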
Nov 24 20:22:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:02.518+0000 7f1a67169640 -1 osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:02 compute-0 ceph-osd[89640]: osd.1 135 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:02 compute-0 podman[271361]: 2025-11-24 20:22:02.839814505 +0000 UTC m=+0.071667791 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 20:22:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:03 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 1842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:03 compute-0 ceph-mon[75677]: osdmap e136: 3 total, 3 up, 3 in
Nov 24 20:22:03 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1918667269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 88 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 24 20:22:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:03.476+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:03 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:03 compute-0 nova_compute[257476]: 2025-11-24 20:22:03.535 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:03 compute-0 nova_compute[257476]: 2025-11-24 20:22:03.863 257491 DEBUG nova.network.neutron [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Updated VIF entry in instance network info cache for port fbcb6a22-b0b3-4e46-8e69-66b38826f649. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 20:22:03 compute-0 nova_compute[257476]: 2025-11-24 20:22:03.864 257491 DEBUG nova.network.neutron [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Updating instance_info_cache with network_info: [{"id": "fbcb6a22-b0b3-4e46-8e69-66b38826f649", "address": "fa:16:3e:bd:83:ee", "network": {"id": "4a485bc3-226d-456b-9867-555937265557", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-477509760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e6d5c1eae48bfa49e12800a76eaa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbcb6a22-b0", "ovs_interfaceid": "fbcb6a22-b0b3-4e46-8e69-66b38826f649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:22:03 compute-0 nova_compute[257476]: 2025-11-24 20:22:03.878 257491 DEBUG oslo_concurrency.lockutils [req-69045cab-2ff3-46df-8db2-2143db23fa99 req-4b573f56-ec0c-4fc9-bd5f-2be20128e40a b50a45f13fc34a9aabdfef0b89af3db3 bd01811db01143da8b89621d101abbcb - - default default] Releasing lock "refresh_cache-43bc955c-77ee-42d8-98e2-84163217d1aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:22:03 compute-0 nova_compute[257476]: 2025-11-24 20:22:03.886 257491 INFO nova.scheduler.client.report [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Deleted allocations for instance a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f
Nov 24 20:22:04 compute-0 nova_compute[257476]: 2025-11-24 20:22:04.066 257491 DEBUG oslo_concurrency.lockutils [None req-321efd91-2b92-4d19-b37d-74900d213f10 03dfa2b5407c443a974308a641006734 e9aa686e6bd349388ffa9f482970d883 - - default default] Lock "a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:04.443+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:04 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:04 compute-0 ceph-mon[75677]: pgmap v1166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 88 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 24 20:22:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 152 op/s
Nov 24 20:22:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:05.401+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:05 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:06 compute-0 ceph-mon[75677]: pgmap v1167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 152 op/s
Nov 24 20:22:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:06.449+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:06 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 98 op/s
Nov 24 20:22:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:07.435+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:07 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 1847 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:08 compute-0 nova_compute[257476]: 2025-11-24 20:22:08.168 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:08.412+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:08 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:08 compute-0 ceph-mon[75677]: pgmap v1168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 98 op/s
Nov 24 20:22:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:08 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 1847 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 92 op/s
Nov 24 20:22:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:22:09.374 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:22:09.375 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:22:09.375 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:09.378+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:09 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:10 compute-0 nova_compute[257476]: 2025-11-24 20:22:10.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
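The "Running periodic task ComputeManager.*" lines here and below (`_poll_unconfirmed_resizes`, `_poll_volume_usage`, `_check_instance_build_time`, `update_available_resource`) are fired by oslo_service's periodic task loop. A minimal stand-in using the standard-library scheduler is below; the intervals are illustrative, not nova's configured values.

```python
# Minimal stand-in for a periodic task loop: each registered function
# re-arms itself on its own interval inside one scheduler.
import sched
import time

scheduler = sched.scheduler(time.monotonic, time.sleep)

def periodic(name, interval):
    def task():
        print(f"Running periodic task {name}")
        scheduler.enter(interval, 0, task)  # re-arm for the next cycle
    scheduler.enter(interval, 0, task)

periodic("ComputeManager._poll_volume_usage", 60)
periodic("ComputeManager.update_available_resource", 60)
# scheduler.run()  # blocks, running the tasks forever
```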
Nov 24 20:22:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:10 compute-0 ceph-mon[75677]: pgmap v1169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 92 op/s
Nov 24 20:22:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:10.398+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:10 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 409 B/s wr, 14 op/s
Nov 24 20:22:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:11.380+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:11 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.146 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.174 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.174 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.175 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.175 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.176 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:12.376+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:12 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:22:12 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655300795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.615 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:12 compute-0 ceph-mon[75677]: pgmap v1170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 409 B/s wr, 14 op/s
Nov 24 20:22:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:12 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2655300795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.812 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.813 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5064MB free_disk=59.971431732177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.813 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.814 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.891 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.891 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.891 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:22:12 compute-0 nova_compute[257476]: 2025-11-24 20:22:12.918 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 359 B/s wr, 12 op/s
Nov 24 20:22:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:13.364+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:13 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:22:13 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250900728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:13 compute-0 nova_compute[257476]: 2025-11-24 20:22:13.446 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:13 compute-0 nova_compute[257476]: 2025-11-24 20:22:13.455 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:22:13 compute-0 nova_compute[257476]: 2025-11-24 20:22:13.476 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:22:13 compute-0 nova_compute[257476]: 2025-11-24 20:22:13.501 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:22:13 compute-0 nova_compute[257476]: 2025-11-24 20:22:13.502 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:13 compute-0 ceph-mon[75677]: pgmap v1171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 359 B/s wr, 12 op/s
Nov 24 20:22:13 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2250900728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:14.373+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:14 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:14 compute-0 nova_compute[257476]: 2025-11-24 20:22:14.421 257491 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764015719.419862, a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:22:14 compute-0 nova_compute[257476]: 2025-11-24 20:22:14.422 257491 INFO nova.compute.manager [-] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] VM Stopped (Lifecycle Event)
Nov 24 20:22:14 compute-0 nova_compute[257476]: 2025-11-24 20:22:14.503 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:14 compute-0 nova_compute[257476]: 2025-11-24 20:22:14.579 257491 DEBUG nova.compute.manager [None req-7f1a6fee-318d-425f-b24e-0d53149543df - - - - - -] [instance: a81b2b4c-cc1e-4a84-ae91-3b766b5dc52f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:22:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:14 compute-0 podman[271425]: 2025-11-24 20:22:14.885040177 +0000 UTC m=+0.104071262 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Nov 24 20:22:15 compute-0 nova_compute[257476]: 2025-11-24 20:22:15.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:15 compute-0 nova_compute[257476]: 2025-11-24 20:22:15.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:15 compute-0 nova_compute[257476]: 2025-11-24 20:22:15.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:15 compute-0 nova_compute[257476]: 2025-11-24 20:22:15.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:22:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 24 20:22:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:15.410+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:15 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:15 compute-0 ceph-mon[75677]: pgmap v1172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 24 20:22:16 compute-0 nova_compute[257476]: 2025-11-24 20:22:16.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:22:16 compute-0 nova_compute[257476]: 2025-11-24 20:22:16.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:22:16 compute-0 nova_compute[257476]: 2025-11-24 20:22:16.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:22:16 compute-0 nova_compute[257476]: 2025-11-24 20:22:16.175 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:22:16 compute-0 nova_compute[257476]: 2025-11-24 20:22:16.176 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:22:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:22:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/177356864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:22:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:22:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/177356864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:22:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:16.448+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:16 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/177356864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:22:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/177356864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:22:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 1852 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:17.491+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:17 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:17 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 1852 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:17 compute-0 ceph-mon[75677]: pgmap v1173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:18.533+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:18 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:18 compute-0 podman[271447]: 2025-11-24 20:22:18.899275602 +0000 UTC m=+0.123021962 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:22:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:19.536+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:19 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:19 compute-0 ceph-mon[75677]: pgmap v1174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.858 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.858 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.875 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.936 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.937 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.942 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:22:19 compute-0 nova_compute[257476]: 2025-11-24 20:22:19.943 257491 INFO nova.compute.claims [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.054 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:20.493+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:20 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:22:20 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771919242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.530 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.539 257491 DEBUG nova.compute.provider_tree [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.557 257491 DEBUG nova.scheduler.client.report [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.589 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.590 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.634 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.651 257491 INFO nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.668 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.748 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.750 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.751 257491 INFO nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Creating image(s)
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.776 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.801 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:20 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1771919242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.834 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.838 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.927 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.928 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.930 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.930 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.963 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:20 compute-0 nova_compute[257476]: 2025-11-24 20:22:20.968 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.287 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.375 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] resizing rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.502 257491 DEBUG nova.objects.instance [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lazy-loading 'migration_context' on Instance uuid 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.514 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.515 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Ensure instance console log exists: /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.516 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.516 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.517 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.520 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '7b556eea-44a0-401c-a3e5-213a835e1fc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.525 257491 WARNING nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.532 257491 DEBUG nova.virt.libvirt.host [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.533 257491 DEBUG nova.virt.libvirt.host [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 20:22:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:21.533+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:21 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.538 257491 DEBUG nova.virt.libvirt.host [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.539 257491 DEBUG nova.virt.libvirt.host [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.540 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.540 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T20:21:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='67120476-40a0-42ea-948d-218bf9a62474',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.541 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.542 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.542 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.543 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.543 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.544 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.544 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.545 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.545 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.546 257491 DEBUG nova.virt.hardware [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 20:22:21 compute-0 nova_compute[257476]: 2025-11-24 20:22:21.551 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:21 compute-0 ceph-mon[75677]: pgmap v1175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:21.823+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:21 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 19 slow ops, oldest one blocked for 1862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
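The SLOW_OPS health check above (19 slow ops, oldest blocked ~1862 s, spread across osd.0 and osd.1) can also be read programmatically rather than scraped from the mon log. A sketch using "ceph health detail --format=json"; the JSON field layout ("checks" -> "SLOW_OPS" -> "summary") is how recent Ceph releases structure it, but treat it as an assumption:

# Sketch: pull the SLOW_OPS check out of "ceph health detail --format=json".
import json
import subprocess

def slow_ops_summary(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    out = subprocess.check_output(
        ["ceph", "health", "detail", "--format=json",
         "--id", client_id, "--conf", conf])
    checks = json.loads(out).get("checks", {})
    slow = checks.get("SLOW_OPS")
    if not slow:
        return None
    # e.g. "19 slow ops, oldest one blocked for 1862 sec, daemons
    # [osd.0,osd.1] have slow ops."
    return slow["summary"]["message"]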
Nov 24 20:22:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:22:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152911669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.049 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.082 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.088 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:22:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051279850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.534 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
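Both "ceph mon dump --format=json" calls above complete in about half a second and hand nova the monmap as JSON, which is how it discovers monitor addresses for the RBD backend. A standalone sketch of the same probe, using the exact command from the log (assumes the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable; the "addr" field may appear as "public_addr"/"public_addrs" on newer Ceph releases):

# Sketch: fetch the Ceph monmap the way the log shows nova doing it.
import json
import subprocess

def monitor_addresses(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", client_id, "--conf", conf])
    monmap = json.loads(out)
    return [(m["name"], m["addr"]) for m in monmap.get("mons", [])]

# On this deployment the single mon answers at 192.168.122.100:6789.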
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.538 257491 DEBUG nova.objects.instance [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:22:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:22.543+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:22 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.555 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] End _get_guest_xml xml=<domain type="kvm">
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <uuid>5bc0dcb2-bec5-4d33-a8c8-42baca81a650</uuid>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <name>instance-00000003</name>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <memory>131072</memory>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <vcpu>1</vcpu>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <metadata>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1557015641</nova:name>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:creationTime>2025-11-24 20:22:21</nova:creationTime>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:flavor name="m1.nano">
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:memory>128</nova:memory>
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:disk>1</nova:disk>
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:swap>0</nova:swap>
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:vcpus>1</nova:vcpus>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       </nova:flavor>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:owner>
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:user uuid="9ea4388cccf747e2a607d61819d256a9">tempest-ServerDiagnosticsV248Test-1395222832-project-member</nova:user>
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <nova:project uuid="a7d03dd4405c44598bab35e85a4fc731">tempest-ServerDiagnosticsV248Test-1395222832</nova:project>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       </nova:owner>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:root type="image" uuid="7b556eea-44a0-401c-a3e5-213a835e1fc5"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <nova:ports/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </nova:instance>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </metadata>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <sysinfo type="smbios">
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <system>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <entry name="manufacturer">RDO</entry>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <entry name="product">OpenStack Compute</entry>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <entry name="serial">5bc0dcb2-bec5-4d33-a8c8-42baca81a650</entry>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <entry name="uuid">5bc0dcb2-bec5-4d33-a8c8-42baca81a650</entry>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <entry name="family">Virtual Machine</entry>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </system>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </sysinfo>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <os>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <boot dev="hd"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <smbios mode="sysinfo"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </os>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <features>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <acpi/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <apic/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <vmcoreinfo/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </features>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <clock offset="utc">
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <timer name="hpet" present="no"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </clock>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <cpu mode="host-model" match="exact">
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <disk type="network" device="disk">
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk">
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       </source>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <target dev="vda" bus="virtio"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <disk type="network" device="cdrom">
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config">
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       </source>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:22:22 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <target dev="sda" bus="sata"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <serial type="pty">
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <log file="/var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/console.log" append="off"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </serial>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <video>
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <model type="virtio"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </video>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <input type="tablet" bus="usb"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <rng model="virtio">
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <backend model="random">/dev/urandom</backend>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <controller type="usb" index="0"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     <memballoon model="virtio">
Nov 24 20:22:22 compute-0 nova_compute[257476]:       <stats period="10"/>
Nov 24 20:22:22 compute-0 nova_compute[257476]:     </memballoon>
Nov 24 20:22:22 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:22:22 compute-0 nova_compute[257476]: </domain>
Nov 24 20:22:22 compute-0 nova_compute[257476]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
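The domain XML logged above is what nova hands to libvirt: an m1.nano q35 guest with two RBD-backed disks (the root disk vda on virtio, the config-drive cdrom sda on sata), authenticated against the Ceph secret UUID shown. A short sketch that pulls the disk sources back out of such XML with the standard library — illustrative tooling, not nova code:

# Sketch: extract disk targets and RBD source names from a libvirt domain
# XML document like the one nova logged above. xml_text holds the XML.
import xml.etree.ElementTree as ET

def rbd_disks(xml_text):
    root = ET.fromstring(xml_text)
    disks = []
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and tgt is not None and src.get("protocol") == "rbd":
            disks.append((tgt.get("dev"), src.get("name")))
    return disks

# For the guest above this returns:
# [('vda', 'vms/5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk'),
#  ('sda', 'vms/5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config')]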
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.608 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.609 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.609 257491 INFO nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Using config drive
Nov 24 20:22:22 compute-0 nova_compute[257476]: 2025-11-24 20:22:22.641 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:22.783+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:22 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:22 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:22 compute-0 ceph-mon[75677]: Health check update: 19 slow ops, oldest one blocked for 1862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:22 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1152911669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:22:22 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2051279850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.048 257491 INFO nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Creating config drive at /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/disk.config
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.056 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4_b5e7z6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.199 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4_b5e7z6" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.239 257491 DEBUG nova.storage.rbd_utils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] rbd image 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.245 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/disk.config 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.458 257491 DEBUG oslo_concurrency.processutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/disk.config 5bc0dcb2-bec5-4d33-a8c8-42baca81a650_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:23 compute-0 nova_compute[257476]: 2025-11-24 20:22:23.459 257491 INFO nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Deleting local config drive /var/lib/nova/instances/5bc0dcb2-bec5-4d33-a8c8-42baca81a650/disk.config because it was imported into RBD.
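The three steps above form the whole config-drive flow: mkisofs builds an ISO 9660 volume labelled config-2 from a temp directory, "rbd import" pushes it into the vms pool as <uuid>_disk.config, and the local copy is deleted. A hedged sketch reproducing it by hand, with flags and pool name taken from the log (run as a user holding the client.openstack keyring; -publisher/-quiet omitted for brevity):

# Sketch: nova's config-drive flow as logged above -- make the ISO,
# import it into RBD, then remove the local file.
import os
import subprocess

def import_config_drive(src_dir, iso_path, image_name, pool="vms"):
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", src_dir])
    subprocess.check_call(
        ["rbd", "import", "--pool", pool, iso_path, image_name,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    os.unlink(iso_path)  # "Deleting local config drive ... imported into RBD."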
Nov 24 20:22:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:23.539+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:23 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:23 compute-0 systemd-machined[218733]: New machine qemu-2-instance-00000003.
Nov 24 20:22:23 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Nov 24 20:22:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:23.826+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:23 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:23 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:23 compute-0 ceph-mon[75677]: pgmap v1176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 207 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.265 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015744.264816, 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.266 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] VM Resumed (Lifecycle Event)
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.270 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.271 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.276 257491 INFO nova.virt.libvirt.driver [-] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Instance spawned successfully.
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.276 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.308 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.313 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.358 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.359 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015744.2653546, 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.359 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] VM Started (Lifecycle Event)
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.379 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.380 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.380 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.380 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.381 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.381 257491 DEBUG nova.virt.libvirt.driver [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.385 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.388 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:22:24
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr']
Nov 24 20:22:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
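The mgr balancer lines above show one pass of the upmap optimizer: mode upmap, max misplaced ratio 0.05, eleven pools scanned, and 0/10 changes prepared (the cluster is already balanced). A sketch of checking the same state via the CLI; "ceph balancer status" is a real mgr command, but the JSON field names returned are an assumption here:

# Sketch: query the mgr balancer the log shows running above.
import json
import subprocess

def balancer_status(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json",
         "--id", client_id, "--conf", conf])
    status = json.loads(out)
    return status.get("mode"), status.get("active")  # e.g. ("upmap", True)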
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.425 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] During sync_power_state the instance has a pending task (spawning). Skip.
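For both the "Resumed" and "Started" lifecycle events above, nova compares the DB power_state (0, NOSTATE) with the hypervisor's (1, RUNNING) and skips the sync because task_state is still "spawning" — exactly the two "pending task (spawning). Skip." lines. A toy sketch of that guard; the power-state codes match nova's constants, but the control flow is deliberately simplified:

# Sketch: the sync_power_state guard seen firing twice in the log above.
# nova power_state codes: 0 = NOSTATE, 1 = RUNNING (others omitted).
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        # "During sync_power_state the instance has a pending task ... Skip."
        return "skip: pending task %s" % task_state
    if db_power_state != vm_power_state:
        return "update DB: %d -> %d" % (db_power_state, vm_power_state)
    return "in sync"

print(sync_power_state(NOSTATE, RUNNING, "spawning"))
# skip: pending task spawning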
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.469 257491 INFO nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Took 3.72 seconds to spawn the instance on the hypervisor.
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.470 257491 DEBUG nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.517 257491 INFO nova.compute.manager [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Took 4.60 seconds to build instance.
Nov 24 20:22:24 compute-0 nova_compute[257476]: 2025-11-24 20:22:24.533 257491 DEBUG oslo_concurrency.lockutils [None req-e89105d0-8f75-45be-a1b1-a7a8b622478a 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:24.565+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:24 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:24.782+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:24 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:24 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:25 compute-0 nova_compute[257476]: 2025-11-24 20:22:25.285 257491 DEBUG nova.compute.manager [None req-01988f08-5219-45c3-9cd7-18ac0af4ff7e ea2f6f3706c94607aa79e2d12a7dcead 0c2e62c353ab4d14bed78fee5a096f25 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:22:25 compute-0 nova_compute[257476]: 2025-11-24 20:22:25.289 257491 INFO nova.compute.manager [None req-01988f08-5219-45c3-9cd7-18ac0af4ff7e ea2f6f3706c94607aa79e2d12a7dcead 0c2e62c353ab4d14bed78fee5a096f25 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Retrieving diagnostics
Nov 24 20:22:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 115 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Nov 24 20:22:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:25.608+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:25 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:25.773+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:25 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:25 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:25 compute-0 ceph-mon[75677]: pgmap v1177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 115 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Nov 24 20:22:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:26.656+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:26 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:26.763+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:26 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:26 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 564 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 24 20:22:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:27.680+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:27 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:22:27.687 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:22:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:22:27.689 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:22:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:27.780+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:27 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:27 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:27 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:27 compute-0 ceph-mon[75677]: pgmap v1178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 564 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 24 20:22:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:28.724+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:28 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:28.795+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:28 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:28 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:29.766+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:29 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:29.776+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:29 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:29 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:29 compute-0 ceph-mon[75677]: pgmap v1179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:30.750+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:30 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:30.782+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:30 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:30 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:31.710+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:31 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:31.762+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:31 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:31 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:31 compute-0 ceph-mon[75677]: pgmap v1180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:31 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1871 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:32.757+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:32 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:32.778+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:32 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:32 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1871 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:33 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:22:33.691 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:22:33 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:33.785+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:33.814+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:33 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:33 compute-0 podman[271836]: 2025-11-24 20:22:33.865631188 +0000 UTC m=+0.092877191 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 20:22:33 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:33 compute-0 ceph-mon[75677]: pgmap v1181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006292300538842222 of space, bias 1.0, pg target 0.18876901616526667 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:22:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
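The pg_autoscaler pass above logs, for each pool, its fraction of raw capacity, a bias, and a resulting pg target. The logged targets are numerically consistent with pg_target = usage_fraction * bias * (num_osds * target_pgs_per_osd), assuming a 3-OSD cluster and the default mon_target_pg_per_osd of 100 (neither constant appears in the log, so both are assumptions); the target is then presumably rounded to a power of two and clamped by the pool's minimum, which would explain why sub-1 targets still quantize to 32 or 16. A minimal sketch that reproduces the logged numbers:

def pg_target(usage_fraction, bias, num_osds=3, target_pgs_per_osd=100):
    # Assumed formula: capacity fraction scaled by the pool bias and the
    # cluster-wide PG budget (num_osds * mon_target_pg_per_osd = 300 here).
    return usage_fraction * bias * num_osds * target_pgs_per_osd

# Values copied verbatim from the log lines above:
assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12  # '.mgr'
assert abs(pg_target(0.0006292300538842222, 1.0) - 0.18876901616526667) < 1e-12    # 'vms'
assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12  # 'cephfs.cephfs.meta'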
Nov 24 20:22:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:34.738+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:34 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:34.790+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:34 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:34 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:34 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:35 compute-0 sudo[271856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:35 compute-0 sudo[271856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:35 compute-0 sudo[271856]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:35 compute-0 sudo[271881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:22:35 compute-0 sudo[271881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:35 compute-0 sudo[271881]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:35 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 20:22:35 compute-0 sudo[271907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:35 compute-0 sudo[271907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:35 compute-0 sudo[271907]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:35 compute-0 sudo[271932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 20:22:35 compute-0 sudo[271932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.586 257491 DEBUG nova.compute.manager [None req-ea0c8a7d-f49d-472d-bdf5-f82cd5a41abb ea2f6f3706c94607aa79e2d12a7dcead 0c2e62c353ab4d14bed78fee5a096f25 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.591 257491 INFO nova.compute.manager [None req-ea0c8a7d-f49d-472d-bdf5-f82cd5a41abb ea2f6f3706c94607aa79e2d12a7dcead 0c2e62c353ab4d14bed78fee5a096f25 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Retrieving diagnostics
Nov 24 20:22:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:35.736+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:35 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:35.767+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:35 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.877 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.878 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.879 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.879 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.880 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lock "5bc0dcb2-bec5-4d33-a8c8-42baca81a650-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.881 257491 INFO nova.compute.manager [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Terminating instance
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.882 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquiring lock "refresh_cache-5bc0dcb2-bec5-4d33-a8c8-42baca81a650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.882 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Acquired lock "refresh_cache-5bc0dcb2-bec5-4d33-a8c8-42baca81a650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:22:35 compute-0 nova_compute[257476]: 2025-11-24 20:22:35.883 257491 DEBUG nova.network.neutron [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
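The Acquiring/acquired/released DEBUG lines in the nova_compute block above are emitted by oslo.concurrency's lockutils, which Nova uses to serialize operations on a single instance (first the per-instance lock, then the shorter-lived "-events" lock). A minimal sketch of the same pattern, not the Nova source; the UUID is the instance from the log and the function body is hypothetical:

from oslo_concurrency import lockutils

INSTANCE_UUID = "5bc0dcb2-bec5-4d33-a8c8-42baca81a650"

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # Work done here holds the per-instance lock, so concurrent requests
    # against the same instance (like the diagnostics call above) queue
    # behind it, producing the "waited N s" timings seen in the log.
    pass

do_terminate_instance()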
Nov 24 20:22:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:35 compute-0 ceph-mon[75677]: pgmap v1182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 130 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 24 20:22:35 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:36 compute-0 podman[272027]: 2025-11-24 20:22:36.168386924 +0000 UTC m=+0.096048816 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:22:36 compute-0 podman[272027]: 2025-11-24 20:22:36.292163225 +0000 UTC m=+0.219825107 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:22:36 compute-0 nova_compute[257476]: 2025-11-24 20:22:36.478 257491 DEBUG nova.network.neutron [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:22:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:36.744+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:36 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:36.755+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:36 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:36 compute-0 nova_compute[257476]: 2025-11-24 20:22:36.994 257491 DEBUG nova.network.neutron [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:22:37 compute-0 nova_compute[257476]: 2025-11-24 20:22:37.008 257491 DEBUG oslo_concurrency.lockutils [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Releasing lock "refresh_cache-5bc0dcb2-bec5-4d33-a8c8-42baca81a650" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:22:37 compute-0 nova_compute[257476]: 2025-11-24 20:22:37.009 257491 DEBUG nova.compute.manager [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 20:22:37 compute-0 sudo[271932]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:22:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:22:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:37 compute-0 sudo[272188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 139 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 94 op/s
Nov 24 20:22:37 compute-0 sudo[272188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:37 compute-0 sudo[272188]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:37 compute-0 sudo[272213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:22:37 compute-0 sudo[272213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:37 compute-0 sudo[272213]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:37 compute-0 sudo[272238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:37 compute-0 sudo[272238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:37 compute-0 sudo[272238]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:37 compute-0 sudo[272263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:22:37 compute-0 sudo[272263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:37.717+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:37 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:37.723+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:37 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:37 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:37 compute-0 ceph-mon[75677]: pgmap v1183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 139 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 94 op/s
Nov 24 20:22:37 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:38 compute-0 sudo[272263]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ce5a43e6-7684-430d-a706-31a1d3fd0c06 does not exist
Nov 24 20:22:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e4f3673a-2a23-4722-87d9-fab00803ddf8 does not exist
Nov 24 20:22:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8f66ccb7-27cd-44f0-9f67-5637c42bc261 does not exist
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1877 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:22:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:22:38 compute-0 sudo[272319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:38 compute-0 sudo[272319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:38 compute-0 sudo[272319]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:38 compute-0 sudo[272344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:22:38 compute-0 sudo[272344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:38 compute-0 sudo[272344]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:38 compute-0 sudo[272369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:38 compute-0 sudo[272369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:38 compute-0 sudo[272369]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:38 compute-0 sudo[272394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:22:38 compute-0 sudo[272394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:38.728+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:38 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:38.746+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:38 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:38 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1877 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:22:38 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.176322622 +0000 UTC m=+0.094656959 container create 6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.118483664 +0000 UTC m=+0.036818051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:22:39 compute-0 systemd[1]: Started libpod-conmon-6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4.scope.
Nov 24 20:22:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.281832821 +0000 UTC m=+0.200167198 container init 6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.293646069 +0000 UTC m=+0.211980416 container start 6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.298924551 +0000 UTC m=+0.217258948 container attach 6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:22:39 compute-0 reverent_curie[272477]: 167 167
Nov 24 20:22:39 compute-0 systemd[1]: libpod-6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4.scope: Deactivated successfully.
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.301226673 +0000 UTC m=+0.219561010 container died 6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:22:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d16e9953bfaa5dc6d845ea91f722ac3e515618059de0abee72d4e9d2d21515a0-merged.mount: Deactivated successfully.
Nov 24 20:22:39 compute-0 podman[272460]: 2025-11-24 20:22:39.36093011 +0000 UTC m=+0.279264457 container remove 6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:22:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 24 20:22:39 compute-0 systemd[1]: libpod-conmon-6eb16975a833c98069fb76aee6017f6e4668f0f98ddbdca0103cdccbd236bac4.scope: Deactivated successfully.
Nov 24 20:22:39 compute-0 podman[272499]: 2025-11-24 20:22:39.611910886 +0000 UTC m=+0.070859118 container create 674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:22:39 compute-0 systemd[1]: Started libpod-conmon-674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c.scope.
Nov 24 20:22:39 compute-0 podman[272499]: 2025-11-24 20:22:39.583975274 +0000 UTC m=+0.042923556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:22:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12d652b3ada4efbece034d3b795a139c453c5c4cc9653c97a1d460296c971db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12d652b3ada4efbece034d3b795a139c453c5c4cc9653c97a1d460296c971db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12d652b3ada4efbece034d3b795a139c453c5c4cc9653c97a1d460296c971db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12d652b3ada4efbece034d3b795a139c453c5c4cc9653c97a1d460296c971db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12d652b3ada4efbece034d3b795a139c453c5c4cc9653c97a1d460296c971db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:39 compute-0 podman[272499]: 2025-11-24 20:22:39.714262322 +0000 UTC m=+0.173210554 container init 674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:22:39 compute-0 podman[272499]: 2025-11-24 20:22:39.728906876 +0000 UTC m=+0.187855098 container start 674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:22:39 compute-0 podman[272499]: 2025-11-24 20:22:39.733006866 +0000 UTC m=+0.191955088 container attach 674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:22:39 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:39.739+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:39.752+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:39 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:39 compute-0 ceph-mon[75677]: pgmap v1184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 24 20:22:39 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:22:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:22:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:22:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:22:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:22:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:40.779+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:40 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:40.783+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:40 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:40 compute-0 confident_ganguly[272516]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:22:40 compute-0 confident_ganguly[272516]: --> relative data size: 1.0
Nov 24 20:22:40 compute-0 confident_ganguly[272516]: --> All data devices are unavailable
Nov 24 20:22:40 compute-0 systemd[1]: libpod-674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c.scope: Deactivated successfully.
Nov 24 20:22:40 compute-0 systemd[1]: libpod-674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c.scope: Consumed 1.167s CPU time.
Nov 24 20:22:40 compute-0 podman[272499]: 2025-11-24 20:22:40.936260776 +0000 UTC m=+1.395209008 container died 674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:22:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12d652b3ada4efbece034d3b795a139c453c5c4cc9653c97a1d460296c971db-merged.mount: Deactivated successfully.
Nov 24 20:22:41 compute-0 podman[272499]: 2025-11-24 20:22:41.008695076 +0000 UTC m=+1.467643278 container remove 674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ganguly, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:22:41 compute-0 systemd[1]: libpod-conmon-674c26214399682c6fa6b13361e78e113fcf79d29fd487aba20d5f0599f1ae3c.scope: Deactivated successfully.
Nov 24 20:22:41 compute-0 sudo[272394]: pam_unix(sudo:session): session closed for user root
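The ceph-volume lvm batch run that just closed above exited with "All data devices are unavailable": presumably the three LVs passed in are already prepared as OSDs, so there was nothing new to create. One way to check that, under the same assumption, is to parse the lvm list --format json output that cephadm itself requests a few seconds later (below); the fsid and the cephadm ceph-volume entry point are taken from the log, while the JSON field names are standard ceph-volume output:

import json
import subprocess

FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"

# Same query the orchestrator issues below via its copied cephadm script.
out = subprocess.run(
    ["cephadm", "ceph-volume", "--fsid", FSID, "--",
     "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

# ceph-volume returns a dict keyed by OSD id; each entry describes one LV.
for osd_id, devices in json.loads(out).items():
    for dev in devices:
        print(osd_id, dev.get("lv_path"), dev.get("tags", {}).get("ceph.osd_fsid"))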
Nov 24 20:22:41 compute-0 sudo[272555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:41 compute-0 sudo[272555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:41 compute-0 sudo[272555]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:41 compute-0 sudo[272580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:22:41 compute-0 sudo[272580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:41 compute-0 sudo[272580]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:41 compute-0 sudo[272605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:41 compute-0 sudo[272605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:41 compute-0 sudo[272605]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:41 compute-0 sudo[272630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:22:41 compute-0 sudo[272630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:41.766+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:41 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:41.822+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:41 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:41 compute-0 podman[272696]: 2025-11-24 20:22:41.926742227 +0000 UTC m=+0.052442642 container create 0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:22:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:41 compute-0 systemd[1]: Started libpod-conmon-0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08.scope.
Nov 24 20:22:41 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:41 compute-0 ceph-mon[75677]: pgmap v1185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:41 compute-0 podman[272696]: 2025-11-24 20:22:41.901132318 +0000 UTC m=+0.026832783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:22:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:22:42 compute-0 podman[272696]: 2025-11-24 20:22:42.030050258 +0000 UTC m=+0.155750703 container init 0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:22:42 compute-0 podman[272696]: 2025-11-24 20:22:42.041547737 +0000 UTC m=+0.167248112 container start 0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:22:42 compute-0 podman[272696]: 2025-11-24 20:22:42.045541345 +0000 UTC m=+0.171241790 container attach 0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 20:22:42 compute-0 affectionate_chebyshev[272712]: 167 167
Nov 24 20:22:42 compute-0 systemd[1]: libpod-0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08.scope: Deactivated successfully.
Nov 24 20:22:42 compute-0 podman[272696]: 2025-11-24 20:22:42.050403076 +0000 UTC m=+0.176103491 container died 0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcfa2b18334b760394594d68f2f7ea34e4284eaf2fc4c912e93fb46b8b4c022f-merged.mount: Deactivated successfully.
Nov 24 20:22:42 compute-0 podman[272696]: 2025-11-24 20:22:42.100261838 +0000 UTC m=+0.225962253 container remove 0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:22:42 compute-0 systemd[1]: libpod-conmon-0ef796147728a3b30fd555af448fd0d9da69ecbebe5e2a165424b347134dfa08.scope: Deactivated successfully.
Nov 24 20:22:42 compute-0 podman[272736]: 2025-11-24 20:22:42.37672738 +0000 UTC m=+0.090090246 container create 78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:22:42 compute-0 podman[272736]: 2025-11-24 20:22:42.334037991 +0000 UTC m=+0.047400917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:22:42 compute-0 systemd[1]: Started libpod-conmon-78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6.scope.
Nov 24 20:22:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1bb1676f42be1e1d71ab1420a42fa29a1ce7e6ce3721e7505b59399f28ad4dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1bb1676f42be1e1d71ab1420a42fa29a1ce7e6ce3721e7505b59399f28ad4dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1bb1676f42be1e1d71ab1420a42fa29a1ce7e6ce3721e7505b59399f28ad4dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1bb1676f42be1e1d71ab1420a42fa29a1ce7e6ce3721e7505b59399f28ad4dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:42 compute-0 podman[272736]: 2025-11-24 20:22:42.589281522 +0000 UTC m=+0.302644388 container init 78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:22:42 compute-0 podman[272736]: 2025-11-24 20:22:42.603010061 +0000 UTC m=+0.316372917 container start 78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:22:42 compute-0 podman[272736]: 2025-11-24 20:22:42.607171123 +0000 UTC m=+0.320534079 container attach 78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:22:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:42.782+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:42 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:42.866+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:42 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:42 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:43 compute-0 pensive_ride[272752]: {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:     "0": [
Nov 24 20:22:43 compute-0 pensive_ride[272752]:         {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "devices": [
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "/dev/loop3"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             ],
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_name": "ceph_lv0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_size": "21470642176",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "name": "ceph_lv0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "tags": {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cluster_name": "ceph",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.crush_device_class": "",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.encrypted": "0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osd_id": "0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.type": "block",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.vdo": "0"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             },
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "type": "block",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "vg_name": "ceph_vg0"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:         }
Nov 24 20:22:43 compute-0 pensive_ride[272752]:     ],
Nov 24 20:22:43 compute-0 pensive_ride[272752]:     "1": [
Nov 24 20:22:43 compute-0 pensive_ride[272752]:         {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "devices": [
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "/dev/loop4"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             ],
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_name": "ceph_lv1",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_size": "21470642176",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "name": "ceph_lv1",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "tags": {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cluster_name": "ceph",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.crush_device_class": "",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.encrypted": "0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osd_id": "1",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.type": "block",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.vdo": "0"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             },
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "type": "block",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "vg_name": "ceph_vg1"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:         }
Nov 24 20:22:43 compute-0 pensive_ride[272752]:     ],
Nov 24 20:22:43 compute-0 pensive_ride[272752]:     "2": [
Nov 24 20:22:43 compute-0 pensive_ride[272752]:         {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "devices": [
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "/dev/loop5"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             ],
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_name": "ceph_lv2",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_size": "21470642176",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "name": "ceph_lv2",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "tags": {
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.cluster_name": "ceph",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.crush_device_class": "",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.encrypted": "0",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osd_id": "2",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.type": "block",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:                 "ceph.vdo": "0"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             },
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "type": "block",
Nov 24 20:22:43 compute-0 pensive_ride[272752]:             "vg_name": "ceph_vg2"
Nov 24 20:22:43 compute-0 pensive_ride[272752]:         }
Nov 24 20:22:43 compute-0 pensive_ride[272752]:     ]
Nov 24 20:22:43 compute-0 pensive_ride[272752]: }
Nov 24 20:22:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:43 compute-0 systemd[1]: libpod-78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6.scope: Deactivated successfully.
Nov 24 20:22:43 compute-0 podman[272736]: 2025-11-24 20:22:43.377154749 +0000 UTC m=+1.090517595 container died 78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1bb1676f42be1e1d71ab1420a42fa29a1ce7e6ce3721e7505b59399f28ad4dd-merged.mount: Deactivated successfully.
Nov 24 20:22:43 compute-0 podman[272736]: 2025-11-24 20:22:43.432017737 +0000 UTC m=+1.145380563 container remove 78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:22:43 compute-0 systemd[1]: libpod-conmon-78274bef5fda324ef1b9eb232942f2e6394a19a24e359ef0739a9c54bab832a6.scope: Deactivated successfully.
Nov 24 20:22:43 compute-0 sudo[272630]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:43 compute-0 sudo[272775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:43 compute-0 sudo[272775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:43 compute-0 sudo[272775]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:43 compute-0 sudo[272800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:22:43 compute-0 sudo[272800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:43 compute-0 sudo[272800]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:43 compute-0 sudo[272825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:43 compute-0 sudo[272825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:43 compute-0 sudo[272825]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:43 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:43.809+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:43.883+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:43 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:43 compute-0 sudo[272850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:22:43 compute-0 sudo[272850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:43 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:43 compute-0 ceph-mon[75677]: pgmap v1186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.367793746 +0000 UTC m=+0.068835644 container create 04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:22:44 compute-0 systemd[1]: Started libpod-conmon-04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605.scope.
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.338331882 +0000 UTC m=+0.039373840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:22:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.464446027 +0000 UTC m=+0.165487965 container init 04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.477841038 +0000 UTC m=+0.178882936 container start 04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.481835045 +0000 UTC m=+0.182876943 container attach 04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:22:44 compute-0 nice_panini[272929]: 167 167
Nov 24 20:22:44 compute-0 systemd[1]: libpod-04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605.scope: Deactivated successfully.
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.485827923 +0000 UTC m=+0.186869811 container died 04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd0668aaf5942dea76624e7ca6c2ebb07ae7a0e3921095dafdeed0e1abeb0066-merged.mount: Deactivated successfully.
Nov 24 20:22:44 compute-0 podman[272913]: 2025-11-24 20:22:44.53217716 +0000 UTC m=+0.233219058 container remove 04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:22:44 compute-0 systemd[1]: libpod-conmon-04cb445559454b212d7397a281561f7df1c7fd0c169af7de8829127c2fc42605.scope: Deactivated successfully.
Nov 24 20:22:44 compute-0 podman[272952]: 2025-11-24 20:22:44.80262679 +0000 UTC m=+0.077279171 container create 1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:22:44 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:44.827+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:44 compute-0 systemd[1]: Started libpod-conmon-1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff.scope.
Nov 24 20:22:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:44.856+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:44 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:44 compute-0 podman[272952]: 2025-11-24 20:22:44.770346531 +0000 UTC m=+0.044999002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:22:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a32dbe6b20a6658ba3e39ee2d328579e20adf372fbd094eece6122f34c6a86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a32dbe6b20a6658ba3e39ee2d328579e20adf372fbd094eece6122f34c6a86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a32dbe6b20a6658ba3e39ee2d328579e20adf372fbd094eece6122f34c6a86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a32dbe6b20a6658ba3e39ee2d328579e20adf372fbd094eece6122f34c6a86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:22:44 compute-0 podman[272952]: 2025-11-24 20:22:44.902018285 +0000 UTC m=+0.176670746 container init 1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jang, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:22:44 compute-0 podman[272952]: 2025-11-24 20:22:44.914161623 +0000 UTC m=+0.188814024 container start 1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:22:44 compute-0 podman[272952]: 2025-11-24 20:22:44.918338095 +0000 UTC m=+0.192990476 container attach 1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 20:22:45 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:45.781+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:45 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:45 compute-0 podman[272986]: 2025-11-24 20:22:45.863287742 +0000 UTC m=+0.079593604 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 20:22:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:45.867+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:45 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:46 compute-0 pensive_jang[272970]: {
Nov 24 20:22:46 compute-0 pensive_jang[272970]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "osd_id": 2,
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "type": "bluestore"
Nov 24 20:22:46 compute-0 pensive_jang[272970]:     },
Nov 24 20:22:46 compute-0 pensive_jang[272970]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "osd_id": 1,
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "type": "bluestore"
Nov 24 20:22:46 compute-0 pensive_jang[272970]:     },
Nov 24 20:22:46 compute-0 pensive_jang[272970]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "osd_id": 0,
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:22:46 compute-0 pensive_jang[272970]:         "type": "bluestore"
Nov 24 20:22:46 compute-0 pensive_jang[272970]:     }
Nov 24 20:22:46 compute-0 pensive_jang[272970]: }
Nov 24 20:22:46 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:46 compute-0 ceph-mon[75677]: pgmap v1187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:46 compute-0 systemd[1]: libpod-1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff.scope: Deactivated successfully.
Nov 24 20:22:46 compute-0 systemd[1]: libpod-1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff.scope: Consumed 1.138s CPU time.
Nov 24 20:22:46 compute-0 podman[272952]: 2025-11-24 20:22:46.044567781 +0000 UTC m=+1.319220182 container died 1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-92a32dbe6b20a6658ba3e39ee2d328579e20adf372fbd094eece6122f34c6a86-merged.mount: Deactivated successfully.
Nov 24 20:22:46 compute-0 podman[272952]: 2025-11-24 20:22:46.158910379 +0000 UTC m=+1.433562790 container remove 1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:22:46 compute-0 systemd[1]: libpod-conmon-1aa29c6aefbcd1c6eea891d4bde7946f8590502fd26578cb3fe67bbf968e12ff.scope: Deactivated successfully.
Nov 24 20:22:46 compute-0 sudo[272850]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:22:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:22:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0edd5977-19d7-4652-bbd6-d586cfd0da2f does not exist
Nov 24 20:22:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fe8e8938-6087-4606-bcaf-6bd59cce329a does not exist
Nov 24 20:22:46 compute-0 sudo[273035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:22:46 compute-0 sudo[273035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:46 compute-0 sudo[273035]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:46 compute-0 sudo[273060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:22:46 compute-0 sudo[273060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:22:46 compute-0 sudo[273060]: pam_unix(sudo:session): session closed for user root
Nov 24 20:22:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:46.744+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:46 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:46.910+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:46 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1881 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:47 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:22:47 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:47 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1881 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
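The SLOW_OPS block above (osd.0 and osd.1, 21 ops, oldest blocked ~1881 s) is the same health check an operator can read back in structured form. A minimal sketch, assuming a reachable cluster and the client.openstack keyring used elsewhere in this log; the status/checks/summary layout is the stock `ceph health detail --format=json` output:

import json
import subprocess

# Ask the mon for the structured version of the "Health check update" lines.
raw = subprocess.check_output(
    ["ceph", "health", "detail", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
health = json.loads(raw)
print(health["status"])  # e.g. HEALTH_WARN while SLOW_OPS is active
for name, check in health.get("checks", {}).items():
    # For SLOW_OPS this repeats the "21 slow ops, oldest one blocked..." text.
    print(name, check["summary"]["message"])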
Nov 24 20:22:47 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 24 20:22:47 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 12.578s CPU time.
Nov 24 20:22:47 compute-0 systemd-machined[218733]: Machine qemu-2-instance-00000003 terminated.
Nov 24 20:22:47 compute-0 nova_compute[257476]: 2025-11-24 20:22:47.269 257491 INFO nova.virt.libvirt.driver [-] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Instance destroyed successfully.
Nov 24 20:22:47 compute-0 nova_compute[257476]: 2025-11-24 20:22:47.270 257491 DEBUG nova.objects.instance [None req-8face67d-50b1-4ef8-b062-add80e1a8cb2 9ea4388cccf747e2a607d61819d256a9 a7d03dd4405c44598bab35e85a4fc731 - - default default] Lazy-loading 'resources' on Instance uuid 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:22:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:47.707+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:47 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:47.918+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:47 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:48 compute-0 ceph-mon[75677]: pgmap v1188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 24 20:22:48 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:48.671+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:48 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:48.896+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:48 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:49 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Nov 24 20:22:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:49.654+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:49 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:49 compute-0 podman[273106]: 2025-11-24 20:22:49.887710591 +0000 UTC m=+0.111975665 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
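The podman health_status event above is emitted by the ovn_controller healthcheck defined in its config_data. The same verdict can be read back from the container; a sketch, noting that the inspect field path (.State.Health vs .State.Healthcheck) varies across podman releases:

import subprocess

# Reads the last healthcheck result for the container named in the event.
status = subprocess.check_output(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}",
     "ovn_controller"],
    text=True).strip()
print(status)  # "healthy" matches health_status=healthy above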
Nov 24 20:22:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:49.935+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:49 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:50 compute-0 ceph-mon[75677]: pgmap v1189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Nov 24 20:22:50 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:50.664+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:50 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:50.979+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:50 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:51 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 5.3 KiB/s wr, 8 op/s
Nov 24 20:22:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:51.675+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:51 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1887 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:52.019+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:52 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:52 compute-0 ceph-mon[75677]: pgmap v1190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 5.3 KiB/s wr, 8 op/s
Nov 24 20:22:52 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:52 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1887 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:52.651+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:52 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:53.023+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:53 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:53 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:22:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:53.677+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:53 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:54.028+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:54 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:54 compute-0 ceph-mon[75677]: pgmap v1191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:22:54 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:22:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:22:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:22:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:22:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:22:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:22:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:54.696+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:54 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:54.983+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:54 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:55 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:22:55 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:55.735+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:56.019+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:56 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.105845) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015776105908, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2339, "num_deletes": 251, "total_data_size": 2966490, "memory_usage": 3010784, "flush_reason": "Manual Compaction"}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 24 20:22:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:56 compute-0 ceph-mon[75677]: pgmap v1192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:22:56 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015776124786, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2886817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30905, "largest_seqno": 33243, "table_properties": {"data_size": 2876729, "index_size": 5878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 27327, "raw_average_key_size": 22, "raw_value_size": 2853881, "raw_average_value_size": 2308, "num_data_blocks": 259, "num_entries": 1236, "num_filter_entries": 1236, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015615, "oldest_key_time": 1764015615, "file_creation_time": 1764015776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19106 microseconds, and 11702 cpu microseconds.
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.124951) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2886817 bytes OK
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.125034) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.126705) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.126727) EVENT_LOG_v1 {"time_micros": 1764015776126719, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.126750) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2955950, prev total WAL file size 2955950, number of live WAL files 2.
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.128667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2819KB)], [65(8770KB)]
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015776128713, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11868293, "oldest_snapshot_seqno": -1}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 8819 keys, 10418884 bytes, temperature: kUnknown
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015776198151, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10418884, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10364868, "index_size": 30864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22085, "raw_key_size": 234569, "raw_average_key_size": 26, "raw_value_size": 10208336, "raw_average_value_size": 1157, "num_data_blocks": 1213, "num_entries": 8819, "num_filter_entries": 8819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.198647) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10418884 bytes
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.200118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 149.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 9337, records dropped: 518 output_compression: NoCompression
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.200150) EVENT_LOG_v1 {"time_micros": 1764015776200135, "job": 36, "event": "compaction_finished", "compaction_time_micros": 69605, "compaction_time_cpu_micros": 49813, "output_level": 6, "num_output_files": 1, "total_output_size": 10418884, "num_input_records": 9337, "num_output_records": 8819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015776201473, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015776204999, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.128537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.205130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.205136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.205137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.205139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:22:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:22:56.205141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
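Every rocksdb EVENT_LOG_v1 line in the mon output above embeds a JSON document after the marker, so the flush/compaction burst can be tabulated mechanically. A sketch, assuming the journal has been exported to a hypothetical mon.log file:

import json

with open("mon.log") as fh:
    for line in fh:
        # Only the substring after the marker is JSON; lines carrying an
        # "(Original Log Time ...)" prefix partition the same way.
        _, marker, payload = line.partition("EVENT_LOG_v1 ")
        if not marker:
            continue
        ev = json.loads(payload)
        if ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: {mb:.1f} MB out in {secs:.3f} s")

Against JOB 36 above this prints roughly 10.4 MB out in 0.070 s, consistent with the 149.7 MB/s write figure in the compaction summary line.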
Nov 24 20:22:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:56.685+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:56 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1892 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:22:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:57.010+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:57 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:57 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:57 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1892 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:22:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:22:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:57.689+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:57 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:58.005+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:58 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:58 compute-0 ceph-mon[75677]: pgmap v1193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:22:58 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.223 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.223 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.246 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.347 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.348 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.360 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.361 257491 INFO nova.compute.claims [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.457 257491 DEBUG oslo_concurrency.processutils [None req-cdae78b1-c5ea-4cb2-8889-59644365b7ab a07ebbdf608d48dbab8b86dfeb5ee9ef b01685d0fe8d4dd387ce9a8fa26ccedf - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.497 257491 DEBUG oslo_concurrency.processutils [None req-cdae78b1-c5ea-4cb2-8889-59644365b7ab a07ebbdf608d48dbab8b86dfeb5ee9ef b01685d0fe8d4dd387ce9a8fa26ccedf - - default default] CMD "env LANG=C uptime" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:58 compute-0 nova_compute[257476]: 2025-11-24 20:22:58.604 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:58.654+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:58 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:22:58.974+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:58 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:22:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:22:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986663228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.072 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
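nova sizes the RBD pool by shelling out to the same `ceph df --format=json` call logged above. A sketch of reading that output directly; the key names (pools[].stats.max_avail, bytes_used) are the standard `ceph df` JSON fields:

import json
import subprocess

raw = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(raw)
for pool in df["pools"]:
    st = pool["stats"]
    # max_avail is the free-space figure a client like nova cares about.
    print(pool["name"], st["bytes_used"], st["max_avail"])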
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.082 257491 DEBUG nova.compute.provider_tree [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.111 257491 DEBUG nova.scheduler.client.report [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:22:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:22:59 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:59 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2986663228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.144 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
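The inventory dict logged at 20:22:59.111 is enough to recompute what placement treats as schedulable: capacity per resource class is (total - reserved) * allocation_ratio. A worked sketch using the exact values from that line:

# Values copied from the provider inventory in the log above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2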
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.145 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.335 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.336 257491 DEBUG nova.network.neutron [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.356 257491 INFO nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.372 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:22:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.472 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.474 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.474 257491 INFO nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Creating image(s)
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.505 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.537 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.570 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.574 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:22:59.628+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:59 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:22:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.657 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.658 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.659 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.659 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.692 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.697 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.736 257491 DEBUG nova.network.neutron [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 20:22:59 compute-0 nova_compute[257476]: 2025-11-24 20:22:59.737 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 20:22:59 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.001 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:00 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:00.019+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.095 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] resizing rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
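[annotation] rbd_utils then resizes the imported image to the flavor's root disk (m1.nano has root_gb=1, hence 1073741824 bytes). A rough equivalent with the python-rados/python-rbd bindings; this is a sketch under the assumption those bindings and the client.openstack keyring are available, not the rbd_utils code itself:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            # Image name taken from the log line above; size is in bytes.
            with rbd.Image(ioctx, "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk") as image:
                image.resize(1024 ** 3)  # 1073741824
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()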
Nov 24 20:23:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:00 compute-0 ceph-mon[75677]: pgmap v1194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:23:00 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.264 257491 DEBUG nova.objects.instance [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lazy-loading 'migration_context' on Instance uuid 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.280 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.280 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Ensure instance console log exists: /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.281 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.281 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.282 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.284 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '7b556eea-44a0-401c-a3e5-213a835e1fc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.288 257491 WARNING nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.293 257491 DEBUG nova.virt.libvirt.host [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.294 257491 DEBUG nova.virt.libvirt.host [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.297 257491 DEBUG nova.virt.libvirt.host [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.298 257491 DEBUG nova.virt.libvirt.host [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
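[annotation] The probe above first looks for a cgroups-v1 CPU controller, finds none, then finds one under cgroups v2. On a unified (v2) host the check reduces to reading the root controller list; a minimal standalone version (the path is the standard v2 mount point, the helper itself is illustrative):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a cgroups-v2 host the root 'cgroup.controllers' file lists the
        # delegatable controllers; 'cpu' must be among them for libvirt to
        # apply CPU shares/quota to guests.
        controllers_file = Path(root) / "cgroup.controllers"
        if not controllers_file.exists():
            return False  # no unified cgroups-v2 hierarchy mounted here
        return "cpu" in controllers_file.read_text().split()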
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.298 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.298 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T20:21:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='67120476-40a0-42ea-948d-218bf9a62474',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.299 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.299 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.300 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.300 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.300 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.301 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.301 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.301 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.302 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.302 257491 DEBUG nova.virt.hardware [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
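[annotation] With no flavor or image constraints (limits and preferences all 0:0:0), the topology search degenerates: for 1 vCPU the only factorization within the 65536 per-dimension caps is sockets=1, cores=1, threads=1, which is exactly what the log chooses. A toy enumeration that reproduces the result (illustrative only, not nova.virt.hardware):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product equals
        # the vCPU count, within the per-dimension caps from the log.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"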
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.305 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:00.648+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:00 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:23:00 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/908088194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.780 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
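[annotation] The `ceph mon dump --format=json` round trip is how the driver discovers the monitor endpoints that later appear in the guest XML's <host> elements (192.168.122.100:6789 below). Parsing the same output by hand might look like this; the field layout ('mons' entries with an 'addr' of the form "ip:port/nonce") follows the mon dump JSON schema, but treat the key paths as an assumption to verify against your Ceph release:

    import json
    import subprocess

    def get_mon_addrs(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        monmap = json.loads(out)
        addrs = []
        for mon in monmap["mons"]:
            hostport = mon["addr"].split("/")[0]   # strip the /nonce suffix
            host, _, port = hostport.rpartition(":")
            addrs.append((host, port))
        return addrs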
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.803 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:00 compute-0 nova_compute[257476]: 2025-11-24 20:23:00.807 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:01 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:01.023+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:01 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:01 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/908088194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:23:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574542488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.336 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.340 257491 DEBUG nova.objects.instance [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lazy-loading 'pci_devices' on Instance uuid 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.360 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] End _get_guest_xml xml=<domain type="kvm">
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <uuid>22c51d67-f5ae-4a75-8f61-73d6e63c4ddf</uuid>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <name>instance-00000004</name>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <memory>131072</memory>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <vcpu>1</vcpu>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <metadata>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:name>tempest-ServerDiagnosticsTest-server-1174769095</nova:name>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:creationTime>2025-11-24 20:23:00</nova:creationTime>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:flavor name="m1.nano">
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:memory>128</nova:memory>
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:disk>1</nova:disk>
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:swap>0</nova:swap>
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:vcpus>1</nova:vcpus>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       </nova:flavor>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:owner>
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:user uuid="28c096a8c1ef436dbe3ce971cab128f3">tempest-ServerDiagnosticsTest-186724727-project-member</nova:user>
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <nova:project uuid="0dcbebeba3504161ab5bfe9433a587cd">tempest-ServerDiagnosticsTest-186724727</nova:project>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       </nova:owner>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:root type="image" uuid="7b556eea-44a0-401c-a3e5-213a835e1fc5"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <nova:ports/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </nova:instance>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </metadata>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <sysinfo type="smbios">
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <system>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <entry name="manufacturer">RDO</entry>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <entry name="product">OpenStack Compute</entry>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <entry name="serial">22c51d67-f5ae-4a75-8f61-73d6e63c4ddf</entry>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <entry name="uuid">22c51d67-f5ae-4a75-8f61-73d6e63c4ddf</entry>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <entry name="family">Virtual Machine</entry>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </system>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </sysinfo>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <os>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <boot dev="hd"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <smbios mode="sysinfo"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </os>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <features>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <acpi/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <apic/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <vmcoreinfo/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </features>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <clock offset="utc">
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <timer name="hpet" present="no"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </clock>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <cpu mode="host-model" match="exact">
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <disk type="network" device="disk">
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk">
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       </source>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <target dev="vda" bus="virtio"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <disk type="network" device="cdrom">
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config">
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       </source>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:23:01 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <target dev="sda" bus="sata"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <serial type="pty">
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <log file="/var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/console.log" append="off"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </serial>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <video>
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <model type="virtio"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </video>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <input type="tablet" bus="usb"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <rng model="virtio">
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <backend model="random">/dev/urandom</backend>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <controller type="usb" index="0"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     <memballoon model="virtio">
Nov 24 20:23:01 compute-0 nova_compute[257476]:       <stats period="10"/>
Nov 24 20:23:01 compute-0 nova_compute[257476]:     </memballoon>
Nov 24 20:23:01 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:23:01 compute-0 nova_compute[257476]: </domain>
Nov 24 20:23:01 compute-0 nova_compute[257476]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
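[annotation] The domain XML above is what libvirt receives verbatim: both disks are type="network" RBD sources with cephx auth, a pty serial console logging to console.log, and the usual q35 pcie-root plus pcie-root-port controllers. To sanity-check such XML, for instance by listing each RBD source and its monitor hosts, the standard library suffices; a small illustrative parser:

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        # Yield (rbd_image, ["host:port", ...]) for every RBD-backed disk
        # in a libvirt domain definition.
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            source = disk.find("source")
            if source is None or source.get("protocol") != "rbd":
                continue
            hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                     for h in source.findall("host")]
            yield source.get("name"), hosts

    # For the XML above this yields:
    #   ('vms/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk', ['192.168.122.100:6789'])
    #   ('vms/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config', ['192.168.122.100:6789'])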
Nov 24 20:23:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 196 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 17 op/s
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.437 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.438 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.439 257491 INFO nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Using config drive
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.474 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:01.676+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:01 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.680 257491 INFO nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Creating config drive at /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/disk.config
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.688 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1c89wbxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.835 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1c89wbxe" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
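[annotation] Config-drive creation is another subprocess round trip: mkisofs packs a staging directory (the tmpdir /tmp/tmp1c89wbxe, populated with the metadata files) into an ISO9660 image labelled config-2 so the guest can find it by volume label. An equivalent standalone invocation; the argument list mirrors the logged command, while the function and staging-dir names are illustrative:

    import subprocess

    def make_config_drive(iso_path, staging_dir, publisher="OpenStack Compute"):
        # -J (Joliet) and -r (Rock Ridge) for long/mixed-case names,
        # volume label "config-2" so cloud-init detects the drive by label.
        cmd = ["/usr/bin/mkisofs", "-o", iso_path,
               "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
               "-publisher", publisher, "-quiet", "-J", "-r",
               "-V", "config-2", staging_dir]
        subprocess.run(cmd, check=True)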
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.876 257491 DEBUG nova.storage.rbd_utils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] rbd image 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:01 compute-0 nova_compute[257476]: 2025-11-24 20:23:01.881 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/disk.config 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1897 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
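[annotation] Throughout this window both OSDs keep reporting SLOW_OPS (the oldest op blocked for ~1897 s), which is also why the pgmap shows two PGs active+clean+laggy. The same condition can be checked from a script via structured health output; the key layout below ("checks" -> check code -> "summary") matches recent Ceph releases but is an assumption to verify against your version:

    import json
    import subprocess

    def slow_ops_summary(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        out = subprocess.run(
            ["ceph", "health", "detail", "--format=json",
             "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        health = json.loads(out)
        check = health.get("checks", {}).get("SLOW_OPS")
        # e.g. "21 slow ops, oldest one blocked for 1897 sec, ..."
        return None if check is None else check["summary"]["message"]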
Nov 24 20:23:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:02 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:02.057+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.070 257491 DEBUG oslo_concurrency.processutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/disk.config 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.072 257491 INFO nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Deleting local config drive /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf/disk.config because it was imported into RBD.
Nov 24 20:23:02 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1574542488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:02 compute-0 ceph-mon[75677]: pgmap v1195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 196 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 17 op/s
Nov 24 20:23:02 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:02 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1897 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:02 compute-0 systemd-machined[218733]: New machine qemu-3-instance-00000004.
Nov 24 20:23:02 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.268 257491 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764015767.2671244, 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.270 257491 INFO nova.compute.manager [-] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] VM Stopped (Lifecycle Event)
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.297 257491 DEBUG nova.compute.manager [None req-927cf433-a67d-4b26-a293-86f0f70627c3 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.303 257491 DEBUG nova.compute.manager [None req-927cf433-a67d-4b26-a293-86f0f70627c3 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.318 257491 INFO nova.compute.manager [None req-927cf433-a67d-4b26-a293-86f0f70627c3 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] During sync_power_state the instance has a pending task (deleting). Skip.
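[annotation] The "Stopped" lifecycle event for the older instance 5bc0dcb2 races with its deletion: the DB still records power_state 1 (RUNNING) while libvirt reports 4 (SHUTDOWN), but because task_state is 'deleting' the sync is skipped instead of triggering a spurious stop; the same guard fires again below for 22c51d67 while it is 'spawning'. The decision reduces to roughly this (constants per the values seen in the log; the function is a paraphrase, not the manager code):

    RUNNING, SHUTDOWN = 1, 4  # power_state values as logged

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # Skip the sync while any task is in flight (e.g. 'deleting',
        # 'spawning'); that task's own completion will reconcile state.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state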
Nov 24 20:23:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:02.628+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:02 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.814 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015782.8138094, 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.815 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] VM Resumed (Lifecycle Event)
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.818 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.818 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.824 257491 INFO nova.virt.libvirt.driver [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Instance spawned successfully.
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.824 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.845 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.854 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.862 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.862 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.863 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.864 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.865 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.866 257491 DEBUG nova.virt.libvirt.driver [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.874 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.874 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015782.8173606, 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.874 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] VM Started (Lifecycle Event)
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.897 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.902 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.927 257491 INFO nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Took 3.45 seconds to spawn the instance on the hypervisor.
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.927 257491 DEBUG nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.929 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:23:02 compute-0 nova_compute[257476]: 2025-11-24 20:23:02.993 257491 INFO nova.compute.manager [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Took 4.69 seconds to build instance.
Nov 24 20:23:03 compute-0 nova_compute[257476]: 2025-11-24 20:23:03.011 257491 DEBUG oslo_concurrency.lockutils [None req-1b802dd4-e4b1-4035-8649-227449efd7d9 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:03 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:03.054+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:03 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 196 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Nov 24 20:23:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:03.621+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:03 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:04 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:04.033+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.078 257491 DEBUG nova.compute.manager [None req-ee264c1d-1ff6-439c-88eb-e4019bfe22dc 97c9dfeb4ae94d5a9d3121a118bfe7ea 8168f8e8f2384921b499148c42d70303 - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.083 257491 INFO nova.compute.manager [None req-ee264c1d-1ff6-439c-88eb-e4019bfe22dc 97c9dfeb4ae94d5a9d3121a118bfe7ea 8168f8e8f2384921b499148c42d70303 - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Retrieving diagnostics
Nov 24 20:23:04 compute-0 ceph-mon[75677]: pgmap v1196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 196 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Nov 24 20:23:04 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.314 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.316 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.316 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.317 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.318 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.319 257491 INFO nova.compute.manager [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Terminating instance
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.321 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "refresh_cache-22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.322 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquired lock "refresh_cache-22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:23:04 compute-0 nova_compute[257476]: 2025-11-24 20:23:04.322 257491 DEBUG nova.network.neutron [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:23:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:04.577+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:04 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:04 compute-0 podman[273500]: 2025-11-24 20:23:04.864812507 +0000 UTC m=+0.085339579 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 20:23:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:04.991+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:04 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:05 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 208 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 20:23:05 compute-0 nova_compute[257476]: 2025-11-24 20:23:05.415 257491 DEBUG nova.network.neutron [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:23:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:05.602+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:05 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:05.942+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:05 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:05 compute-0 nova_compute[257476]: 2025-11-24 20:23:05.953 257491 DEBUG nova.network.neutron [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:23:05 compute-0 nova_compute[257476]: 2025-11-24 20:23:05.970 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Releasing lock "refresh_cache-22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:23:05 compute-0 nova_compute[257476]: 2025-11-24 20:23:05.971 257491 DEBUG nova.compute.manager [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 20:23:06 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 24 20:23:06 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 3.880s CPU time.
Nov 24 20:23:06 compute-0 systemd-machined[218733]: Machine qemu-3-instance-00000004 terminated.
Nov 24 20:23:06 compute-0 ceph-mon[75677]: pgmap v1197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 208 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 20:23:06 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.196 257491 INFO nova.virt.libvirt.driver [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Instance destroyed successfully.
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.197 257491 DEBUG nova.objects.instance [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lazy-loading 'resources' on Instance uuid 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:06.574+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:06 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.717 257491 INFO nova.virt.libvirt.driver [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Deleting instance files /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_del
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.718 257491 INFO nova.virt.libvirt.driver [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Deletion of /var/lib/nova/instances/22c51d67-f5ae-4a75-8f61-73d6e63c4ddf_del complete
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.769 257491 INFO nova.compute.manager [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.771 257491 DEBUG oslo.service.loopingcall [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.771 257491 DEBUG nova.compute.manager [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 20:23:06 compute-0 nova_compute[257476]: 2025-11-24 20:23:06.772 257491 DEBUG nova.network.neutron [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 20:23:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1902 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:06.957+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:06 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:07 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:07 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1902 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 208 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 24 20:23:07 compute-0 nova_compute[257476]: 2025-11-24 20:23:07.428 257491 DEBUG nova.network.neutron [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:23:07 compute-0 nova_compute[257476]: 2025-11-24 20:23:07.449 257491 DEBUG nova.network.neutron [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:23:07 compute-0 nova_compute[257476]: 2025-11-24 20:23:07.467 257491 INFO nova.compute.manager [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Took 0.70 seconds to deallocate network for instance.
Nov 24 20:23:07 compute-0 nova_compute[257476]: 2025-11-24 20:23:07.535 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:07 compute-0 nova_compute[257476]: 2025-11-24 20:23:07.536 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:07.576+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:07 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:07 compute-0 nova_compute[257476]: 2025-11-24 20:23:07.640 257491 DEBUG oslo_concurrency.processutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:07.982+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:07 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810703968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.097 257491 DEBUG oslo_concurrency.processutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.104 257491 DEBUG nova.compute.provider_tree [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.124 257491 DEBUG nova.scheduler.client.report [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.149 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.152 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.183 257491 INFO nova.scheduler.client.report [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Deleted allocations for instance 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf
Nov 24 20:23:08 compute-0 ceph-mon[75677]: pgmap v1198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 208 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 24 20:23:08 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/810703968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:08 compute-0 nova_compute[257476]: 2025-11-24 20:23:08.267 257491 DEBUG oslo_concurrency.lockutils [None req-88a78df3-ede5-4526-ae61-a51c2d47b186 28c096a8c1ef436dbe3ce971cab128f3 0dcbebeba3504161ab5bfe9433a587cd - - default default] Lock "22c51d67-f5ae-4a75-8f61-73d6e63c4ddf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:08.531+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:08 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:09.023+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:09 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:09 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:23:09.375 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:23:09.377 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:23:09.377 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 196 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 24 20:23:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:09.545+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:09 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:09.983+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:09 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:10 compute-0 nova_compute[257476]: 2025-11-24 20:23:10.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:10 compute-0 ceph-mon[75677]: pgmap v1199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 196 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 24 20:23:10 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:10.555+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:10 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:11.017+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:11 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:11 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 24 20:23:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:11.516+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:11 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 1907 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:11.974+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:11 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:12 compute-0 ceph-mon[75677]: pgmap v1200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 24 20:23:12 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:12 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 1907 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:12.470+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:12 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:12.948+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:12 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.146 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.147 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.174 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.200 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.200 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.200 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.201 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.201 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:13 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 435 KiB/s wr, 112 op/s
Nov 24 20:23:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:13.426+0000 7f2ca3ee7640 -1 osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:13 compute-0 ceph-osd[88624]: osd.0 136 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:13 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4221663752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.645 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.727 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.727 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.796 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.797 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.810 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.885 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.887 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.895 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:23:13 compute-0 nova_compute[257476]: 2025-11-24 20:23:13.896 257491 INFO nova.compute.claims [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:23:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:13.911+0000 7f1a67169640 -1 osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:13 compute-0 ceph-osd[89640]: osd.1 136 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.068 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.070 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4990MB free_disk=59.92609405517578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.071 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.073 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2819276587"} v 0) v1
Nov 24 20:23:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1412173354' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2819276587"}]: dispatch
Nov 24 20:23:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 24 20:23:14 compute-0 ceph-mon[75677]: pgmap v1201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 435 KiB/s wr, 112 op/s
Nov 24 20:23:14 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:14 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4221663752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:14 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1412173354' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2819276587"}]: dispatch
Nov 24 20:23:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1412173354' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2819276587"}]': finished
Nov 24 20:23:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 24 20:23:14 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 24 20:23:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:14.418+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:14 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:14 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2448168679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.603 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.612 257491 DEBUG nova.compute.provider_tree [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.633 257491 DEBUG nova.scheduler.client.report [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.683 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.684 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.689 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.736 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.736 257491 DEBUG nova.network.neutron [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.753 257491 INFO nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.771 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.783 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.784 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.784 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.784 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.785 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.849 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.851 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.851 257491 INFO nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Creating image(s)
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.887 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.925 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:14.948+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:14 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.963 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:14 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.968 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:14.999 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.060 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
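
The qemu-img probe above is wrapped in oslo.concurrency's prlimit helper so a malformed or hostile image cannot exhaust the compute host: --as and --cpu in the logged command correspond to address-space and CPU-time limits. A sketch of the equivalent direct call (not Nova's exact call site):

    from oslo_concurrency import processutils

    # --as=1073741824 --cpu=30 from the logged command line
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})
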
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.061 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.061 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.062 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
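
The acquire/release pair above is Nova serializing base-image fetches per image hash, so concurrent spawns of the same image download it only once. The same pattern expressed as a decorator (the function body here is a placeholder):

    from oslo_concurrency import lockutils

    # One lock per cached base image, keyed by the hash seen in the log.
    @lockutils.synchronized('218f8903fd6674ce56e8c19056c812cf16f46909')
    def fetch_func_sync():
        pass  # fetch / verify the cached base image exactly once
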
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.087 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.091 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.128 257491 DEBUG nova.network.neutron [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.129 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 20:23:15 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1412173354' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2819276587"}]': finished
Nov 24 20:23:15 compute-0 ceph-mon[75677]: osdmap e137: 3 total, 3 up, 3 in
Nov 24 20:23:15 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:15 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2448168679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.371 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Nov 24 20:23:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:15.433+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:15 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.450 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] resizing rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
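
After the CLI import above, the image is grown to the flavor's 1 GiB root disk. The same resize via the python-rbd bindings, reusing the client id and conf file from the logged commands (a sketch, not Nova's code path):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    with rbd.Image(ioctx, '4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk') as img:
        img.resize(1073741824)  # grow the imported base to the 1 GiB flavor root disk
    ioctx.close()
    cluster.shutdown()
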
Nov 24 20:23:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383763476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.497 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.505 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.518 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
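
Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio; for the inventory above that yields 7167 MB of RAM, 32 schedulable vCPUs, and ~52.2 GB of disk. A worked check:

    inv = {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    capacity = {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}
    print(capacity)  # MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: ~52.2
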
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.554 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.555 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.561 257491 DEBUG nova.objects.instance [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lazy-loading 'migration_context' on Instance uuid 4e9758ff-13d1-447b-9a2a-d6ae9f807143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.569 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.570 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Ensure instance console log exists: /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.570 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.570 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.570 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.572 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '7b556eea-44a0-401c-a3e5-213a835e1fc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.576 257491 WARNING nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.581 257491 DEBUG nova.virt.libvirt.host [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.582 257491 DEBUG nova.virt.libvirt.host [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.585 257491 DEBUG nova.virt.libvirt.host [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.586 257491 DEBUG nova.virt.libvirt.host [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.586 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.586 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T20:21:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='67120476-40a0-42ea-948d-218bf9a62474',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.587 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.587 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.587 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.587 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.588 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.588 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.588 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.588 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.588 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.589 257491 DEBUG nova.virt.hardware [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
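
The topology walk above reduces to enumerating (sockets, cores, threads) triples whose product matches the vCPU count, within the 65536 per-axis limits; for a 1-vCPU m1.nano guest the only candidate is 1:1:1. A simplified sketch (Nova's real logic in nova.virt.hardware also applies preference ordering):

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Enumerate divisor triples s*c*t == vcpus within the given limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield s, c, t

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]
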
Nov 24 20:23:15 compute-0 nova_compute[257476]: 2025-11-24 20:23:15.591 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:15.979+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:15 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:23:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330907603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.067 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.101 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.106 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:16 compute-0 ceph-mon[75677]: pgmap v1203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 162 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Nov 24 20:23:16 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2383763476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/330907603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:16.400+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:16 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:23:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2224509006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:23:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:23:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2224509006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:23:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:23:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4052234646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
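
The audit entries show client.openstack issuing {"prefix": "df"} and {"prefix": "mon dump"} through the ceph CLI; librados can issue the same mon commands in-process. A sketch using the logged client id:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    # Same payload as the dispatched cmd=[{"prefix": "df", "format": "json"}]
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    pools = json.loads(out)['pools']
    cluster.shutdown()
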
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.647 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.650 257491 DEBUG nova.objects.instance [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9758ff-13d1-447b-9a2a-d6ae9f807143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.663 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] End _get_guest_xml xml=<domain type="kvm">
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <uuid>4e9758ff-13d1-447b-9a2a-d6ae9f807143</uuid>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <name>instance-00000005</name>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <memory>131072</memory>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <vcpu>1</vcpu>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <metadata>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1397907160</nova:name>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:creationTime>2025-11-24 20:23:15</nova:creationTime>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:flavor name="m1.nano">
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:memory>128</nova:memory>
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:disk>1</nova:disk>
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:swap>0</nova:swap>
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:vcpus>1</nova:vcpus>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       </nova:flavor>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:owner>
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:user uuid="a063a2ef868d49f69d65f7e71b5ba3c2">tempest-LiveMigrationNegativeTest-517888706-project-member</nova:user>
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <nova:project uuid="4f2eef9d4bd64678b12c95861d4f7f9e">tempest-LiveMigrationNegativeTest-517888706</nova:project>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       </nova:owner>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:root type="image" uuid="7b556eea-44a0-401c-a3e5-213a835e1fc5"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <nova:ports/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </nova:instance>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </metadata>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <sysinfo type="smbios">
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <system>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <entry name="manufacturer">RDO</entry>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <entry name="product">OpenStack Compute</entry>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <entry name="serial">4e9758ff-13d1-447b-9a2a-d6ae9f807143</entry>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <entry name="uuid">4e9758ff-13d1-447b-9a2a-d6ae9f807143</entry>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <entry name="family">Virtual Machine</entry>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </system>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </sysinfo>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <os>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <boot dev="hd"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <smbios mode="sysinfo"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </os>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <features>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <acpi/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <apic/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <vmcoreinfo/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </features>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <clock offset="utc">
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <timer name="hpet" present="no"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </clock>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <cpu mode="host-model" match="exact">
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <disk type="network" device="disk">
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk">
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       </source>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <target dev="vda" bus="virtio"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <disk type="network" device="cdrom">
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk.config">
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       </source>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:23:16 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <target dev="sda" bus="sata"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <serial type="pty">
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <log file="/var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/console.log" append="off"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </serial>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <video>
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <model type="virtio"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </video>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <input type="tablet" bus="usb"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <rng model="virtio">
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <backend model="random">/dev/urandom</backend>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <controller type="usb" index="0"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     <memballoon model="virtio">
Nov 24 20:23:16 compute-0 nova_compute[257476]:       <stats period="10"/>
Nov 24 20:23:16 compute-0 nova_compute[257476]:     </memballoon>
Nov 24 20:23:16 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:23:16 compute-0 nova_compute[257476]: </domain>
Nov 24 20:23:16 compute-0 nova_compute[257476]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
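
The XML dumped above is then handed to libvirt; the "Started Virtual Machine qemu-4-instance-00000005" lines below are the visible result. Reduced to the bare libvirt-python calls (Nova actually drives this through its Host/Guest wrappers, and the file path here is hypothetical):

    import libvirt

    domain_xml = open('/tmp/instance-00000005.xml').read()  # the <domain> document above
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(domain_xml)   # persist the domain definition
    dom.createWithFlags(0)             # boot instance-00000005
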
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.678 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Acquiring lock "db8c22d1-e16d-49f8-b4a5-ba8e87849ea3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.679 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Lock "db8c22d1-e16d-49f8-b4a5-ba8e87849ea3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.702 257491 DEBUG nova.compute.manager [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.714 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.714 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.715 257491 INFO nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Using config drive
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.748 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.780 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.780 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.791 257491 DEBUG nova.virt.hardware [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.792 257491 INFO nova.compute.claims [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:23:16 compute-0 podman[273874]: 2025-11-24 20:23:16.865962475 +0000 UTC m=+0.094395862 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
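
The health_status=healthy event above is podman executing the container's configured test command ('/openstack/healthcheck' in the config_data); the same check can be triggered on demand:

    import subprocess

    # Re-run the configured healthcheck; exits non-zero if the test fails.
    subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'], check=True)
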
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.950 257491 DEBUG oslo_concurrency.processutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 1912 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.984 257491 INFO nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Creating config drive at /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config
Nov 24 20:23:16 compute-0 nova_compute[257476]: 2025-11-24 20:23:16.989 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4scbomrh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:16.999+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:17 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.135 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4scbomrh" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
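
The config drive is built with the exact mkisofs flag set logged above, from a temporary staging directory (/tmp/tmp4scbomrh) holding the metadata tree. As a subprocess sketch:

    import subprocess

    iso_path = '/var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config'
    tmpdir = '/tmp/tmp4scbomrh'  # staging dir from the logged command
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso_path,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', tmpdir],
        check=True)
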
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.163 257491 DEBUG nova.storage.rbd_utils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.166 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:17 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2224509006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:23:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2224509006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:23:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4052234646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:17 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 1912 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.342 257491 DEBUG oslo_concurrency.processutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config 4e9758ff-13d1-447b-9a2a-d6ae9f807143_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.344 257491 INFO nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Deleting local config drive /var/lib/nova/instances/4e9758ff-13d1-447b-9a2a-d6ae9f807143/disk.config because it was imported into RBD.
Nov 24 20:23:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 166 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Nov 24 20:23:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:17.395+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:17 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:17 compute-0 systemd-machined[218733]: New machine qemu-4-instance-00000005.
Nov 24 20:23:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879796891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:17 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000005.
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.453 257491 DEBUG oslo_concurrency.processutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.461 257491 DEBUG nova.compute.provider_tree [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.482 257491 DEBUG nova.scheduler.client.report [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.512 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.513 257491 DEBUG nova.compute.manager [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.532 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.532 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.532 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.574 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.574 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.575 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.575 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.575 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.576 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.577 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.577 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.577 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.577 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
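The run of "Running periodic task" records above is oslo.service iterating the ComputeManager's registered tasks; _reclaim_queued_deletes exits immediately because reclaim_instance_interval is unset. A sketch of how such a task is declared (class name and spacing are illustrative assumptions):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class Manager(periodic_task.PeriodicTasks):
        # Called from the service's periodic loop; bailing out when the
        # interval is <= 0 mirrors the "skipping..." record above.
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return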
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.591 257491 DEBUG nova.compute.manager [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.592 257491 DEBUG nova.network.neutron [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.611 257491 INFO nova.virt.libvirt.driver [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.638 257491 DEBUG nova.compute.manager [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.742 257491 DEBUG nova.compute.manager [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.744 257491 DEBUG nova.virt.libvirt.driver [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.744 257491 INFO nova.virt.libvirt.driver [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Creating image(s)
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.775 257491 DEBUG nova.storage.rbd_utils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] rbd image db8c22d1-e16d-49f8-b4a5-ba8e87849ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.807 257491 DEBUG nova.storage.rbd_utils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] rbd image db8c22d1-e16d-49f8-b4a5-ba8e87849ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.840 257491 DEBUG nova.storage.rbd_utils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] rbd image db8c22d1-e16d-49f8-b4a5-ba8e87849ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.846 257491 DEBUG oslo_concurrency.processutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.940 257491 DEBUG oslo_concurrency.processutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
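The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed or hostile image cannot wedge the agent. A roughly equivalent direct call (a sketch; Nova reaches this through additional layers):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        # Same caps as the logged command: --as=1073741824 --cpu=30
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30),
    )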
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.941 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.942 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.942 257491 DEBUG oslo_concurrency.lockutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
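The lock name 218f8903... doubles as the image-cache filename under /var/lib/nova/instances/_base: Nova names base-cache files after the SHA-1 of the Glance image UUID, and the near-zero hold time here indicates a cache hit (the base file already existed, so fetch_func_sync had nothing to download). Assuming that convention holds for this deployment, the name can be reproduced from the image UUID that appears later in this section:

    import hashlib

    # Image UUID taken from the _get_guest_xml record further down;
    # SHA-1 of the UUID string names the _base cache file and its lock.
    image_id = '7b556eea-44a0-401c-a3e5-213a835e1fc5'
    print(hashlib.sha1(image_id.encode()).hexdigest())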
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.987 257491 DEBUG nova.storage.rbd_utils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] rbd image db8c22d1-e16d-49f8-b4a5-ba8e87849ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:17 compute-0 nova_compute[257476]: 2025-11-24 20:23:17.993 257491 DEBUG oslo_concurrency.processutils [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 db8c22d1-e16d-49f8-b4a5-ba8e87849ea3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
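Because the image is qcow2 (see the image_meta record near the end of this section), the RBD backend cannot COW-clone it straight from the Glance pool, so it imports the flattened base file into the vms pool instead. The logged command, reproduced verbatim as a subprocess call:

    import subprocess

    # Import the local flat base image as the instance's root disk in Ceph.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909',
         'db8c22d1-e16d-49f8-b4a5-ba8e87849ea3_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True,
    )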
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.025 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015798.0248399, 4e9758ff-13d1-447b-9a2a-d6ae9f807143 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.026 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] VM Resumed (Lifecycle Event)
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.031 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.032 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.038 257491 INFO nova.virt.libvirt.driver [-] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Instance spawned successfully.
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.039 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 20:23:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:18.042+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:18 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.049 257491 DEBUG nova.network.neutron [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.049 257491 DEBUG nova.compute.manager [None req-bc47bc98-816e-475c-8f06-44944d97ef18 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.071 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.084 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
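The sync compares the database's power_state of 0 against the hypervisor's 1 and, since a spawn task is still pending, takes no action (the "Skip" record below). The relevant nova.compute.power_state constants, for reading these records:

    # nova.compute.power_state values as used in the record above
    NOSTATE   = 0  # DB side: instance row not yet updated (still building)
    RUNNING   = 1  # hypervisor side: libvirt domain is up
    PAUSED    = 3
    SHUTDOWN  = 4
    CRASHED   = 6
    SUSPENDED = 7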
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.089 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.089 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.089 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.090 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.090 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.090 257491 DEBUG nova.virt.libvirt.driver [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.120 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.121 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015798.0300503, 4e9758ff-13d1-447b-9a2a-d6ae9f807143 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.121 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] VM Started (Lifecycle Event)
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.143 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.147 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.153 257491 INFO nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Took 3.30 seconds to spawn the instance on the hypervisor.
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.153 257491 DEBUG nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.162 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.210 257491 INFO nova.compute.manager [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Took 4.35 seconds to build instance.
Nov 24 20:23:18 compute-0 nova_compute[257476]: 2025-11-24 20:23:18.233 257491 DEBUG oslo_concurrency.lockutils [None req-ea42fc3b-17ea-45ae-80dd-5155e2108e2d a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:18 compute-0 ceph-mon[75677]: pgmap v1204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 166 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Nov 24 20:23:18 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/879796891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:18.359+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:18 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:19 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:19.027+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:19 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:19.374+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:19 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 141 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Nov 24 20:23:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:19.991+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:19 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:20 compute-0 ceph-mon[75677]: pgmap v1205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 141 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Nov 24 20:23:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:20.341+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:20 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:20 compute-0 podman[274109]: 2025-11-24 20:23:20.895495052 +0000 UTC m=+0.122458498 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller)
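This record is podman's scheduled healthcheck for the ovn_controller container reporting health_status=healthy; per the embedded config_data, the check simply runs the /openstack/healthcheck script mounted into the container. The same check can be triggered on demand; a sketch:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if result.returncode == 0 else 'unhealthy')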
Nov 24 20:23:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:20.988+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:20 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:21 compute-0 nova_compute[257476]: 2025-11-24 20:23:21.194 257491 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764015786.1932592, 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:21 compute-0 nova_compute[257476]: 2025-11-24 20:23:21.194 257491 INFO nova.compute.manager [-] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] VM Stopped (Lifecycle Event)
Nov 24 20:23:21 compute-0 nova_compute[257476]: 2025-11-24 20:23:21.210 257491 DEBUG nova.compute.manager [None req-a6a00b81-7e94-49f5-b50a-327fb358a5b1 - - - - - -] [instance: 22c51d67-f5ae-4a75-8f61-73d6e63c4ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:21 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:21 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:21.363+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:21 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 24 20:23:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1917 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
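This raises the cluster-wide SLOW_OPS health check: 21 ops (the 2 on osd.0 plus the 19 on osd.1 reported above) with the oldest blocked for 1917 s, roughly 32 minutes. The blocked ops can be examined through each OSD's admin socket; a sketch, run on the host where the OSDs live (compute-0 here):

    import json
    import subprocess

    for osd in ('osd.0', 'osd.1'):
        # Admin-socket query listing the currently blocked/in-flight ops.
        out = subprocess.check_output(['ceph', 'daemon', osd, 'dump_ops_in_flight'])
        print(osd, json.loads(out).get('num_ops'), 'ops in flight')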
Nov 24 20:23:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:22.036+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:22 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:22 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:22 compute-0 ceph-mon[75677]: pgmap v1206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 24 20:23:22 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1917 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:22.373+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:22 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:22.998+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:22 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:23.360+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:23 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:23 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.649 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "02cdfa83-3da3-4b21-b297-5885c45a3350" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.650 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "02cdfa83-3da3-4b21-b297-5885c45a3350" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.690 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.898 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.899 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.905 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:23:23 compute-0 nova_compute[257476]: 2025-11-24 20:23:23.905 257491 INFO nova.compute.claims [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:23:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:24.044+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:24 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.183 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:24.373+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:24 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:24 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:24 compute-0 ceph-mon[75677]: pgmap v1207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Nov 24 20:23:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:23:24
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control']
Nov 24 20:23:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:23:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181483751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.673 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
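The ceph df round-trip (0.490 s here) is how the RBD image backend sizes the DISK_GB inventory reported to Placement, which is where the 59 GiB total in the inventory records comes from. A sketch of pulling the same figures from the JSON output (top-level key names per ceph's df schema; Nova itself reads them via rbd_utils):

    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    stats = df['stats']
    print('total GiB:', stats['total_bytes'] / 1024 ** 3)
    print('avail GiB:', stats['total_avail_bytes'] / 1024 ** 3)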
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.682 257491 DEBUG nova.compute.provider_tree [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.700 257491 DEBUG nova.scheduler.client.report [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.797 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.798 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.869 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.869 257491 DEBUG nova.network.neutron [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.892 257491 INFO nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:23:24 compute-0 nova_compute[257476]: 2025-11-24 20:23:24.949 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:23:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:25.045+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:25 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.053 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.055 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.055 257491 INFO nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Creating image(s)
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.089 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.127 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.161 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.165 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.245 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.246 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.247 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.248 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.281 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.286 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 02cdfa83-3da3-4b21-b297-5885c45a3350_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 138 op/s
Nov 24 20:23:25 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:25 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4181483751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:25.411+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:25 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.578 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 02cdfa83-3da3-4b21-b297-5885c45a3350_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.706 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] resizing rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
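The freshly imported image is then grown to the flavor's root disk size; the resize target is exactly 1 GiB:

    # Flavor root_gb of 1 expressed in bytes, matching the resize target above.
    print(1 * 1024 ** 3)  # 1073741824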
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.811 257491 DEBUG nova.network.neutron [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.812 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.817 257491 DEBUG nova.objects.instance [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lazy-loading 'migration_context' on Instance uuid 02cdfa83-3da3-4b21-b297-5885c45a3350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.837 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.837 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Ensure instance console log exists: /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.838 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.838 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.838 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.840 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '7b556eea-44a0-401c-a3e5-213a835e1fc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.844 257491 WARNING nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.849 257491 DEBUG nova.virt.libvirt.host [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.849 257491 DEBUG nova.virt.libvirt.host [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.853 257491 DEBUG nova.virt.libvirt.host [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.854 257491 DEBUG nova.virt.libvirt.host [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.854 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.854 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T20:21:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='67120476-40a0-42ea-948d-218bf9a62474',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.855 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.855 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.855 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.855 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.855 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.856 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.856 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.856 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.856 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.856 257491 DEBUG nova.virt.hardware [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 20:23:25 compute-0 nova_compute[257476]: 2025-11-24 20:23:25.859 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:26.033+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:26 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:23:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578944639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.332 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.365 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:26.370+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:26 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.371 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:26 compute-0 ceph-mon[75677]: pgmap v1208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 138 op/s
Nov 24 20:23:26 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2578944639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:23:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689119963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.827 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.830 257491 DEBUG nova.objects.instance [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 02cdfa83-3da3-4b21-b297-5885c45a3350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.850 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] End _get_guest_xml xml=<domain type="kvm">
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <uuid>02cdfa83-3da3-4b21-b297-5885c45a3350</uuid>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <name>instance-00000007</name>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <memory>131072</memory>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <vcpu>1</vcpu>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <metadata>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:name>tempest-LiveMigrationNegativeTest-server-25918466</nova:name>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:creationTime>2025-11-24 20:23:25</nova:creationTime>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:flavor name="m1.nano">
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:memory>128</nova:memory>
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:disk>1</nova:disk>
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:swap>0</nova:swap>
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:vcpus>1</nova:vcpus>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       </nova:flavor>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:owner>
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:user uuid="a063a2ef868d49f69d65f7e71b5ba3c2">tempest-LiveMigrationNegativeTest-517888706-project-member</nova:user>
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <nova:project uuid="4f2eef9d4bd64678b12c95861d4f7f9e">tempest-LiveMigrationNegativeTest-517888706</nova:project>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       </nova:owner>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:root type="image" uuid="7b556eea-44a0-401c-a3e5-213a835e1fc5"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <nova:ports/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </nova:instance>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </metadata>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <sysinfo type="smbios">
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <system>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <entry name="manufacturer">RDO</entry>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <entry name="product">OpenStack Compute</entry>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <entry name="serial">02cdfa83-3da3-4b21-b297-5885c45a3350</entry>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <entry name="uuid">02cdfa83-3da3-4b21-b297-5885c45a3350</entry>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <entry name="family">Virtual Machine</entry>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </system>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </sysinfo>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <os>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <boot dev="hd"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <smbios mode="sysinfo"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </os>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <features>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <acpi/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <apic/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <vmcoreinfo/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </features>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <clock offset="utc">
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <timer name="hpet" present="no"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </clock>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <cpu mode="host-model" match="exact">
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <disk type="network" device="disk">
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/02cdfa83-3da3-4b21-b297-5885c45a3350_disk">
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       </source>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <target dev="vda" bus="virtio"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <disk type="network" device="cdrom">
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/02cdfa83-3da3-4b21-b297-5885c45a3350_disk.config">
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       </source>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:23:26 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <target dev="sda" bus="sata"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <serial type="pty">
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <log file="/var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/console.log" append="off"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </serial>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <video>
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <model type="virtio"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </video>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <input type="tablet" bus="usb"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <rng model="virtio">
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <backend model="random">/dev/urandom</backend>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <controller type="usb" index="0"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     <memballoon model="virtio">
Nov 24 20:23:26 compute-0 nova_compute[257476]:       <stats period="10"/>
Nov 24 20:23:26 compute-0 nova_compute[257476]:     </memballoon>
Nov 24 20:23:26 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:23:26 compute-0 nova_compute[257476]: </domain>
Nov 24 20:23:26 compute-0 nova_compute[257476]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.919 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.919 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.921 257491 INFO nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Using config drive
Nov 24 20:23:26 compute-0 nova_compute[257476]: 2025-11-24 20:23:26.954 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1922 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:27.071+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:27 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 200 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 131 op/s
Nov 24 20:23:27 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:27 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2689119963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:23:27 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1922 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.410 257491 INFO nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Creating config drive at /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/disk.config
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.415 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg8tauoy4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:27.416+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:27 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.545 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg8tauoy4" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.582 257491 DEBUG nova.storage.rbd_utils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] rbd image 02cdfa83-3da3-4b21-b297-5885c45a3350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.588 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/disk.config 02cdfa83-3da3-4b21-b297-5885c45a3350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.775 257491 DEBUG oslo_concurrency.processutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/disk.config 02cdfa83-3da3-4b21-b297-5885c45a3350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:23:27 compute-0 nova_compute[257476]: 2025-11-24 20:23:27.776 257491 INFO nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Deleting local config drive /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350/disk.config because it was imported into RBD.
Nov 24 20:23:27 compute-0 systemd-machined[218733]: New machine qemu-5-instance-00000007.
Nov 24 20:23:27 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000007.
Nov 24 20:23:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:28.121+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:28 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.214 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015808.2143118, 02cdfa83-3da3-4b21-b297-5885c45a3350 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.215 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] VM Resumed (Lifecycle Event)
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.223 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.224 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.228 257491 INFO nova.virt.libvirt.driver [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Instance spawned successfully.
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.229 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.245 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.257 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.265 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.265 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.266 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.267 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.268 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.269 257491 DEBUG nova.virt.libvirt.driver [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.296 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.297 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015808.2218242, 02cdfa83-3da3-4b21-b297-5885c45a3350 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.298 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] VM Started (Lifecycle Event)
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.322 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.327 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.333 257491 INFO nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Took 3.28 seconds to spawn the instance on the hypervisor.
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.333 257491 DEBUG nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.346 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.393 257491 INFO nova.compute.manager [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Took 4.53 seconds to build instance.
Nov 24 20:23:28 compute-0 ceph-mon[75677]: pgmap v1209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 200 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 131 op/s
Nov 24 20:23:28 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:28 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:28.424+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.569 257491 DEBUG oslo_concurrency.lockutils [None req-a41a000a-123c-412f-b6bb-2c4becdf6fa9 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "02cdfa83-3da3-4b21-b297-5885c45a3350" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:28 compute-0 nova_compute[257476]: 2025-11-24 20:23:28.969 257491 DEBUG nova.objects.instance [None req-7a69cb67-ce1a-40fa-846e-dad77822b34f 9e77371bc7a54e8d924a894b33af7a7a 72848b7da64644d48573e07b443465d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02cdfa83-3da3-4b21-b297-5885c45a3350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:29 compute-0 nova_compute[257476]: 2025-11-24 20:23:29.015 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764015809.0148706, 02cdfa83-3da3-4b21-b297-5885c45a3350 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:29 compute-0 nova_compute[257476]: 2025-11-24 20:23:29.015 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] VM Paused (Lifecycle Event)
Nov 24 20:23:29 compute-0 nova_compute[257476]: 2025-11-24 20:23:29.042 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:29 compute-0 nova_compute[257476]: 2025-11-24 20:23:29.052 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:23:29 compute-0 nova_compute[257476]: 2025-11-24 20:23:29.082 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 24 20:23:29 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 20:23:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:29.130+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:29 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:29 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 24 20:23:29 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000007.scope: Consumed 1.174s CPU time.
Nov 24 20:23:29 compute-0 systemd-machined[218733]: Machine qemu-5-instance-00000007 terminated.
Nov 24 20:23:29 compute-0 nova_compute[257476]: 2025-11-24 20:23:29.306 257491 DEBUG nova.compute.manager [None req-7a69cb67-ce1a-40fa-846e-dad77822b34f 9e77371bc7a54e8d924a894b33af7a7a 72848b7da64644d48573e07b443465d2 - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:23:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 222 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 162 op/s
Nov 24 20:23:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:29.406+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:29 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:29 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:30.160+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:30 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:30 compute-0 ceph-mon[75677]: pgmap v1210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 222 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 162 op/s
Nov 24 20:23:30 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:30.450+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:30 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:31.134+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:31 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 248 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 162 op/s
Nov 24 20:23:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:31 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:31.453+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:31 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:32.173+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:32 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:32 compute-0 ceph-mon[75677]: pgmap v1211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 248 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 162 op/s
Nov 24 20:23:32 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:32 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:32.461+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:32 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
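[annotation] The SLOW_OPS block above repeats roughly once per second: each OSD re-reports its oldest blocked op through get_health_metrics, and the mon rolls them up into a single health check (21 slow ops across osd.0 and osd.1, oldest blocked ~1932 s). A minimal sketch for polling that check from a script, reusing the client.openstack credentials seen elsewhere in this log; "ceph health detail" with JSON output is standard CLI:

    #!/usr/bin/env python3
    # Sketch: poll cluster health and summarize SLOW_OPS, assuming the
    # client.openstack keyring and /etc/ceph/ceph.conf used in this log.
    import json
    import subprocess

    def slow_ops_summary():
        out = subprocess.check_output(
            ["ceph", "health", "detail", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        health = json.loads(out)
        check = health.get("checks", {}).get("SLOW_OPS")
        if check is None:
            return "no SLOW_OPS check active"
        return check["summary"]["message"]

    if __name__ == "__main__":
        print(slow_ops_summary())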
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.659 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "02cdfa83-3da3-4b21-b297-5885c45a3350" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.660 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "02cdfa83-3da3-4b21-b297-5885c45a3350" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.661 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "02cdfa83-3da3-4b21-b297-5885c45a3350-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.661 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "02cdfa83-3da3-4b21-b297-5885c45a3350-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.662 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "02cdfa83-3da3-4b21-b297-5885c45a3350-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
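[annotation] The four lockutils lines above are oslo.concurrency's acquire/release pattern around instance teardown: an outer lock keyed on the instance UUID for do_terminate_instance, and a short-lived inner lock on "<uuid>-events" while pending external events are cleared. A minimal sketch of the same pattern (illustrative only, not nova's actual code):

    # Sketch of the lock nesting logged above, via oslo.concurrency.
    from oslo_concurrency import lockutils

    instance_uuid = "02cdfa83-3da3-4b21-b297-5885c45a3350"

    # In-process lock scoped to the instance UUID, as around
    # terminate_instance's do_terminate_instance.
    with lockutils.lock(instance_uuid):
        # Nested lock guarding the instance's pending-event table.
        with lockutils.lock(instance_uuid + "-events"):
            pass  # clear pending events, then proceed with teardown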
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.663 257491 INFO nova.compute.manager [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Terminating instance
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.665 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "refresh_cache-02cdfa83-3da3-4b21-b297-5885c45a3350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.665 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquired lock "refresh_cache-02cdfa83-3da3-4b21-b297-5885c45a3350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.666 257491 DEBUG nova.network.neutron [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:23:32 compute-0 nova_compute[257476]: 2025-11-24 20:23:32.873 257491 DEBUG nova.network.neutron [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:23:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:33.174+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:33 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 248 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 615 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.407 257491 DEBUG nova.network.neutron [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:23:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:33 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:33 compute-0 ceph-mon[75677]: pgmap v1212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 248 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 615 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.433 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Releasing lock "refresh_cache-02cdfa83-3da3-4b21-b297-5885c45a3350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.434 257491 DEBUG nova.compute.manager [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.442 257491 INFO nova.virt.libvirt.driver [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Instance destroyed successfully.
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.443 257491 DEBUG nova.objects.instance [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lazy-loading 'resources' on Instance uuid 02cdfa83-3da3-4b21-b297-5885c45a3350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:33.445+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:33 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.894 257491 INFO nova.virt.libvirt.driver [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Deleting instance files /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350_del
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.895 257491 INFO nova.virt.libvirt.driver [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Deletion of /var/lib/nova/instances/02cdfa83-3da3-4b21-b297-5885c45a3350_del complete
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.969 257491 INFO nova.compute.manager [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Took 0.54 seconds to destroy the instance on the hypervisor.
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.970 257491 DEBUG oslo.service.loopingcall [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.970 257491 DEBUG nova.compute.manager [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 20:23:33 compute-0 nova_compute[257476]: 2025-11-24 20:23:33.971 257491 DEBUG nova.network.neutron [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 20:23:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:34.126+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:34 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.147 257491 DEBUG nova.network.neutron [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.163 257491 DEBUG nova.network.neutron [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.182 257491 INFO nova.compute.manager [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Took 0.21 seconds to deallocate network for instance.
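[annotation] The teardown above is the full sequence: destroy the libvirt domain (0.54 s), delete /var/lib/nova/instances/<uuid>_del, then release the Neutron ports under a retry wrapper driven through oslo.service's loopingcall, as the "Waiting for function ... _deallocate_network_with_retries" line shows. A hedged sketch of such a wrapper (retry counts and delays are illustrative):

    # Sketch: retry-wrapped network deallocation in the style implied by
    # the oslo.service.loopingcall message above.
    from oslo_service import loopingcall

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=1,
                                max_sleep_time=10, exceptions=(Exception,))
    def deallocate_network_with_retries():
        # would call neutron's deallocate_for_instance() here
        pass

    deallocate_network_with_retries()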
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.246 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.247 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.402 257491 DEBUG oslo_concurrency.processutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:23:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:34.450+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:34 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:34 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016676663125936055 of space, bias 1.0, pg target 0.5002998937780816 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:23:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
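[annotation] The pg_autoscaler pass above is reproducible from the logged numbers: each pool's raw "pg target" is its share of raw capacity times its bias times a PG budget of 300 (e.g. 0.0016676663 * 1.0 * 300 = 0.50030 for 'vms', and the 4.0 bias on cephfs.cephfs.meta gives 0.00061047), after which the module quantizes to a power of two and applies pg_num_min, profile rules, and a change threshold (3x by default), which is why every pool here keeps its current pg_num. A sketch of the raw calculation:

    # Sketch of the pg_autoscaler arithmetic visible above; the budget of
    # 300 is implied by the logged ratios (pg target / capacity share).
    def pg_target(capacity_ratio, bias, budget=300):
        raw = capacity_ratio * bias * budget
        p = 1
        while p * 2 <= max(raw, 1):   # power-of-two quantization, floor 1
            p *= 2
        return raw, p

    # 'vms' pool: 0.0016676663125936055 of space, bias 1.0
    raw, quantized = pg_target(0.0016676663125936055, 1.0)
    print(raw, quantized)   # ~0.50030 and 1; the log still reports 32
                            # because the current pg_num is retained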
Nov 24 20:23:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:23:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114809728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.899 257491 DEBUG oslo_concurrency.processutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
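[annotation] The half-second "ceph df --format=json" round trip above is nova's RBD-backed storage probe during resource-tracker updates; the DISK_GB total of 59 in the inventory line below matches the 60 GiB cluster seen in the pgmap lines. A sketch of the same probe through oslo.concurrency:

    # Sketch: sample cluster capacity the way the logged command does.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"], "of", stats["total_bytes"], "free")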
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.908 257491 DEBUG nova.compute.provider_tree [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.938 257491 DEBUG nova.scheduler.client.report [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.959 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:34 compute-0 nova_compute[257476]: 2025-11-24 20:23:34.989 257491 INFO nova.scheduler.client.report [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Deleted allocations for instance 02cdfa83-3da3-4b21-b297-5885c45a3350
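[annotation] "Deleted allocations for instance" is nova's report client issuing a DELETE against the placement service for that consumer UUID. A hedged sketch of the equivalent raw call (endpoint and token are placeholders, not values from this log; nova itself goes through keystoneauth sessions):

    # Sketch: drop a consumer's allocations in placement.
    import requests

    PLACEMENT = "http://placement.example:8778"   # placeholder endpoint
    TOKEN = "..."                                  # placeholder keystone token

    uuid = "02cdfa83-3da3-4b21-b297-5885c45a3350"
    r = requests.delete(f"{PLACEMENT}/allocations/{uuid}",
                        headers={"X-Auth-Token": TOKEN})
    r.raise_for_status()   # 204 No Content on success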
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.058 257491 DEBUG oslo_concurrency.lockutils [None req-6cfedbce-b47e-4de9-b0f9-f17221811ca3 a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "02cdfa83-3da3-4b21-b297-5885c45a3350" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:35.144+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:35 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 231 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 24 20:23:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:35.426+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:35 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:35 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:35 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2114809728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:23:35 compute-0 ceph-mon[75677]: pgmap v1213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 231 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.836 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.837 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.837 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.838 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.839 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lock "4e9758ff-13d1-447b-9a2a-d6ae9f807143-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.840 257491 INFO nova.compute.manager [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Terminating instance
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.842 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquiring lock "refresh_cache-4e9758ff-13d1-447b-9a2a-d6ae9f807143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.842 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Acquired lock "refresh_cache-4e9758ff-13d1-447b-9a2a-d6ae9f807143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:23:35 compute-0 nova_compute[257476]: 2025-11-24 20:23:35.843 257491 DEBUG nova.network.neutron [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:23:35 compute-0 podman[274549]: 2025-11-24 20:23:35.870034308 +0000 UTC m=+0.089315725 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
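[annotation] The podman record above is a periodic health probe for ovn_metadata_agent: the configured test (/openstack/healthcheck, mounted from /var/lib/openstack/healthchecks) ran inside the container and passed, with a failing streak of 0. The same probe can be triggered by hand; a sketch:

    # Sketch: re-run the container health check podman logged above.
    import subprocess

    rc = subprocess.call(["podman", "healthcheck", "run",
                          "ovn_metadata_agent"])
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")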
Nov 24 20:23:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:36.194+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:36 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:36.407+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:36 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:36 compute-0 nova_compute[257476]: 2025-11-24 20:23:36.421 257491 DEBUG nova.network.neutron [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:23:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:36 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:36 compute-0 nova_compute[257476]: 2025-11-24 20:23:36.743 257491 DEBUG nova.network.neutron [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:23:36 compute-0 nova_compute[257476]: 2025-11-24 20:23:36.761 257491 DEBUG oslo_concurrency.lockutils [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Releasing lock "refresh_cache-4e9758ff-13d1-447b-9a2a-d6ae9f807143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:23:36 compute-0 nova_compute[257476]: 2025-11-24 20:23:36.762 257491 DEBUG nova.compute.manager [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 20:23:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:37.228+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:37 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 558 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Nov 24 20:23:37 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:37.413+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1937 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:37 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:37 compute-0 ceph-mon[75677]: pgmap v1214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 558 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Nov 24 20:23:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:38.207+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:38 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:38 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:38.390+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:38 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:38 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1937 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:39.195+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:39 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 557 KiB/s rd, 2.9 MiB/s wr, 126 op/s
Nov 24 20:23:39 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:39.400+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:39 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:39 compute-0 ceph-mon[75677]: pgmap v1215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 557 KiB/s rd, 2.9 MiB/s wr, 126 op/s
Nov 24 20:23:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:40.245+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:40 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:40 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:40.397+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:23:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:23:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:23:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:23:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
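[annotation] Here the mgr's rbd_support module reloads mirror snapshot schedules for the four RBD pools; the empty start_after= means each scan starts from the beginning. The operator-side view of the same state is the rbd CLI's schedule listing; a sketch (flag spelling per recent releases, worth verifying against the installed version):

    # Sketch: list mirror snapshot schedules per pool, matching the
    # MirrorSnapshotScheduleHandler reload above.
    import json
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        res = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive", "--format", "json"],
            capture_output=True, text=True)
        print(pool, json.loads(res.stdout) if res.returncode == 0 else "n/a")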
Nov 24 20:23:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:40 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:41.200+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:41 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Nov 24 20:23:41 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:41.432+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:41 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:41 compute-0 ceph-mon[75677]: pgmap v1216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Nov 24 20:23:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:42.212+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:42 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:42 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:42.447+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:42 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:43.212+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:43 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 7.8 KiB/s wr, 38 op/s
Nov 24 20:23:43 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:43.490+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:43 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:43 compute-0 ceph-mon[75677]: pgmap v1217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 7.8 KiB/s wr, 38 op/s
Nov 24 20:23:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:44.206+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:44 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:44 compute-0 nova_compute[257476]: 2025-11-24 20:23:44.308 257491 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764015809.3072543, 02cdfa83-3da3-4b21-b297-5885c45a3350 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:23:44 compute-0 nova_compute[257476]: 2025-11-24 20:23:44.309 257491 INFO nova.compute.manager [-] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] VM Stopped (Lifecycle Event)
Nov 24 20:23:44 compute-0 nova_compute[257476]: 2025-11-24 20:23:44.327 257491 DEBUG nova.compute.manager [None req-d87c7555-b515-446e-b6d4-ffa328cecb43 - - - - - -] [instance: 02cdfa83-3da3-4b21-b297-5885c45a3350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
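[annotation] The Stopped lifecycle event and the follow-up "Checking state" are nova reconciling its own record with the hypervisor after the domain disappeared. A sketch of that power-state probe using libvirt-python directly (nova sets the libvirt domain UUID to the instance UUID, so the lookup resolves while the domain still exists):

    # Sketch: query a domain's power state as the "Checking state" step does.
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("02cdfa83-3da3-4b21-b297-5885c45a3350")
        state, _reason = dom.state()
        print("running" if state == libvirt.VIR_DOMAIN_RUNNING
              else f"state={state}")
    except libvirt.libvirtError:
        print("domain gone (already destroyed)")
    finally:
        conn.close()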
Nov 24 20:23:44 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:44.496+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:44 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:23:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:45.256+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:45 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 7.8 KiB/s wr, 38 op/s
Nov 24 20:23:45 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:45.540+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:45 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:45 compute-0 ceph-mon[75677]: pgmap v1218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 7.8 KiB/s wr, 38 op/s
Nov 24 20:23:46 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:46.301+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:46 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:46.584+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:46 compute-0 sudo[274568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:46 compute-0 sudo[274568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:46 compute-0 sudo[274568]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:46 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:46 compute-0 sudo[274593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:23:46 compute-0 sudo[274593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:46 compute-0 sudo[274593]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:46 compute-0 sudo[274618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:46 compute-0 sudo[274618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:46 compute-0 sudo[274618]: pam_unix(sudo:session): session closed for user root
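
The repeated sudo triplets above (/bin/true, /bin/which python3, /bin/true again) are cephadm's per-connection host checks: the mgr's ssh executor confirms passwordless sudo works and locates a python3 interpreter before it ships any real work. The actual payload follows just below: a gather-facts run through the host-side copy of the cephadm script.
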
Nov 24 20:23:46 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 24 20:23:46 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000005.scope: Consumed 12.687s CPU time.
Nov 24 20:23:46 compute-0 systemd-machined[218733]: Machine qemu-4-instance-00000005 terminated.
Nov 24 20:23:46 compute-0 sudo[274643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:23:46 compute-0 sudo[274643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
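
Each OSD re-logs its oldest blocked request once per second via get_health_metrics, and the mon folds the per-OSD counts into the SLOW_OPS health check above (19 ops against default.rgw.log on osd.1, 3 against vms on osd.0; the mon's "21" is presumably one sample behind the 22 reported later). A minimal sketch for tallying the latest per-OSD counts from a journal dump such as this one; the log path is an assumption:

    import re

    # Matches the per-OSD lines above, e.g.
    # "osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(...)"
    PAT = re.compile(r'(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops')

    def tally_slow_ops(lines):
        latest = {}                      # osd name -> most recently reported count
        for line in lines:
            m = PAT.search(line)
            if m:
                latest[m.group(1)] = int(m.group(2))
        return latest

    with open('/var/log/messages') as f:  # assumed path; use a journalctl export if different
        counts = tally_slow_ops(f)
    print(counts, 'total:', sum(counts.values()))  # {'osd.1': 19, 'osd.0': 3} total: 22
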
Nov 24 20:23:47 compute-0 nova_compute[257476]: 2025-11-24 20:23:47.015 257491 INFO nova.virt.libvirt.driver [-] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Instance destroyed successfully.
Nov 24 20:23:47 compute-0 nova_compute[257476]: 2025-11-24 20:23:47.016 257491 DEBUG nova.objects.instance [None req-f5ecc026-66b6-4551-9fdb-f4e77667049c a063a2ef868d49f69d65f7e71b5ba3c2 4f2eef9d4bd64678b12c95861d4f7f9e - - default default] Lazy-loading 'resources' on Instance uuid 4e9758ff-13d1-447b-9a2a-d6ae9f807143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:23:47 compute-0 podman[274668]: 2025-11-24 20:23:47.075707545 +0000 UTC m=+0.117442820 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:23:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:47.259+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:47 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Nov 24 20:23:47 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:47.566+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:47 compute-0 sudo[274643]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:47 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:47 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:47 compute-0 ceph-mon[75677]: pgmap v1219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Nov 24 20:23:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:23:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:23:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:23:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:23:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:23:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:23:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev adb56beb-698b-4c3a-bce3-6195e98362ec does not exist
Nov 24 20:23:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 60884bc8-5eec-4977-baab-da70e0cae230 does not exist
Nov 24 20:23:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 92585a4b-f256-42d6-ad73-07e507bfc36d does not exist
Nov 24 20:23:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:23:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:23:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:23:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:23:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:23:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:23:47 compute-0 sudo[274741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:47 compute-0 sudo[274741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:47 compute-0 sudo[274741]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:47 compute-0 sudo[274766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:23:47 compute-0 sudo[274766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:47 compute-0 sudo[274766]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:48 compute-0 sudo[274791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:48 compute-0 sudo[274791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:48 compute-0 sudo[274791]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:48 compute-0 sudo[274816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:23:48 compute-0 sudo[274816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
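
The sudo line above shows the shape of every cephadm device operation: the host-side cephadm script runs under python3 with a digest-pinned --image, CEPH_VOLUME_OSDSPEC_AFFINITY carries the drive-group name from the service spec, the config and keyring arrive as JSON on stdin (--config-json -), and the actual work is a containerized ceph-volume "lvm batch" over three pre-built LVs, with --no-systemd because cephadm creates its own systemd units afterwards. Stripped of the container wrapper, a roughly equivalent bare invocation would look like the sketch below (assumes ceph-volume installed on the host PATH, which cephadm deliberately avoids relying on):

    import subprocess

    lvs = ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1', '/dev/ceph_vg2/ceph_lv2']
    # Same flags as logged: explicit LV list, non-interactive, no systemd units created
    subprocess.run(['ceph-volume', 'lvm', 'batch', '--no-auto', *lvs,
                    '--yes', '--no-systemd'], check=True)
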
Nov 24 20:23:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:48.237+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:48 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:48 compute-0 sshd-session[274710]: Invalid user work from 182.93.7.194 port 48600
Nov 24 20:23:48 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:48.529+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.617383236 +0000 UTC m=+0.078142320 container create f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:23:48 compute-0 sshd-session[274710]: Received disconnect from 182.93.7.194 port 48600:11: Bye Bye [preauth]
Nov 24 20:23:48 compute-0 sshd-session[274710]: Disconnected from invalid user work 182.93.7.194 port 48600 [preauth]
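
Unrelated to the deployment, the three sshd-session lines record a failed login probe: "work" is not a valid user, and 182.93.7.194 disconnected before authenticating. When reading a mixed journal like this one it helps to split security events out from the orchestration noise; a quick filter, assuming the sshd messages also land in /var/log/secure (the RHEL default):

    import re

    PAT = re.compile(r'Invalid user (\S+) from (\S+) port (\d+)')
    with open('/var/log/secure') as f:   # assumed path
        for line in f:
            m = PAT.search(line)
            if m:
                user, src, port = m.groups()
                print(f'{src}:{port} probed invalid user {user!r}')
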
Nov 24 20:23:48 compute-0 systemd[1]: Started libpod-conmon-f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed.scope.
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.585192278 +0000 UTC m=+0.045951392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:23:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:23:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:48 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:23:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:23:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:23:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:23:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:23:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.713691579 +0000 UTC m=+0.174450643 container init f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.725029868 +0000 UTC m=+0.185788922 container start f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.728720798 +0000 UTC m=+0.189479842 container attach f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_curran, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:23:48 compute-0 sleepy_curran[274897]: 167 167
Nov 24 20:23:48 compute-0 systemd[1]: libpod-f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed.scope: Deactivated successfully.
Nov 24 20:23:48 compute-0 conmon[274897]: conmon f4b2f67725902a08bc43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed.scope/container/memory.events
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.733118918 +0000 UTC m=+0.193877992 container died f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-cffae0a6e68d1f72563e3e861fdcda2529d2abf6004e7e7c53a224a7cc5a7f43-merged.mount: Deactivated successfully.
Nov 24 20:23:48 compute-0 podman[274881]: 2025-11-24 20:23:48.781429084 +0000 UTC m=+0.242188138 container remove f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:23:48 compute-0 systemd[1]: libpod-conmon-f4b2f67725902a08bc431a7d1abbd9ed01cf2e9e411445666b3b368a738e4bed.scope: Deactivated successfully.
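
sleepy_curran was one of cephadm's throwaway probe containers: its single line of output, "167 167", is consistent with the uid/gid of the ceph user inside the Ceph container image, which cephadm looks up before writing files the daemons must own. The podman monotonic offsets (the m=+... values) show how short-lived the probe was:

    # 'container start' vs 'container died' offsets from the podman lines above
    start, died = 0.185788922, 0.193877992
    print(f'probe ran for ~{(died - start) * 1e3:.1f} ms')   # ~8.1 ms
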
Nov 24 20:23:49 compute-0 podman[274920]: 2025-11-24 20:23:49.009904159 +0000 UTC m=+0.068648172 container create ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:23:49 compute-0 systemd[1]: Started libpod-conmon-ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3.scope.
Nov 24 20:23:49 compute-0 podman[274920]: 2025-11-24 20:23:48.976207841 +0000 UTC m=+0.034951934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:23:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b93705d95aeb2e4449ef4fc1cef700d3bf54b4e9f7a8993e45e39522e88847/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b93705d95aeb2e4449ef4fc1cef700d3bf54b4e9f7a8993e45e39522e88847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b93705d95aeb2e4449ef4fc1cef700d3bf54b4e9f7a8993e45e39522e88847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b93705d95aeb2e4449ef4fc1cef700d3bf54b4e9f7a8993e45e39522e88847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b93705d95aeb2e4449ef4fc1cef700d3bf54b4e9f7a8993e45e39522e88847/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
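
The xfs "supports timestamps until 2038" lines are informational, printed once per bind mount as the probe container's rootfs and ceph paths are remounted: the underlying xfs filesystem uses 32-bit inode timestamps, which run out at epoch 0x7fffffff. Decoding that limit:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the largest signed 32-bit epoch second
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
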
Nov 24 20:23:49 compute-0 podman[274920]: 2025-11-24 20:23:49.139196961 +0000 UTC m=+0.197941014 container init ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:23:49 compute-0 podman[274920]: 2025-11-24 20:23:49.155147786 +0000 UTC m=+0.213891799 container start ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:23:49 compute-0 podman[274920]: 2025-11-24 20:23:49.160555894 +0000 UTC m=+0.219299897 container attach ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sammet, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:23:49 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:49.278+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s
Nov 24 20:23:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:49.549+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:49 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:49 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:49 compute-0 ceph-mon[75677]: pgmap v1220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s
Nov 24 20:23:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:50.268+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:50 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:50 compute-0 kind_sammet[274937]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:23:50 compute-0 kind_sammet[274937]: --> relative data size: 1.0
Nov 24 20:23:50 compute-0 kind_sammet[274937]: --> All data devices are unavailable
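
kind_sammet is the containerized "lvm batch" launched by the sudo line above. Its summary (3 LVM data devices passed, relative data size 1.0, then "All data devices are unavailable") is what batch typically reports when the LVs already carry prepared OSDs, so there is nothing new to create. Rather than failing, cephadm falls back to "ceph-volume lvm list" a few lines below to reconcile what already exists; see the parsing sketch after that command.
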
Nov 24 20:23:50 compute-0 systemd[1]: libpod-ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3.scope: Deactivated successfully.
Nov 24 20:23:50 compute-0 systemd[1]: libpod-ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3.scope: Consumed 1.258s CPU time.
Nov 24 20:23:50 compute-0 podman[274920]: 2025-11-24 20:23:50.447510954 +0000 UTC m=+1.506254947 container died ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:23:50 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:50.501+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-41b93705d95aeb2e4449ef4fc1cef700d3bf54b4e9f7a8993e45e39522e88847-merged.mount: Deactivated successfully.
Nov 24 20:23:50 compute-0 podman[274920]: 2025-11-24 20:23:50.711215908 +0000 UTC m=+1.769959921 container remove ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sammet, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:23:50 compute-0 systemd[1]: libpod-conmon-ead27e09c67706024f369510845f8d50066defad5b2f721f854bba01f0ea09c3.scope: Deactivated successfully.
Nov 24 20:23:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:50 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:50 compute-0 sudo[274816]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:50 compute-0 sudo[274980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:50 compute-0 sudo[274980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:50 compute-0 sudo[274980]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:50 compute-0 sudo[275005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:23:50 compute-0 sudo[275005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:51 compute-0 sudo[275005]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:51 compute-0 sudo[275036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:51 compute-0 sudo[275036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:51 compute-0 sudo[275036]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:51 compute-0 sudo[275075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:23:51 compute-0 sudo[275075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
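
This call runs "ceph-volume lvm list --format json" inside another one-shot container (silly_euclid below); it enumerates every LV tagged with ceph.* metadata so the orchestrator can map the OSDs that are already prepared (the output shows osd IDs 0 and 1 backed by /dev/loop3 and /dev/loop4). A minimal consumer of that JSON, assuming it has been saved to a hypothetical lvm_list.json:

    import json

    with open('lvm_list.json') as f:     # e.g. the silly_euclid output captured below
        listing = json.load(f)
    for osd_id, entries in listing.items():
        for e in entries:
            print(f"osd.{osd_id}: {e['lv_path']} on {e['devices'][0]} "
                  f"(osd_fsid {e['tags']['ceph.osd_fsid']})")
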
Nov 24 20:23:51 compute-0 podman[275029]: 2025-11-24 20:23:51.201093314 +0000 UTC m=+0.186034299 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 24 20:23:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:51.303+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:51 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:23:51 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:51.529+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.739343177 +0000 UTC m=+0.082883458 container create 138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:23:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:51 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:51 compute-0 ceph-mon[75677]: pgmap v1221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:23:51 compute-0 systemd[1]: Started libpod-conmon-138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd.scope.
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.705987668 +0000 UTC m=+0.049527999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:23:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.864421375 +0000 UTC m=+0.207961706 container init 138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.878432357 +0000 UTC m=+0.221972618 container start 138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.882056165 +0000 UTC m=+0.225596466 container attach 138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:23:51 compute-0 intelligent_newton[275163]: 167 167
Nov 24 20:23:51 compute-0 systemd[1]: libpod-138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd.scope: Deactivated successfully.
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.888182912 +0000 UTC m=+0.231723213 container died 138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6720b6c6db40f60f698ec1a9524c3d532532a1e0608b5f1f0d68f05334315303-merged.mount: Deactivated successfully.
Nov 24 20:23:51 compute-0 podman[275147]: 2025-11-24 20:23:51.940166898 +0000 UTC m=+0.283707179 container remove 138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_newton, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:23:51 compute-0 systemd[1]: libpod-conmon-138e8f3e0f753f3faaa9538f4cffd37baca2c531b4149f212ffc36df118371cd.scope: Deactivated successfully.
Nov 24 20:23:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 1952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
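
The health check now counts 22 slow ops (19 + 3, matching the per-OSD reports) with the oldest blocked for 1952 seconds. Subtracting that from the 20:23:51 timestamp dates the stuck op to roughly 19:51:19, i.e. the condition predates this log window by over half an hour:

    from datetime import datetime, timedelta

    # oldest-op age from the SLOW_OPS health check, applied to the mon's timestamp
    print(datetime(2025, 11, 24, 20, 23, 51) - timedelta(seconds=1952))
    # -> 2025-11-24 19:51:19
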
Nov 24 20:23:52 compute-0 podman[275185]: 2025-11-24 20:23:52.194548119 +0000 UTC m=+0.074005798 container create b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:23:52 compute-0 systemd[1]: Started libpod-conmon-b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e.scope.
Nov 24 20:23:52 compute-0 podman[275185]: 2025-11-24 20:23:52.167639735 +0000 UTC m=+0.047097484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:23:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:52.257+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:52 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48793b8ce824bb04a7c7444496584467a85a7dbc28f80ff1b0ec89c25c10f1d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48793b8ce824bb04a7c7444496584467a85a7dbc28f80ff1b0ec89c25c10f1d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48793b8ce824bb04a7c7444496584467a85a7dbc28f80ff1b0ec89c25c10f1d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48793b8ce824bb04a7c7444496584467a85a7dbc28f80ff1b0ec89c25c10f1d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:52 compute-0 podman[275185]: 2025-11-24 20:23:52.30102869 +0000 UTC m=+0.180486439 container init b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:23:52 compute-0 podman[275185]: 2025-11-24 20:23:52.309511591 +0000 UTC m=+0.188969280 container start b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:23:52 compute-0 podman[275185]: 2025-11-24 20:23:52.313482688 +0000 UTC m=+0.192940427 container attach b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:23:52 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:52.536+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:53.288+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:53 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:53 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:53 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 1952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:23:53 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:53.526+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:53 compute-0 silly_euclid[275202]: {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:     "0": [
Nov 24 20:23:53 compute-0 silly_euclid[275202]:         {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "devices": [
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "/dev/loop3"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             ],
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_name": "ceph_lv0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_size": "21470642176",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "name": "ceph_lv0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "tags": {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cluster_name": "ceph",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.crush_device_class": "",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.encrypted": "0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osd_id": "0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.type": "block",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.vdo": "0"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             },
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "type": "block",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "vg_name": "ceph_vg0"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:         }
Nov 24 20:23:53 compute-0 silly_euclid[275202]:     ],
Nov 24 20:23:53 compute-0 silly_euclid[275202]:     "1": [
Nov 24 20:23:53 compute-0 silly_euclid[275202]:         {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "devices": [
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "/dev/loop4"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             ],
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_name": "ceph_lv1",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_size": "21470642176",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "name": "ceph_lv1",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "tags": {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cluster_name": "ceph",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.crush_device_class": "",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.encrypted": "0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osd_id": "1",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.type": "block",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.vdo": "0"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             },
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "type": "block",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "vg_name": "ceph_vg1"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:         }
Nov 24 20:23:53 compute-0 silly_euclid[275202]:     ],
Nov 24 20:23:53 compute-0 silly_euclid[275202]:     "2": [
Nov 24 20:23:53 compute-0 silly_euclid[275202]:         {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "devices": [
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "/dev/loop5"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             ],
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_name": "ceph_lv2",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_size": "21470642176",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "name": "ceph_lv2",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "tags": {
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.cluster_name": "ceph",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.crush_device_class": "",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.encrypted": "0",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osd_id": "2",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.type": "block",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:                 "ceph.vdo": "0"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             },
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "type": "block",
Nov 24 20:23:53 compute-0 silly_euclid[275202]:             "vg_name": "ceph_vg2"
Nov 24 20:23:53 compute-0 silly_euclid[275202]:         }
Nov 24 20:23:53 compute-0 silly_euclid[275202]:     ]
Nov 24 20:23:53 compute-0 silly_euclid[275202]: }
Nov 24 20:23:53 compute-0 systemd[1]: libpod-b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e.scope: Deactivated successfully.
Nov 24 20:23:53 compute-0 podman[275185]: 2025-11-24 20:23:53.702990763 +0000 UTC m=+1.582448412 container died b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-48793b8ce824bb04a7c7444496584467a85a7dbc28f80ff1b0ec89c25c10f1d9-merged.mount: Deactivated successfully.
Nov 24 20:23:54 compute-0 podman[275185]: 2025-11-24 20:23:54.195992204 +0000 UTC m=+2.075449903 container remove b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:23:54 compute-0 sudo[275075]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:54.270+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:54 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:54 compute-0 systemd[1]: libpod-conmon-b1989b55eff376bbca8911476fda33ff4bf66ca9ea71ef48c575393b20bc222e.scope: Deactivated successfully.
Nov 24 20:23:54 compute-0 sudo[275224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:54 compute-0 sudo[275224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:54 compute-0 sudo[275224]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:54 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:54 compute-0 ceph-mon[75677]: pgmap v1222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2 op/s
Nov 24 20:23:54 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:23:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:23:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:23:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:23:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:23:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:23:54 compute-0 sudo[275250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:23:54 compute-0 sudo[275250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:54 compute-0 sudo[275250]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:54.560+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:54 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:54 compute-0 sudo[275275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:54 compute-0 sudo[275275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:54 compute-0 sudo[275275]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:54 compute-0 sudo[275300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:23:54 compute-0 sudo[275300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.173198736 +0000 UTC m=+0.074663035 container create 8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 24 20:23:55 compute-0 systemd[1]: Started libpod-conmon-8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc.scope.
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.138927903 +0000 UTC m=+0.040392252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:23:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:55.234+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:55 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.285047143 +0000 UTC m=+0.186511482 container init 8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.298192422 +0000 UTC m=+0.199656711 container start 8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.302503939 +0000 UTC m=+0.203968288 container attach 8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:23:55 compute-0 happy_cohen[275381]: 167 167
Nov 24 20:23:55 compute-0 systemd[1]: libpod-8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc.scope: Deactivated successfully.
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.307852584 +0000 UTC m=+0.209316883 container died 8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-77dda6f9024cfb363ca53fb98caa692810bcda69328fbfab0c018249d39529b3-merged.mount: Deactivated successfully.
Nov 24 20:23:55 compute-0 podman[275365]: 2025-11-24 20:23:55.352386538 +0000 UTC m=+0.253850807 container remove 8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cohen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:23:55 compute-0 systemd[1]: libpod-conmon-8aa6afb3ec4f0f1c50235d03c58d8c344c50e5386a73177a2460c1c5543608dc.scope: Deactivated successfully.
Nov 24 20:23:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:23:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:55 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:55.573+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:55 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:55 compute-0 podman[275405]: 2025-11-24 20:23:55.596614121 +0000 UTC m=+0.050775134 container create d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bassi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 20:23:55 compute-0 systemd[1]: Started libpod-conmon-d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281.scope.
Nov 24 20:23:55 compute-0 podman[275405]: 2025-11-24 20:23:55.575902747 +0000 UTC m=+0.030063810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:23:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a071f704a36774cc2fe6d54eef379865086ce3f9f6ee45b21322bd0e3fd6476/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a071f704a36774cc2fe6d54eef379865086ce3f9f6ee45b21322bd0e3fd6476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a071f704a36774cc2fe6d54eef379865086ce3f9f6ee45b21322bd0e3fd6476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a071f704a36774cc2fe6d54eef379865086ce3f9f6ee45b21322bd0e3fd6476/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:23:55 compute-0 podman[275405]: 2025-11-24 20:23:55.700683766 +0000 UTC m=+0.154844779 container init d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:23:55 compute-0 podman[275405]: 2025-11-24 20:23:55.71441913 +0000 UTC m=+0.168580113 container start d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 20:23:55 compute-0 podman[275405]: 2025-11-24 20:23:55.717836863 +0000 UTC m=+0.171997876 container attach d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:23:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:56.264+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:56 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:56 compute-0 ceph-mon[75677]: pgmap v1223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:23:56 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:56.542+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:56 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:56 compute-0 trusting_bassi[275421]: {
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "osd_id": 2,
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "type": "bluestore"
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:     },
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "osd_id": 1,
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "type": "bluestore"
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:     },
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "osd_id": 0,
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:         "type": "bluestore"
Nov 24 20:23:56 compute-0 trusting_bassi[275421]:     }
Nov 24 20:23:56 compute-0 trusting_bassi[275421]: }
Nov 24 20:23:56 compute-0 systemd[1]: libpod-d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281.scope: Deactivated successfully.
Nov 24 20:23:56 compute-0 systemd[1]: libpod-d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281.scope: Consumed 1.121s CPU time.
Nov 24 20:23:56 compute-0 podman[275454]: 2025-11-24 20:23:56.899789513 +0000 UTC m=+0.041119320 container died d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a071f704a36774cc2fe6d54eef379865086ce3f9f6ee45b21322bd0e3fd6476-merged.mount: Deactivated successfully.
Nov 24 20:23:56 compute-0 podman[275454]: 2025-11-24 20:23:56.96316573 +0000 UTC m=+0.104495457 container remove d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 20:23:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:23:56 compute-0 systemd[1]: libpod-conmon-d138eb959f2159a0918680364e90f6f22373a3495673eef7ebc309d908dba281.scope: Deactivated successfully.
Nov 24 20:23:57 compute-0 sudo[275300]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:23:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:23:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:23:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:23:57 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7fc1052d-8ff4-430d-ab97-c64403b38e6b does not exist
Nov 24 20:23:57 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9e357bed-f17a-40c6-b301-bef6db7fb37b does not exist
Nov 24 20:23:57 compute-0 sudo[275469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:23:57 compute-0 sudo[275469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:57 compute-0 sudo[275469]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:57 compute-0 sudo[275494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:23:57 compute-0 sudo[275494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:23:57 compute-0 sudo[275494]: pam_unix(sudo:session): session closed for user root
Nov 24 20:23:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:57.302+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:57 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:23:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:57.536+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:57 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:58 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:23:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:23:58 compute-0 ceph-mon[75677]: pgmap v1224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Nov 24 20:23:58 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 1957 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:58.318+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:58 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:58.556+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:58 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:59 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:23:59 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 1957 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:23:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:23:59.362+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:59 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:23:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:23:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 3 op/s
Nov 24 20:23:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:23:59.522+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:59 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:23:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:24:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:00 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:24:00 compute-0 ceph-mon[75677]: pgmap v1225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 3 op/s
Nov 24 20:24:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:00.393+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:00 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:00.518+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:00 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:01 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 3 ])
Nov 24 20:24:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 20:24:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:01.436+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:01 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:01.545+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:01 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:02 compute-0 nova_compute[257476]: 2025-11-24 20:24:02.014 257491 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764015827.0116758, 4e9758ff-13d1-447b-9a2a-d6ae9f807143 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:24:02 compute-0 nova_compute[257476]: 2025-11-24 20:24:02.014 257491 INFO nova.compute.manager [-] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] VM Stopped (Lifecycle Event)
Nov 24 20:24:02 compute-0 nova_compute[257476]: 2025-11-24 20:24:02.039 257491 DEBUG nova.compute.manager [None req-c4c9996e-a67c-471a-ba90-87a1ed3a8919 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:24:02 compute-0 nova_compute[257476]: 2025-11-24 20:24:02.045 257491 DEBUG nova.compute.manager [None req-c4c9996e-a67c-471a-ba90-87a1ed3a8919 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:24:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:02 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:02 compute-0 ceph-mon[75677]: pgmap v1226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 20:24:02 compute-0 nova_compute[257476]: 2025-11-24 20:24:02.074 257491 INFO nova.compute.manager [None req-c4c9996e-a67c-471a-ba90-87a1ed3a8919 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 24 20:24:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:02.467+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:02 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:02.516+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:02 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:03 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 20:24:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:03.478+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:03 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:03.506+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:03 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:04 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:04 compute-0 ceph-mon[75677]: pgmap v1227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 20:24:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:04.461+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:04 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:04.507+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:04 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:05 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 20:24:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:05.430+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:05 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:05.475+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:05 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 1962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:06 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:06 compute-0 ceph-mon[75677]: pgmap v1228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 20:24:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:06.410+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:06 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:06.449+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:06 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:06 compute-0 podman[275519]: 2025-11-24 20:24:06.860753192 +0000 UTC m=+0.085542472 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
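
Note: the podman event above is the periodic healthcheck for the ovn_metadata_agent container (test '/openstack/healthcheck', health_status=healthy, failing streak 0). The same probe can be run on demand; a sketch, assuming podman's standard healthcheck tooling:

    podman healthcheck run ovn_metadata_agent     # exits 0 when the check passes
    podman inspect ovn_metadata_agent | grep -iA3 '"Health'
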
Nov 24 20:24:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:07 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:07 compute-0 ceph-mon[75677]: Health check update: 26 slow ops, oldest one blocked for 1962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 20:24:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:07.427+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:07 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:07.431+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:07 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:08 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:08 compute-0 ceph-mon[75677]: pgmap v1229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 20:24:08 compute-0 nova_compute[257476]: 2025-11-24 20:24:08.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:08.441+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:08 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:08.472+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:08 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:09 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:24:09.377 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:24:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:24:09.378 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:24:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:24:09.378 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:24:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 20:24:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:09.429+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:09 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:09.456+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:09 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:10 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:10 compute-0 ceph-mon[75677]: pgmap v1230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 20:24:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:10.410+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:10 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:10.471+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:10 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:11 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:11 compute-0 nova_compute[257476]: 2025-11-24 20:24:11.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 20:24:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:11.459+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:11 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:11.492+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:11 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 26 slow ops, oldest one blocked for 1967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:12 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:12 compute-0 ceph-mon[75677]: pgmap v1231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 20:24:12 compute-0 ceph-mon[75677]: Health check update: 26 slow ops, oldest one blocked for 1967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:12.504+0000 7f2ca3ee7640 -1 osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:12 compute-0 ceph-osd[88624]: osd.0 137 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:12.507+0000 7f1a67169640 -1 osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:12 compute-0 ceph-osd[89640]: osd.1 137 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2253366057"} v 0) v1
Nov 24 20:24:12 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/500399796' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2253366057"}]: dispatch
Nov 24 20:24:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 24 20:24:13 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/500399796' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2253366057"}]': finished
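
Note: the audit trail above records client.openstack (from 192.168.122.100) asking the monitor to blocklist a client address, which is typically done to fence a dead RBD client session during volume cleanup; the blocklist entry lands in the osdmap, which is why the epoch bumps from e137 to e138 in the lines that follow. The equivalent CLI, using the address from this log:

    ceph osd blocklist add 192.168.122.100:0/2253366057
    ceph osd blocklist ls    # shows the entry with its expiry time
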
Nov 24 20:24:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 24 20:24:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 24 20:24:13 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:13 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/500399796' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2253366057"}]: dispatch
Nov 24 20:24:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Nov 24 20:24:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:13.532+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:13.537+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:14 compute-0 ceph-mon[75677]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 7 ])
Nov 24 20:24:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:14 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/500399796' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.100:0/2253366057"}]': finished
Nov 24 20:24:14 compute-0 ceph-mon[75677]: osdmap e138: 3 total, 3 up, 3 in
Nov 24 20:24:14 compute-0 ceph-mon[75677]: pgmap v1233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 203 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Nov 24 20:24:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:14.518+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:14.532+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.146 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.149 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.183 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.183 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.184 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.184 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.184 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:24:15 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 185 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 204 B/s wr, 11 op/s
Nov 24 20:24:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:15.531+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:15.537+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:24:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644600025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.695 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
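
Note: update_available_resource sizes RBD-backed storage by shelling out to ceph df (the 0.511 s run logged above). A minimal Python sketch of the same probe; the JSON keys (stats.total_bytes, stats.total_avail_bytes) are assumptions based on recent Ceph releases, not quoted from this log:

    import json
    import subprocess

    # Same invocation nova logs above: cephx user 'openstack', default conf path.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")
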
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.796 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.796 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.802 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:24:15 compute-0 nova_compute[257476]: 2025-11-24 20:24:15.802 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.002 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.003 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4913MB free_disk=59.90910720825195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.004 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.005 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.112 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.113 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.113 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.113 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance db8c22d1-e16d-49f8-b4a5-ba8e87849ea3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.114 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.114 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:24:16 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:16 compute-0 ceph-mon[75677]: pgmap v1234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 185 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 204 B/s wr, 11 op/s
Nov 24 20:24:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/644600025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.240 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:24:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:24:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3309629027' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:24:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:24:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3309629027' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:24:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:16.491+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:16.551+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:24:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/813429153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.660 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.669 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.683 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
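
Note: placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio, so the figures logged above work out to 32 schedulable VCPUs, 7167 MB of RAM, and ~52 GB of disk. A quick check with the logged numbers:

    inv = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
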
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.703 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:24:16 compute-0 nova_compute[257476]: 2025-11-24 20:24:16.703 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:24:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1972 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.982064) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015856982107, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1549, "num_deletes": 507, "total_data_size": 1456821, "memory_usage": 1495560, "flush_reason": "Manual Compaction"}
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015856995480, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1379802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33244, "largest_seqno": 34792, "table_properties": {"data_size": 1373169, "index_size": 3003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21110, "raw_average_key_size": 20, "raw_value_size": 1356515, "raw_average_value_size": 1337, "num_data_blocks": 131, "num_entries": 1014, "num_filter_entries": 1014, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015777, "oldest_key_time": 1764015777, "file_creation_time": 1764015856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 13569 microseconds, and 8717 cpu microseconds.
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.995564) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1379802 bytes OK
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.995662) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.997370) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.997400) EVENT_LOG_v1 {"time_micros": 1764015856997390, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.997431) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1448471, prev total WAL file size 1448471, number of live WAL files 2.
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.998679) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323630' seq:72057594037927935, type:22 .. '6C6F676D0031353133' seq:0, type:0; will stop at (end)
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1347KB)], [68(10174KB)]
Nov 24 20:24:16 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015856998742, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11798686, "oldest_snapshot_seqno": -1}
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 8806 keys, 8320439 bytes, temperature: kUnknown
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015857074164, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8320439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8269966, "index_size": 27385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 236936, "raw_average_key_size": 26, "raw_value_size": 8116986, "raw_average_value_size": 921, "num_data_blocks": 1054, "num_entries": 8806, "num_filter_entries": 8806, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.074566) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8320439 bytes
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.075831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.2 rd, 110.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.9 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(14.6) write-amplify(6.0) OK, records in: 9833, records dropped: 1027 output_compression: NoCompression
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.075860) EVENT_LOG_v1 {"time_micros": 1764015857075846, "job": 38, "event": "compaction_finished", "compaction_time_micros": 75530, "compaction_time_cpu_micros": 47000, "output_level": 6, "num_output_files": 1, "total_output_size": 8320439, "num_input_records": 9833, "num_output_records": 8806, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
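
Note: the compaction stats above are internally consistent: write-amplify(6.0) is output over new L0 input, 7.9 MB / 1.3 MB ≈ 6.1; read-write-amplify(14.6) counts every byte touched, (1.3 + 9.9 + 7.9) / 1.3 ≈ 14.7; and 11798686 bytes read in 75530 µs is the reported 156.2 MB/s (8320439 bytes written gives 110.2 MB/s).
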
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015857076525, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015857080420, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:16.998534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.080522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.080530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.080532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.080534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:24:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:24:17.080537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:24:17 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3309629027' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:24:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3309629027' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:24:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/813429153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:24:17 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1972 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 511 B/s wr, 14 op/s
Nov 24 20:24:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:17.518+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:17.555+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.705 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.706 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.707 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.737 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.737 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.737 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.738 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.738 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.739 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:17 compute-0 nova_compute[257476]: 2025-11-24 20:24:17.739 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:17 compute-0 podman[275583]: 2025-11-24 20:24:17.865295038 +0000 UTC m=+0.091887245 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 20:24:18 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:18 compute-0 ceph-mon[75677]: pgmap v1235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 511 B/s wr, 14 op/s
Nov 24 20:24:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:18.549+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:18.602+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:19 compute-0 nova_compute[257476]: 2025-11-24 20:24:19.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:24:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 13 op/s
Nov 24 20:24:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:19.674+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:19.675+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:19 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:19 compute-0 ceph-mon[75677]: pgmap v1236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 13 op/s
Nov 24 20:24:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:20.636+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:20.652+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:20 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:20 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 13 op/s
Nov 24 20:24:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:21.674+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:21.695+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:21 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:21 compute-0 ceph-mon[75677]: pgmap v1237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 13 op/s
Nov 24 20:24:21 compute-0 podman[275603]: 2025-11-24 20:24:21.902503702 +0000 UTC m=+0.134091814 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:24:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:22.652+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:22.652+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:22 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:22 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 499 B/s wr, 13 op/s
Nov 24 20:24:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:23.640+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:23.657+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:23 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:23 compute-0 ceph-mon[75677]: pgmap v1238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 499 B/s wr, 13 op/s
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:24:24
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Nov 24 20:24:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:24:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:24.615+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:24.626+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:24 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 426 B/s wr, 11 op/s
Nov 24 20:24:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:25.606+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:25.631+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:25 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:25 compute-0 ceph-mon[75677]: pgmap v1239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 426 B/s wr, 11 op/s
Nov 24 20:24:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:26.575+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:26.648+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:26 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 2 op/s
Nov 24 20:24:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:27.563+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:27.657+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:27 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:27 compute-0 ceph-mon[75677]: pgmap v1240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 2 op/s
Nov 24 20:24:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:28.551+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:28.701+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:28 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:28 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:29.509+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:29.726+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:29 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:29 compute-0 ceph-mon[75677]: pgmap v1241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:30.548+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:30.699+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:30 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:31.564+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:31.714+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:31 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:31 compute-0 ceph-mon[75677]: pgmap v1242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:32.540+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:32.722+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:32 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:33.537+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:33.702+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:33 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:33 compute-0 ceph-mon[75677]: pgmap v1243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:34.508+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:34.705+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005666567973888129 of space, bias 1.0, pg target 0.16999703921664386 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:24:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:24:34 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:35.471+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:35.666+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:35 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:35 compute-0 ceph-mon[75677]: pgmap v1244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:36.482+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:36.626+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1992 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:36 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:36 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1992 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:37.504+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:37.662+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:37 compute-0 podman[275629]: 2025-11-24 20:24:37.898637223 +0000 UTC m=+0.095725438 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 20:24:38 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:38 compute-0 ceph-mon[75677]: pgmap v1245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
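Each pgmap vN line is the mgr's periodic digest of placement-group state, echoed by the mon; active+clean+laggy marks PGs that are still serving I/O but whose OSDs are acknowledging slowly, which matches the two OSDs reporting slow ops throughout this window. A sketch for isolating just those PGs, assuming the stock ceph pg ls CLI and its JSON field names:

    import json, subprocess

    def laggy_pgs() -> list[str]:
        """Return the ids of PGs currently reported in the 'laggy' state."""
        out = subprocess.run(
            ["ceph", "pg", "ls", "laggy", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return [pg["pgid"] for pg in json.loads(out.stdout)["pg_stats"]]

    # Expected here: the 2 PGs counted as active+clean+laggy in pgmap v1245.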
Nov 24 20:24:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:38.494+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:38.699+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:39 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:39.530+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:39.732+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:40 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:40 compute-0 ceph-mon[75677]: pgmap v1246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:24:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:24:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:24:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:24:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
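The four load_schedules lines show the mgr rbd_support module rescanning its mirror-snapshot schedules across the RBD pools (vms, volumes, backups, images). They also connect to the stuck op on osd.0: its oldest slow op is an omap-get-vals read of the rbd_trash_purge_schedule object, i.e. the module's own schedule reload is queued behind a laggy PG in the vms pool. A sketch of dumping the configured purge schedules, assuming the stock rbd CLI:

    import json, subprocess

    def trash_purge_schedules():
        """List rbd trash-purge schedules for all pools as parsed JSON."""
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls", "--recursive", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)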
Nov 24 20:24:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:40.554+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:40.758+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:41 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:41.549+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:41.766+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 21 slow ops, oldest one blocked for 1997 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
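SLOW_OPS is the mon-side health check that aggregates the per-OSD get_health_metrics reports above; "blocked for 1997 sec" dates the oldest stuck op back to roughly 19:51. A sketch for pulling the active check programmatically, with field names per the stock health detail JSON layout:

    import json, subprocess

    def slow_ops_summary():
        """Return the SLOW_OPS health-check summary message, or None if absent."""
        out = subprocess.run(
            ["ceph", "health", "detail", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        checks = json.loads(out.stdout).get("checks", {})
        slow = checks.get("SLOW_OPS")
        return slow["summary"]["message"] if slow else None

    # Here this would return "28 slow ops, oldest one blocked for ... sec, ...".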
Nov 24 20:24:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:42 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:42 compute-0 ceph-mon[75677]: pgmap v1247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:42 compute-0 ceph-mon[75677]: Health check update: 21 slow ops, oldest one blocked for 1997 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:42.592+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:42.811+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:43 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:43.548+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:43.838+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:44 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:24:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:44 compute-0 ceph-mon[75677]: pgmap v1248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:44.573+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:44.843+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:45 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:45.559+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:45.828+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:46 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:46 compute-0 ceph-mon[75677]: pgmap v1249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:46.542+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:46.845+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2002 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:47 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:47 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2002 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:47.522+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:47.822+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:48 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:48 compute-0 ceph-mon[75677]: pgmap v1250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:48.549+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:48 compute-0 podman[275648]: 2025-11-24 20:24:48.830691368 +0000 UTC m=+0.066352029 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:24:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:48.865+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:49 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:49.513+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:49.879+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:50 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:50 compute-0 ceph-mon[75677]: pgmap v1251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:50.532+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:50.836+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:51 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:51.500+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:51.837+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2007 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:52 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:52 compute-0 ceph-mon[75677]: pgmap v1252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:52 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2007 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:52.511+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:52.807+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:52 compute-0 podman[275668]: 2025-11-24 20:24:52.895878424 +0000 UTC m=+0.119881937 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:24:53 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:53.531+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:53.761+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:54 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:54 compute-0 ceph-mon[75677]: pgmap v1253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:24:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:24:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:24:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:24:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:24:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:24:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:54.489+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:54.730+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:55 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:55.478+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:55.737+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:56 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:56 compute-0 ceph-mon[75677]: pgmap v1254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:56.472+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:56.771+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2012 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:24:57 compute-0 sudo[275695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:24:57 compute-0 sudo[275695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:57 compute-0 sudo[275695]: pam_unix(sudo:session): session closed for user root
Nov 24 20:24:57 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:57 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2012 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:24:57 compute-0 sudo[275720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:24:57 compute-0 sudo[275720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:57 compute-0 sudo[275720]: pam_unix(sudo:session): session closed for user root
Nov 24 20:24:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:57.444+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:57 compute-0 sudo[275745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:24:57 compute-0 sudo[275745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:57 compute-0 sudo[275745]: pam_unix(sudo:session): session closed for user root
Nov 24 20:24:57 compute-0 sudo[275770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:24:57 compute-0 sudo[275770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:57.731+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:58 compute-0 sudo[275770]: pam_unix(sudo:session): session closed for user root
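The sudo burst from ceph-admin above is the cephadm orchestrator's SSH check-in against this host: /bin/true proves passwordless sudo works, which python3 locates an interpreter, and then the host's deployed cephadm binary runs gather-facts, printing a JSON blob of host facts (hostname, kernel, memory, NICs, disks) back to the mgr. A minimal re-run of the same step; the path and --timeout value are taken verbatim from the log, and the output layout is per stock cephadm:

    import json, subprocess

    CEPHADM = ("/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def gather_facts() -> dict:
        """Run cephadm gather-facts as root and parse the JSON host-facts blob."""
        out = subprocess.run(
            ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)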
Nov 24 20:24:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:24:58 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:24:58 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:24:58 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:24:58 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b1056a8a-713b-4b08-af4a-5cee8326caae does not exist
Nov 24 20:24:58 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1edcf177-72c6-4596-8f15-3af33bdb878e does not exist
Nov 24 20:24:58 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2fbce51f-2803-45b7-9708-09e389ca1d65 does not exist
Nov 24 20:24:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:24:58 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:24:58 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:24:58 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
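The handle_command/audit pairs above are mon commands issued by the mgr (entity mgr.compute-0.ofslrn) as it prepares to re-provision OSDs: regenerate a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keyrings, and list any destroyed OSDs in the CRUSH tree. The same path is reachable from Python through librados' mon_command, which takes the identical JSON command; a sketch using the stock rados bindings:

    import json, rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # The same command the mgr dispatched at 20:24:58.
        cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        destroyed = json.loads(outbuf) if ret == 0 else None
    finally:
        cluster.shutdown()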
Nov 24 20:24:58 compute-0 sudo[275826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:24:58 compute-0 sudo[275826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:58 compute-0 sudo[275826]: pam_unix(sudo:session): session closed for user root
Nov 24 20:24:58 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:58 compute-0 ceph-mon[75677]: pgmap v1255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:58 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:24:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:24:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:24:58 compute-0 sudo[275851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:24:58 compute-0 sudo[275851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:58 compute-0 sudo[275851]: pam_unix(sudo:session): session closed for user root
Nov 24 20:24:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:58.447+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:58 compute-0 sudo[275876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:24:58 compute-0 sudo[275876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:24:58 compute-0 sudo[275876]: pam_unix(sudo:session): session closed for user root
Nov 24 20:24:58 compute-0 sudo[275901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:24:58 compute-0 sudo[275901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
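This is the provisioning step itself: cephadm invokes a containerized ceph-volume lvm batch over three pre-created logical volumes, with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group tying the new OSDs back to the service spec that requested them, and --no-systemd because cephadm manages the systemd units itself. A plausible reconstruction of the drive-group spec behind this command, applied with "ceph orch apply osd -i <spec>"; this is an assumption from the logged arguments, written as a Python dict rather than the usual YAML to keep one language across the examples:

    # Assumed reconstruction of the OSD service spec named in
    # CEPH_VOLUME_OSDSPEC_AFFINITY above.
    spec = {
        "service_type": "osd",
        "service_id": "default_drive_group",
        "placement": {"hosts": ["compute-0"]},
        "spec": {
            "data_devices": {
                "paths": [
                    "/dev/ceph_vg0/ceph_lv0",
                    "/dev/ceph_vg1/ceph_lv1",
                    "/dev/ceph_vg2/ceph_lv2",
                ],
            },
        },
    }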
Nov 24 20:24:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:58.722+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:59.007155485 +0000 UTC m=+0.085151070 container create 96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:24:59 compute-0 systemd[1]: Started libpod-conmon-96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd.scope.
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:58.970916999 +0000 UTC m=+0.048912624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:24:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:59.103769868 +0000 UTC m=+0.181765533 container init 96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:59.115291211 +0000 UTC m=+0.193286826 container start 96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:59.119352433 +0000 UTC m=+0.197348098 container attach 96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 24 20:24:59 compute-0 gallant_rubin[275983]: 167 167
Nov 24 20:24:59 compute-0 systemd[1]: libpod-96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd.scope: Deactivated successfully.
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:59.124017369 +0000 UTC m=+0.202012954 container died 96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-058a14baa74d68adaa9a448391aa6ed0443f1fce85fd2308688748fecd38abe9-merged.mount: Deactivated successfully.
Nov 24 20:24:59 compute-0 podman[275967]: 2025-11-24 20:24:59.178145714 +0000 UTC m=+0.256141319 container remove 96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:24:59 compute-0 systemd[1]: libpod-conmon-96119da345ababda2f608a4cb5c720534c36f1711769fe679c482972abad73fd.scope: Deactivated successfully.
Nov 24 20:24:59 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:24:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:24:59 compute-0 podman[276007]: 2025-11-24 20:24:59.40192913 +0000 UTC m=+0.070118461 container create bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:24:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:24:59.399+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:24:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:24:59 compute-0 systemd[1]: Started libpod-conmon-bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5.scope.
Nov 24 20:24:59 compute-0 podman[276007]: 2025-11-24 20:24:59.373400803 +0000 UTC m=+0.041590184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:24:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada1660b8d7c2f30effe5528bee541d81306721634591bef5ad733bbf761e5c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada1660b8d7c2f30effe5528bee541d81306721634591bef5ad733bbf761e5c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada1660b8d7c2f30effe5528bee541d81306721634591bef5ad733bbf761e5c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada1660b8d7c2f30effe5528bee541d81306721634591bef5ad733bbf761e5c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada1660b8d7c2f30effe5528bee541d81306721634591bef5ad733bbf761e5c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:24:59 compute-0 podman[276007]: 2025-11-24 20:24:59.516938313 +0000 UTC m=+0.185127724 container init bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:24:59 compute-0 podman[276007]: 2025-11-24 20:24:59.530327088 +0000 UTC m=+0.198516459 container start bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:24:59 compute-0 podman[276007]: 2025-11-24 20:24:59.534624106 +0000 UTC m=+0.202813487 container attach bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:24:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:24:59.722+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:24:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:00 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:00 compute-0 ceph-mon[75677]: pgmap v1256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:00.439+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:00 compute-0 compassionate_ganguly[276024]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:25:00 compute-0 compassionate_ganguly[276024]: --> relative data size: 1.0
Nov 24 20:25:00 compute-0 compassionate_ganguly[276024]: --> All data devices are unavailable
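"All data devices are unavailable" is ceph-volume reporting that every candidate LV was rejected, so this `lvm batch` run creates nothing; the `lvm list` dump further below shows the likely reason, since all three LVs already carry ceph.osd_id tags for osd.0 through osd.2. A hedged approximation of that availability test using stock LVM2 (`--reportformat json` needs a reasonably recent lvm2, and this is not cephadm's literal code path):

    import json
    import subprocess

    # List every LV with its tags; an LV tagged ceph.osd_id=... already
    # backs a BlueStore OSD and is skipped by `ceph-volume lvm batch`.
    report = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
        check=True, capture_output=True, text=True).stdout
    for lv in json.loads(report)["report"][0]["lv"]:
        state = "in use by ceph" if "ceph.osd_id=" in lv["lv_tags"] else "available"
        print(f"{lv['vg_name']}/{lv['lv_name']}: {state}")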
Nov 24 20:25:00 compute-0 systemd[1]: libpod-bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5.scope: Deactivated successfully.
Nov 24 20:25:00 compute-0 systemd[1]: libpod-bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5.scope: Consumed 1.074s CPU time.
Nov 24 20:25:00 compute-0 podman[276007]: 2025-11-24 20:25:00.666772059 +0000 UTC m=+1.334961430 container died bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ganguly, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ada1660b8d7c2f30effe5528bee541d81306721634591bef5ad733bbf761e5c6-merged.mount: Deactivated successfully.
Nov 24 20:25:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:00.733+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:00 compute-0 podman[276007]: 2025-11-24 20:25:00.754130198 +0000 UTC m=+1.422319559 container remove bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:25:00 compute-0 systemd[1]: libpod-conmon-bb4f6713004378bdae0e627216b2f2bc8effd4b5ff5bda027eb9ee4a8deee6c5.scope: Deactivated successfully.
Nov 24 20:25:00 compute-0 sudo[275901]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:00 compute-0 sudo[276067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:25:00 compute-0 sudo[276067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:00 compute-0 sudo[276067]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:01 compute-0 sudo[276092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:25:01 compute-0 sudo[276092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:01 compute-0 sudo[276092]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:01 compute-0 sudo[276117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:25:01 compute-0 sudo[276117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:01 compute-0 sudo[276117]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:01 compute-0 sudo[276142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:25:01 compute-0 sudo[276142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:01 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:01.427+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.542890527 +0000 UTC m=+0.053226671 container create 78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:25:01 compute-0 systemd[1]: Started libpod-conmon-78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585.scope.
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.514620607 +0000 UTC m=+0.024956831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.638676176 +0000 UTC m=+0.149012400 container init 78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.649717167 +0000 UTC m=+0.160053341 container start 78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.65314014 +0000 UTC m=+0.163476324 container attach 78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:25:01 compute-0 confident_hopper[276224]: 167 167
Nov 24 20:25:01 compute-0 systemd[1]: libpod-78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585.scope: Deactivated successfully.
Nov 24 20:25:01 compute-0 conmon[276224]: conmon 78b156236e562f70d649 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585.scope/container/memory.events
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.657834218 +0000 UTC m=+0.168170392 container died 78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fed6ac636b19c57cf1b467a79b1f8f9503382835a34ba5b7223840bf1759555c-merged.mount: Deactivated successfully.
Nov 24 20:25:01 compute-0 podman[276208]: 2025-11-24 20:25:01.705165918 +0000 UTC m=+0.215502092 container remove 78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hopper, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:25:01 compute-0 systemd[1]: libpod-conmon-78b156236e562f70d649247335ec233f9229f0ca704d48ceda4f627e3426a585.scope: Deactivated successfully.
Nov 24 20:25:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:01.747+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:01 compute-0 podman[276248]: 2025-11-24 20:25:01.969895719 +0000 UTC m=+0.078009936 container create 50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:25:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2017 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
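The health-check totals are consistent with the per-OSD reports interleaved above: 9 slow ops on osd.0 (pool 'vms') plus 19 on osd.1 (pool 'default.rgw.log') gives the 28 in SLOW_OPS, and an oldest op blocked for 2017 s places its arrival at roughly 19:51:24, about 34 minutes before this 20:25:01 message.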
Nov 24 20:25:02 compute-0 systemd[1]: Started libpod-conmon-50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3.scope.
Nov 24 20:25:02 compute-0 podman[276248]: 2025-11-24 20:25:01.939502921 +0000 UTC m=+0.047617168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:25:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ac6b8bc5bd0766b45ea7b2240634e0dbba6c7c18925ccb50c3e4aeb83036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ac6b8bc5bd0766b45ea7b2240634e0dbba6c7c18925ccb50c3e4aeb83036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ac6b8bc5bd0766b45ea7b2240634e0dbba6c7c18925ccb50c3e4aeb83036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1481ac6b8bc5bd0766b45ea7b2240634e0dbba6c7c18925ccb50c3e4aeb83036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:02 compute-0 podman[276248]: 2025-11-24 20:25:02.079009832 +0000 UTC m=+0.187124059 container init 50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:25:02 compute-0 podman[276248]: 2025-11-24 20:25:02.093403074 +0000 UTC m=+0.201517281 container start 50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:25:02 compute-0 podman[276248]: 2025-11-24 20:25:02.097876616 +0000 UTC m=+0.205990873 container attach 50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jennings, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:25:02 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:02 compute-0 ceph-mon[75677]: pgmap v1257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:02 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2017 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:02.424+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:02.705+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:02 compute-0 romantic_jennings[276265]: {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:     "0": [
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:         {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "devices": [
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "/dev/loop3"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             ],
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_name": "ceph_lv0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_size": "21470642176",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "name": "ceph_lv0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "tags": {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cluster_name": "ceph",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.crush_device_class": "",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.encrypted": "0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osd_id": "0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.type": "block",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.vdo": "0"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             },
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "type": "block",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "vg_name": "ceph_vg0"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:         }
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:     ],
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:     "1": [
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:         {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "devices": [
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "/dev/loop4"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             ],
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_name": "ceph_lv1",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_size": "21470642176",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "name": "ceph_lv1",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "tags": {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cluster_name": "ceph",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.crush_device_class": "",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.encrypted": "0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osd_id": "1",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.type": "block",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.vdo": "0"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             },
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "type": "block",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "vg_name": "ceph_vg1"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:         }
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:     ],
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:     "2": [
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:         {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "devices": [
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "/dev/loop5"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             ],
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_name": "ceph_lv2",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_size": "21470642176",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "name": "ceph_lv2",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "tags": {
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.cluster_name": "ceph",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.crush_device_class": "",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.encrypted": "0",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osd_id": "2",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.type": "block",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:                 "ceph.vdo": "0"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             },
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "type": "block",
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:             "vg_name": "ceph_vg2"
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:         }
Nov 24 20:25:02 compute-0 romantic_jennings[276265]:     ]
Nov 24 20:25:02 compute-0 romantic_jennings[276265]: }
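The JSON emitted by romantic_jennings above is the full `ceph-volume lvm list --format json` payload: a map from OSD id to a list of logical volumes, each carrying its backing devices and ceph.* tags. A minimal, hedged sketch that condenses a capture like this one into one line per OSD (reads the JSON from stdin; the field names are exactly those shown above):

    import json
    import sys

    # Expects the `ceph-volume lvm list --format json` document on stdin,
    # e.g. saved from the log above. Shape: {"<osd_id>": [ {lv entry}, ... ]}.
    listing = json.load(sys.stdin)
    for osd_id in sorted(listing, key=int):
        for vol in listing[osd_id]:
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"on {','.join(vol['devices'])} "
                  f"type={tags['ceph.type']} fsid={tags['ceph.osd_fsid']}")

For the capture above this prints, for example, "osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 type=block fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e", confirming that osd.0 through osd.2 already occupy the three LVs that the `lvm batch` run earlier refused to reuse.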
Nov 24 20:25:02 compute-0 systemd[1]: libpod-50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3.scope: Deactivated successfully.
Nov 24 20:25:02 compute-0 podman[276248]: 2025-11-24 20:25:02.887070616 +0000 UTC m=+0.995184793 container died 50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1481ac6b8bc5bd0766b45ea7b2240634e0dbba6c7c18925ccb50c3e4aeb83036-merged.mount: Deactivated successfully.
Nov 24 20:25:02 compute-0 podman[276248]: 2025-11-24 20:25:02.966687595 +0000 UTC m=+1.074801782 container remove 50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:25:02 compute-0 systemd[1]: libpod-conmon-50f22fb21a459ff86ac1d67408ecaf06e7767a68b49558dbd16ed0ee161ef0d3.scope: Deactivated successfully.
Nov 24 20:25:03 compute-0 sudo[276142]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:03 compute-0 sudo[276288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:25:03 compute-0 sudo[276288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:03 compute-0 sudo[276288]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:03 compute-0 sudo[276313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:25:03 compute-0 sudo[276313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:03 compute-0 sudo[276313]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:03 compute-0 sudo[276338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:25:03 compute-0 sudo[276338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:03 compute-0 sudo[276338]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:03 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:03.414+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:03 compute-0 sudo[276363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:25:03 compute-0 sudo[276363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:03.674+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:03 compute-0 podman[276430]: 2025-11-24 20:25:03.882110094 +0000 UTC m=+0.066377529 container create 8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:25:03 compute-0 systemd[1]: Started libpod-conmon-8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a.scope.
Nov 24 20:25:03 compute-0 podman[276430]: 2025-11-24 20:25:03.855741775 +0000 UTC m=+0.040009250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:25:03 compute-0 podman[276430]: 2025-11-24 20:25:03.982360935 +0000 UTC m=+0.166628430 container init 8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_taussig, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 20:25:03 compute-0 podman[276430]: 2025-11-24 20:25:03.994055744 +0000 UTC m=+0.178323179 container start 8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:25:03 compute-0 podman[276430]: 2025-11-24 20:25:03.998349181 +0000 UTC m=+0.182616686 container attach 8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:25:04 compute-0 festive_taussig[276447]: 167 167
Nov 24 20:25:04 compute-0 systemd[1]: libpod-8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a.scope: Deactivated successfully.
Nov 24 20:25:04 compute-0 podman[276430]: 2025-11-24 20:25:04.003262015 +0000 UTC m=+0.187529470 container died 8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf141d5e607e4a3378a1b10b425d86aad374225f5340a24a12800e2038da33b-merged.mount: Deactivated successfully.
Nov 24 20:25:04 compute-0 podman[276430]: 2025-11-24 20:25:04.057215574 +0000 UTC m=+0.241483009 container remove 8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_taussig, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:25:04 compute-0 systemd[1]: libpod-conmon-8089f49d08dff1f62b0713dec850babfef6e4ec6dc0c3d5c404c693bd1c1e10a.scope: Deactivated successfully.
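The create/init/start/attach/died/remove sequence above is the signature of cephadm launching a short-lived utility container and tearing it down within a fraction of a second. The single line of container output ("167 167") is consistent with a probe for the ceph user and group IDs inside the image; 167 is the uid/gid of the ceph user in upstream Ceph container images. A minimal sketch of the same one-shot pattern, assuming podman is on PATH and assuming the probe was a stat of /var/lib/ceph (the exact command is not shown in the log):

    import subprocess

    # Run a one-shot container that prints a uid/gid pair and removes
    # itself on exit, mirroring the create/start/attach/remove sequence
    # in the journal above. Image digest copied from the log; the stat
    # target is an assumption.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # Ceph images typically print "167 167"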
Nov 24 20:25:04 compute-0 podman[276471]: 2025-11-24 20:25:04.273682331 +0000 UTC m=+0.066515362 container create c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:25:04 compute-0 systemd[1]: Started libpod-conmon-c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426.scope.
Nov 24 20:25:04 compute-0 podman[276471]: 2025-11-24 20:25:04.24537505 +0000 UTC m=+0.038208121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:25:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cca807c1c99c24e4b7566d572d22926c9496cf1b5fe8855e0cfffed2225fc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cca807c1c99c24e4b7566d572d22926c9496cf1b5fe8855e0cfffed2225fc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cca807c1c99c24e4b7566d572d22926c9496cf1b5fe8855e0cfffed2225fc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cca807c1c99c24e4b7566d572d22926c9496cf1b5fe8855e0cfffed2225fc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
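The kernel's "supports timestamps until 2038 (0x7fffffff)" notices are informational: the XFS volumes backing these bind mounts store 32-bit inode timestamps (i.e. the bigtime feature appears not to be enabled), which overflow at Unix time 2^31-1. The cutoff can be verified directly; a quick check in Python:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit Unix timestamp, the
    # classic year-2038 limit the kernel is warning about above.
    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00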
Nov 24 20:25:04 compute-0 podman[276471]: 2025-11-24 20:25:04.381458318 +0000 UTC m=+0.174291389 container init c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:25:04 compute-0 podman[276471]: 2025-11-24 20:25:04.401829373 +0000 UTC m=+0.194662394 container start c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_faraday, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:25:04 compute-0 podman[276471]: 2025-11-24 20:25:04.406737886 +0000 UTC m=+0.199570967 container attach c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:25:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:04.405+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:04 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:04 compute-0 ceph-mon[75677]: pgmap v1258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:04.701+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:05.376+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:05 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:05 compute-0 confident_faraday[276488]: {
Nov 24 20:25:05 compute-0 confident_faraday[276488]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "osd_id": 2,
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "type": "bluestore"
Nov 24 20:25:05 compute-0 confident_faraday[276488]:     },
Nov 24 20:25:05 compute-0 confident_faraday[276488]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "osd_id": 1,
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "type": "bluestore"
Nov 24 20:25:05 compute-0 confident_faraday[276488]:     },
Nov 24 20:25:05 compute-0 confident_faraday[276488]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "osd_id": 0,
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:25:05 compute-0 confident_faraday[276488]:         "type": "bluestore"
Nov 24 20:25:05 compute-0 confident_faraday[276488]:     }
Nov 24 20:25:05 compute-0 confident_faraday[276488]: }
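The JSON emitted by confident_faraday is ceph-volume's inventory of the LVM-backed OSDs on this host: three bluestore OSDs (osd.0 through osd.2) on ceph_vg0/1/2, all belonging to cluster fsid 05e060a3-406b-57f0-89d2-ec35f5b09305, keyed by osd_uuid. A sketch that turns that payload into an osd_id-to-device map, assuming the JSON has been captured to a string (one entry reproduced here for brevity):

    import json

    # `payload` stands in for the JSON block logged above.
    payload = """{
        "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
            "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
            "type": "bluestore"
        }
    }"""

    osds = json.loads(payload)
    by_id = {v["osd_id"]: v["device"] for v in osds.values()}
    print(by_id)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}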
Nov 24 20:25:05 compute-0 systemd[1]: libpod-c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426.scope: Deactivated successfully.
Nov 24 20:25:05 compute-0 podman[276471]: 2025-11-24 20:25:05.562704149 +0000 UTC m=+1.355537170 container died c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_faraday, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:25:05 compute-0 systemd[1]: libpod-c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426.scope: Consumed 1.170s CPU time.
Nov 24 20:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-66cca807c1c99c24e4b7566d572d22926c9496cf1b5fe8855e0cfffed2225fc5-merged.mount: Deactivated successfully.
Nov 24 20:25:05 compute-0 podman[276471]: 2025-11-24 20:25:05.633809046 +0000 UTC m=+1.426642057 container remove c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:25:05 compute-0 systemd[1]: libpod-conmon-c86a98ea826caae8b4533428320b497ba165b47e6debc26e310f14a093b33426.scope: Deactivated successfully.
Nov 24 20:25:05 compute-0 sudo[276363]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:25:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:25:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:25:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:25:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:05.708+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0883caa4-71fc-4572-b230-ccb95460a968 does not exist
Nov 24 20:25:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b8f6f915-ba05-43e5-8a01-81e2a2716db5 does not exist
Nov 24 20:25:05 compute-0 sudo[276534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:25:05 compute-0 sudo[276534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:05 compute-0 sudo[276534]: pam_unix(sudo:session): session closed for user root
Nov 24 20:25:05 compute-0 sudo[276559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:25:05 compute-0 sudo[276559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:25:05 compute-0 sudo[276559]: pam_unix(sudo:session): session closed for user root
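The sudo audit triplets above (session opened, command, session closed for ceph-admin running /bin/true and then ls /etc/sysctl.d) are consistent with cephadm's host checks over SSH: a no-op /bin/true confirms passwordless sudo works before any real command is attempted. A sketch of that probe, run locally for brevity:

    import subprocess

    # "sudo -n" fails instead of prompting for a password, so a zero
    # exit status confirms passwordless sudo, like the /bin/true check
    # recorded in the journal above.
    ok = subprocess.run(["sudo", "-n", "/bin/true"]).returncode == 0
    print("passwordless sudo:", ok)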
Nov 24 20:25:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:06.400+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:06 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:06 compute-0 ceph-mon[75677]: pgmap v1259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:25:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:25:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:06.682+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
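The SLOW_OPS health check above aggregates the per-OSD warnings that dominate this section: 9 delayed ops on osd.0 (pool 'vms') plus 19 on osd.1 (pool 'default.rgw.log'), with the oldest blocked for over 2000 seconds. The same health state can be pulled programmatically; a sketch using the ceph CLI, assuming an admin or equivalently privileged keyring is readable:

    import json
    import subprocess

    # Ask the cluster for structured health and pull out SLOW_OPS,
    # matching the mon's "Health check update" lines above.
    raw = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(raw)
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])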
Nov 24 20:25:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:07.446+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:07 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:07 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:07.709+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:08 compute-0 ceph-mon[75677]: pgmap v1260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:08 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:08.484+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:08.754+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:08 compute-0 podman[276584]: 2025-11-24 20:25:08.862090643 +0000 UTC m=+0.087564806 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
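The health_status event above is podman running the healthcheck configured for ovn_metadata_agent (the config_data shows 'test': '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent) and reporting healthy with a failing streak of 0. The same state can be read back from the container; a sketch, assuming the container is still running under that name:

    import json
    import subprocess

    # Query the container's recorded health state, which the periodic
    # health_status journal events above are derived from.
    raw = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent",
         "--format", "{{json .State.Health}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(raw)
    print(health["Status"], health["FailingStreak"])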
Nov 24 20:25:09 compute-0 nova_compute[257476]: 2025-11-24 20:25:09.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:25:09.378 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:25:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:25:09.379 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:25:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:25:09.379 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:25:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:09 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:09.473+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:09.719+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:10.438+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:10 compute-0 ceph-mon[75677]: pgmap v1261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:10 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:10.739+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:11.434+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:11 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:11 compute-0 ceph-mon[75677]: pgmap v1262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:11.761+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2032 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:12 compute-0 nova_compute[257476]: 2025-11-24 20:25:12.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:12.419+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:12 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:12 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2032 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:12.738+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:13.441+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:13 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:13 compute-0 ceph-mon[75677]: pgmap v1263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:13.773+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:14.415+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:14 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:14.759+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:15 compute-0 nova_compute[257476]: 2025-11-24 20:25:15.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:15 compute-0 nova_compute[257476]: 2025-11-24 20:25:15.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:25:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:15.378+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:15 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:15 compute-0 ceph-mon[75677]: pgmap v1264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:15.767+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:16.370+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:25:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/239387216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:25:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:25:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/239387216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:25:16 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/239387216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:25:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/239387216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:25:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:16.790+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.146 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.149 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.183 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.184 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.184 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.184 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.185 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:25:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:17.383+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2037 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:17 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:17 compute-0 ceph-mon[75677]: pgmap v1265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:25:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324133023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.626 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
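Unlike the librados path, nova's resource tracker shells out to the ceph CLI via oslo.concurrency's processutils, which produces the paired "Running cmd (subprocess)" / "CMD ... returned: 0" trace seen at 20:25:17.185 and 20:25:17.626 (0.441s wall time). A sketch of the same call, assuming the keyring for --id openstack is readable on this host:

    from oslo_concurrency import processutils

    # Same command nova logs above; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit status.
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print(out[:80])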
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.694 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.696 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.701 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.702 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:25:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:17.820+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.910 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.913 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4903MB free_disk=59.954288482666016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.913 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:25:17 compute-0 nova_compute[257476]: 2025-11-24 20:25:17.914 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.038 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.038 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.039 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.039 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance db8c22d1-e16d-49f8-b4a5-ba8e87849ea3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.039 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.040 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.122 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:25:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:18.397+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:18 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:18 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2037 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3324133023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:25:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:25:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005406772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.597 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.605 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.629 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.632 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:25:18 compute-0 nova_compute[257476]: 2025-11-24 20:25:18.632 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:25:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:18.867+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:19.348+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:19 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:19 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4005406772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:25:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:19 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:19 compute-0 ceph-mon[75677]: pgmap v1266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.629 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.659 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.659 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.659 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.687 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.688 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.689 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.689 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.689 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.690 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:19 compute-0 nova_compute[257476]: 2025-11-24 20:25:19.691 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:19 compute-0 podman[276647]: 2025-11-24 20:25:19.878685198 +0000 UTC m=+0.100313024 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:25:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:19.905+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:20 compute-0 nova_compute[257476]: 2025-11-24 20:25:20.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:25:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:20.305+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:20 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:20.891+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:21.259+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:21 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:21 compute-0 ceph-mon[75677]: pgmap v1267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:21.911+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:22.256+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:22 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:22.956+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:23.275+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:23.908+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:24 compute-0 podman[276667]: 2025-11-24 20:25:24.112426209 +0000 UTC m=+0.338626377 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:25:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:24 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:24 compute-0 ceph-mon[75677]: pgmap v1268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:24.283+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:25:24
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Nov 24 20:25:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:25:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:24.869+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:25 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:25.330+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:25.829+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:26.335+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:26 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:26 compute-0 ceph-mon[75677]: pgmap v1269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:26.791+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:27.351+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:27 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:27 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:27 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:27.757+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:27 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 20:25:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:28.317+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:28.740+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:28 compute-0 ceph-mon[75677]: pgmap v1270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:28 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:29.344+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:29.710+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:29 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:29 compute-0 ceph-mon[75677]: pgmap v1271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:30.371+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:30.745+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:31.393+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:31.732+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:31 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:31 compute-0 ceph-mon[75677]: pgmap v1272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:32.408+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:32.702+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:32 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:32 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:33.440+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:33.658+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:33 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:33 compute-0 ceph-mon[75677]: pgmap v1273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:34.456+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:34.689+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005666567973888129 of space, bias 1.0, pg target 0.16999703921664386 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:25:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:25:34 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:35.441+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:35.655+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:35 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:35 compute-0 ceph-mon[75677]: pgmap v1274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:36.452+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:36.644+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:36 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:37.495+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:37.604+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:37 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:37 compute-0 ceph-mon[75677]: pgmap v1275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:38.515+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:38.627+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:38 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:38 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
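
Taken together, osd.0 contributes 9 delayed requests against pool 'vms' and osd.1 contributes 19 against 'default.rgw.log'; the mon folds both into the SLOW_OPS health check (9 + 19 = 28 slow ops), and the "blocked for N sec" figure advances roughly with wall-clock time (2057 → 2062 → 2067 over the next fifteen seconds), i.e. the oldest op has been stuck since about 19:51. A small sketch, assuming the journal text is available as a plain string, that tallies the same numbers:

    import re

    # Illustrative helper, not ceph tooling: tally per-OSD slow-op reports and
    # the mon's aggregate from journal text shaped like the lines above.
    REPORT_RE = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")
    HEALTH_RE = re.compile(r"(\d+) slow ops, oldest one blocked for (\d+) sec")

    def slow_op_summary(journal_text: str):
        latest = {}   # daemon -> most recent slow-op count
        blocked = 0   # largest "blocked for N sec" value seen
        for line in journal_text.splitlines():
            if (m := REPORT_RE.search(line)):
                latest[m.group(1)] = int(m.group(2))
            if (m := HEALTH_RE.search(line)):
                blocked = max(blocked, int(m.group(2)))
        return latest, sum(latest.values()), blocked

Over this excerpt it returns ({'osd.0': 9, 'osd.1': 19}, 28, 2067), consistent with the mon's health line above.
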
Nov 24 20:25:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:39.518+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:39.624+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:39 compute-0 podman[276694]: 2025-11-24 20:25:39.867384462 +0000 UTC m=+0.077256625 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
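
The podman health_status events interleaved with the ceph noise carry their result as key=value attributes: container name, outcome, and failing streak, plus the full config_data (whose 'healthcheck' entry shows the probe is simply the mounted /openstack/healthcheck script). A sketch for skimming those results out of journal text like this (the patterns assume exactly the attribute layout shown here):

    import re

    # Matches 'container health_status <64-hex id> (<key=value attributes>)'.
    HEALTH_EVENT_RE = re.compile(r"container health_status ([0-9a-f]{64}) \((.*)\)\s*$")

    def health_events(journal_text: str):
        for line in journal_text.splitlines():
            m = HEALTH_EVENT_RE.search(line)
            if not m:
                continue
            attrs = m.group(2)
            name = re.search(r"\bname=([^,)]+)", attrs)
            status = re.search(r"\bhealth_status=([^,)]+)", attrs)
            streak = re.search(r"\bhealth_failing_streak=(\d+)", attrs)
            yield (name.group(1) if name else None,
                   status.group(1) if status else None,
                   int(streak.group(1)) if streak else 0)

Here it yields ('ovn_metadata_agent', 'healthy', 0); the same pattern fits the multipathd and ovn_controller events that follow.
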
Nov 24 20:25:40 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:40 compute-0 ceph-mon[75677]: pgmap v1276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.100735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015940100787, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1301, "num_deletes": 251, "total_data_size": 1459956, "memory_usage": 1487576, "flush_reason": "Manual Compaction"}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015940126282, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1436743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34793, "largest_seqno": 36093, "table_properties": {"data_size": 1430796, "index_size": 2957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16428, "raw_average_key_size": 21, "raw_value_size": 1417390, "raw_average_value_size": 1869, "num_data_blocks": 129, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015857, "oldest_key_time": 1764015857, "file_creation_time": 1764015940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 25624 microseconds, and 4706 cpu microseconds.
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.126331) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1436743 bytes OK
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.126383) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.130063) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.130078) EVENT_LOG_v1 {"time_micros": 1764015940130073, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.130101) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1453646, prev total WAL file size 1453646, number of live WAL files 2.
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.130887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1403KB)], [71(8125KB)]
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015940130997, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9757182, "oldest_snapshot_seqno": -1}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 9050 keys, 8225619 bytes, temperature: kUnknown
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015940196315, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8225619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8174091, "index_size": 27800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 243937, "raw_average_key_size": 26, "raw_value_size": 8017337, "raw_average_value_size": 885, "num_data_blocks": 1062, "num_entries": 9050, "num_filter_entries": 9050, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764015940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.196853) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8225619 bytes
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.202109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 9564, records dropped: 514 output_compression: NoCompression
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.202153) EVENT_LOG_v1 {"time_micros": 1764015940202133, "job": 40, "event": "compaction_finished", "compaction_time_micros": 65435, "compaction_time_cpu_micros": 41408, "output_level": 6, "num_output_files": 1, "total_output_size": 8225619, "num_input_records": 9564, "num_output_records": 9050, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015940203057, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764015940207240, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.130713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.207353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.207361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.207365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.207368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:25:40 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:25:40.207380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
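
The rocksdb block above is the mon compacting its own store: job 39 flushes a ~1.4 MB memtable to L0 table #73, then job 40 manually compacts #73 together with L6 table #71 into #74 (8225619 bytes; 9564 records in, 514 dropped) and deletes both inputs plus the old WAL. Each step is narrated twice, once as human-readable text and once as an EVENT_LOG_v1 JSON payload; a minimal sketch for recovering the JSON side (illustrative only):

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(journal_text: str):
        # Yield the machine-readable event payloads embedded in lines above.
        for line in journal_text.splitlines():
            i = line.find(MARKER)
            if i != -1:
                yield json.loads(line[i + len(MARKER):])

    def compaction_report(journal_text: str):
        for ev in rocksdb_events(journal_text):
            if ev.get("event") == "compaction_finished":
                dropped = ev["num_input_records"] - ev["num_output_records"]
                print(ev["job"], ev["num_output_records"], dropped,
                      ev["total_output_size"])

For job 40 this prints "40 9050 514 8225619", matching the compaction summary line above.
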
Nov 24 20:25:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:40.477+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:25:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:25:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:25:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:25:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:25:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:40.646+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:41 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:41.508+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:41.676+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:42 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:42 compute-0 ceph-mon[75677]: pgmap v1277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
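
Throughout this window the mgr's pgmap digest (republished by the mon a second or two later) is static: 2 of 305 PGs sit in active+clean+laggy while everything else is active+clean, consistent with the two OSDs holding blocked ops. A sketch that tallies the state counts out of a pgmap line (regex tailored to the exact format above):

    import re

    PGMAP_RE = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")

    def pg_states(line: str):
        m = PGMAP_RE.search(line)
        if not m:
            return None
        version, total = int(m.group(1)), int(m.group(2))
        states = {}
        for part in m.group(3).split(", "):
            count, state = part.split(" ", 1)   # e.g. "2 active+clean+laggy"
            states[state] = int(count)
        return version, total, states

On the line above it returns (1277, 305, {'active+clean+laggy': 2, 'active+clean': 303}).
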
Nov 24 20:25:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:42.557+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:42.717+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:43 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:43.580+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:43.746+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:44 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:44 compute-0 ceph-mon[75677]: pgmap v1278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:44.614+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:44.746+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:45 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:45.605+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:45.777+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:46 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:46 compute-0 ceph-mon[75677]: pgmap v1279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:46.625+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:46.808+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2062 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:47 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:47 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2062 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:47.634+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:47.810+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:48 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:48 compute-0 ceph-mon[75677]: pgmap v1280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:48.648+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:48.765+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:49 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:49.657+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:49.765+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:50 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:50 compute-0 ceph-mon[75677]: pgmap v1281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:50.665+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:50.724+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:50 compute-0 podman[276715]: 2025-11-24 20:25:50.884114687 +0000 UTC m=+0.113450388 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 24 20:25:51 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:51.649+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:51.683+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2067 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:52 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:52 compute-0 ceph-mon[75677]: pgmap v1282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:52 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2067 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:52.622+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:52.642+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:53 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:53 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:53 compute-0 ceph-mon[75677]: pgmap v1283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:53.576+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:53.641+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:25:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:25:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:25:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:25:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:25:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:25:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:54.602+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:54.604+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:54 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:54 compute-0 podman[276735]: 2025-11-24 20:25:54.969047071 +0000 UTC m=+0.190740697 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 20:25:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:55.576+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:55.599+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:55 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:55 compute-0 ceph-mon[75677]: pgmap v1284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:56.566+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:56.581+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:56 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:25:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:57.552+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:57.601+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:58 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:58 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:25:58 compute-0 ceph-mon[75677]: pgmap v1285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:58.532+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:58.587+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:59 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:25:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:25:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:25:59.505+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:25:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:25:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:25:59.548+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:25:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:00 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:00 compute-0 ceph-mon[75677]: pgmap v1286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:00.461+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:00.590+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:01 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:01.493+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:01.613+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:02 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:02 compute-0 ceph-mon[75677]: pgmap v1287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:02 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:02.480+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:02.577+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:03 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:03 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:03.469+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:03.583+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:04 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:04 compute-0 ceph-mon[75677]: pgmap v1288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:04.426+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:04.538+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:05 compute-0 nova_compute[257476]: 2025-11-24 20:26:05.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:05 compute-0 nova_compute[257476]: 2025-11-24 20:26:05.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 20:26:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:05.395+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:05 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:05.581+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:06 compute-0 sudo[276762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:06 compute-0 sudo[276762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:06 compute-0 sudo[276762]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:06 compute-0 sudo[276787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:26:06 compute-0 sudo[276787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:06 compute-0 sudo[276787]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:06 compute-0 sudo[276812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:06 compute-0 sudo[276812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:06 compute-0 sudo[276812]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:06 compute-0 sudo[276837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:26:06 compute-0 sudo[276837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:06.432+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:06 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:06 compute-0 ceph-mon[75677]: pgmap v1289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:06.541+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:06 compute-0 sudo[276837]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:26:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1f24ab82-ab94-4649-9646-0ffa61fb3534 does not exist
Nov 24 20:26:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2666ed4d-0b57-40ab-99dd-7be09eb5cf6f does not exist
Nov 24 20:26:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d41aa68e-ee65-4a9a-9995-c93b026eab23 does not exist
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2082 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:07 compute-0 sudo[276892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:07 compute-0 sudo[276892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:07 compute-0 sudo[276892]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:07 compute-0 sudo[276917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:26:07 compute-0 sudo[276917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:07 compute-0 sudo[276917]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:07 compute-0 sudo[276942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:07 compute-0 sudo[276942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:07 compute-0 sudo[276942]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:07 compute-0 sudo[276967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:26:07 compute-0 sudo[276967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:07.407+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:07.504+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:07 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:26:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:26:07 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2082 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:07 compute-0 ceph-mon[75677]: pgmap v1290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:07 compute-0 podman[277031]: 2025-11-24 20:26:07.88388717 +0000 UTC m=+0.077169036 container create 11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:26:07 compute-0 systemd[1]: Started libpod-conmon-11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872.scope.
Nov 24 20:26:07 compute-0 podman[277031]: 2025-11-24 20:26:07.855485505 +0000 UTC m=+0.048767401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:26:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:26:08 compute-0 podman[277031]: 2025-11-24 20:26:08.071051879 +0000 UTC m=+0.264333745 container init 11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendel, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:26:08 compute-0 podman[277031]: 2025-11-24 20:26:08.083393685 +0000 UTC m=+0.276675551 container start 11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:26:08 compute-0 stoic_mendel[277047]: 167 167
Nov 24 20:26:08 compute-0 systemd[1]: libpod-11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872.scope: Deactivated successfully.
Nov 24 20:26:08 compute-0 podman[277031]: 2025-11-24 20:26:08.162252528 +0000 UTC m=+0.355534404 container attach 11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:26:08 compute-0 podman[277031]: 2025-11-24 20:26:08.163891673 +0000 UTC m=+0.357173539 container died 11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 20:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ad0f18a1b5863d796d45228f39a7ef0c0cb7b03325e15c607d74f08019dad8e-merged.mount: Deactivated successfully.
Nov 24 20:26:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:08.369+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:08.484+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:08 compute-0 podman[277031]: 2025-11-24 20:26:08.530461348 +0000 UTC m=+0.723743214 container remove 11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:26:08 compute-0 systemd[1]: libpod-conmon-11bf1a8469c30f819794b7ccdf259470abfc38d3ed73112625bf2193ac82c872.scope: Deactivated successfully.
Nov 24 20:26:08 compute-0 podman[277071]: 2025-11-24 20:26:08.80968653 +0000 UTC m=+0.083705966 container create 20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:26:08 compute-0 systemd[1]: Started libpod-conmon-20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e.scope.
Nov 24 20:26:08 compute-0 podman[277071]: 2025-11-24 20:26:08.773160703 +0000 UTC m=+0.047180189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:26:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594575c663077194fb4ce9a8c38f6cb083e221917b7a00ae3896e5a90f0dfb2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594575c663077194fb4ce9a8c38f6cb083e221917b7a00ae3896e5a90f0dfb2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594575c663077194fb4ce9a8c38f6cb083e221917b7a00ae3896e5a90f0dfb2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594575c663077194fb4ce9a8c38f6cb083e221917b7a00ae3896e5a90f0dfb2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594575c663077194fb4ce9a8c38f6cb083e221917b7a00ae3896e5a90f0dfb2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:08 compute-0 podman[277071]: 2025-11-24 20:26:08.925940832 +0000 UTC m=+0.199960318 container init 20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:26:08 compute-0 podman[277071]: 2025-11-24 20:26:08.94306637 +0000 UTC m=+0.217085806 container start 20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:26:08 compute-0 podman[277071]: 2025-11-24 20:26:08.948784126 +0000 UTC m=+0.222803602 container attach 20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:26:09 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:09.378+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:26:09.381 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:26:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:26:09.382 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:26:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:26:09.382 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:26:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:09.515+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:10 compute-0 gifted_bohr[277088]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:26:10 compute-0 gifted_bohr[277088]: --> relative data size: 1.0
Nov 24 20:26:10 compute-0 gifted_bohr[277088]: --> All data devices are unavailable
Nov 24 20:26:10 compute-0 systemd[1]: libpod-20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e.scope: Deactivated successfully.
Nov 24 20:26:10 compute-0 systemd[1]: libpod-20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e.scope: Consumed 1.137s CPU time.
Nov 24 20:26:10 compute-0 podman[277071]: 2025-11-24 20:26:10.125621156 +0000 UTC m=+1.399640592 container died 20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-594575c663077194fb4ce9a8c38f6cb083e221917b7a00ae3896e5a90f0dfb2c-merged.mount: Deactivated successfully.
Nov 24 20:26:10 compute-0 podman[277071]: 2025-11-24 20:26:10.220743693 +0000 UTC m=+1.494763089 container remove 20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:26:10 compute-0 systemd[1]: libpod-conmon-20961cb9b6ee5d1d83ed5c6b2d76f933fd96c01924a606e9dfd42dfd3c976b2e.scope: Deactivated successfully.
Nov 24 20:26:10 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:10 compute-0 podman[277118]: 2025-11-24 20:26:10.246335461 +0000 UTC m=+0.079390618 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 20:26:10 compute-0 sudo[276967]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:10 compute-0 sudo[277145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:10 compute-0 sudo[277145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:10 compute-0 sudo[277145]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:10.364+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
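The osd.0 lines above repeat roughly once per second while the ops stay blocked; the osd_op payload carries the client, placement group, object name, and op list. A sketch of pulling those fields out with a regex fitted to this log's layout (an assumption about formatting, not a stable Ceph interface):

```python
# Extract fields from the recurring "get_health_metrics reporting N slow ops"
# message. The sample line is copied from the log above.
import re

line = ("osd.0 138 get_health_metrics reporting 9 slow ops, oldest is "
        "osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:"
        "head [omap-get-vals in=16b] snapc 0=[] "
        "ondisk+read+known_if_redirected+supports_pool_eio e78)")

m = re.search(
    r"osd\.(?P<osd>\d+) \d+ get_health_metrics reporting (?P<n>\d+) slow ops, "
    r"oldest is osd_op\((?P<client>client\.\S+) (?P<pg>\S+) (?P<obj>\S+) "
    r"\[(?P<ops>[^\]]+)\]", line)
print(m.groupdict())
# {'osd': '0', 'n': '9', 'client': 'client.14138.0:17', 'pg': '2.11',
#  'obj': '2:88c1567c:::rbd_trash_purge_schedule:head',
#  'ops': 'omap-get-vals in=16b'}
```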
Nov 24 20:26:10 compute-0 sudo[277171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:26:10 compute-0 sudo[277171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:10 compute-0 sudo[277171]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:10 compute-0 sudo[277196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:10 compute-0 sudo[277196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:10 compute-0 sudo[277196]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:10.532+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:10 compute-0 sudo[277221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:26:10 compute-0 sudo[277221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:11.026718781 +0000 UTC m=+0.080463057 container create ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:10.977351593 +0000 UTC m=+0.031095849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:26:11 compute-0 systemd[1]: Started libpod-conmon-ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0.scope.
Nov 24 20:26:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:11.158157349 +0000 UTC m=+0.211901645 container init ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:11.165496389 +0000 UTC m=+0.219240655 container start ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jones, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:26:11 compute-0 nova_compute[257476]: 2025-11-24 20:26:11.163 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:11.169775906 +0000 UTC m=+0.223520192 container attach ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:26:11 compute-0 gifted_jones[277305]: 167 167
Nov 24 20:26:11 compute-0 systemd[1]: libpod-ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0.scope: Deactivated successfully.
Nov 24 20:26:11 compute-0 conmon[277305]: conmon ebfa17180e0ae2d547e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0.scope/container/memory.events
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:11.174948297 +0000 UTC m=+0.228692593 container died ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf042544a91a9caa0225aa1c63519c3593256647770b74df728c3cefe8e74b93-merged.mount: Deactivated successfully.
Nov 24 20:26:11 compute-0 podman[277289]: 2025-11-24 20:26:11.227084721 +0000 UTC m=+0.280828997 container remove ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jones, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 20:26:11 compute-0 systemd[1]: libpod-conmon-ebfa17180e0ae2d547e5f04fb3f719bd840a007147282f38c99d6e918aab5ae0.scope: Deactivated successfully.
Nov 24 20:26:11 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:11 compute-0 ceph-mon[75677]: pgmap v1291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:11 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:11.402+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:11 compute-0 podman[277327]: 2025-11-24 20:26:11.45827659 +0000 UTC m=+0.066099295 container create 31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:26:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:11 compute-0 systemd[1]: Started libpod-conmon-31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720.scope.
Nov 24 20:26:11 compute-0 podman[277327]: 2025-11-24 20:26:11.421488576 +0000 UTC m=+0.029311341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:26:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4067f7939e9c441f76241d7f7949d5ab49fd028486e87b71a7eb8abf508e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4067f7939e9c441f76241d7f7949d5ab49fd028486e87b71a7eb8abf508e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4067f7939e9c441f76241d7f7949d5ab49fd028486e87b71a7eb8abf508e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4067f7939e9c441f76241d7f7949d5ab49fd028486e87b71a7eb8abf508e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:11.556+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:11 compute-0 podman[277327]: 2025-11-24 20:26:11.573905986 +0000 UTC m=+0.181728761 container init 31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:26:11 compute-0 podman[277327]: 2025-11-24 20:26:11.595227998 +0000 UTC m=+0.203050673 container start 31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:26:11 compute-0 podman[277327]: 2025-11-24 20:26:11.59969059 +0000 UTC m=+0.207513345 container attach 31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:26:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2087 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:12 compute-0 nova_compute[257476]: 2025-11-24 20:26:12.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:12 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2087 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
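The SLOW_OPS health check the monitor raises here ("28 slow ops ... daemons [osd.0,osd.1]") is also available in structured form. A sketch, assuming the ceph CLI and an admin keyring are reachable from this host:

```python
# Pull the structured health report and print any SLOW_OPS detail; the
# summary message should match the line logged above. Structure of the
# JSON ("checks" -> check name -> summary/detail) is the standard health
# report layout, but treat field names as an assumption to verify.
import json
import subprocess

raw = subprocess.check_output(["ceph", "health", "detail", "--format", "json"])
health = json.loads(raw)

slow = health.get("checks", {}).get("SLOW_OPS")
if slow:
    print(slow["summary"]["message"])
    for item in slow.get("detail", []):
        print(" -", item["message"])
else:
    print("no SLOW_OPS check active")
```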
Nov 24 20:26:12 compute-0 confident_lamarr[277343]: {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:     "0": [
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:         {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "devices": [
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "/dev/loop3"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             ],
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_name": "ceph_lv0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_size": "21470642176",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "name": "ceph_lv0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "tags": {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cluster_name": "ceph",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.crush_device_class": "",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.encrypted": "0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osd_id": "0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.type": "block",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.vdo": "0"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             },
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "type": "block",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "vg_name": "ceph_vg0"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:         }
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:     ],
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:     "1": [
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:         {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "devices": [
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "/dev/loop4"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             ],
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_name": "ceph_lv1",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_size": "21470642176",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "name": "ceph_lv1",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "tags": {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cluster_name": "ceph",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.crush_device_class": "",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.encrypted": "0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osd_id": "1",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.type": "block",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.vdo": "0"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             },
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "type": "block",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "vg_name": "ceph_vg1"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:         }
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:     ],
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:     "2": [
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:         {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "devices": [
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "/dev/loop5"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             ],
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_name": "ceph_lv2",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_size": "21470642176",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "name": "ceph_lv2",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "tags": {
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.cluster_name": "ceph",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.crush_device_class": "",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.encrypted": "0",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osd_id": "2",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.type": "block",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:                 "ceph.vdo": "0"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             },
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "type": "block",
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:             "vg_name": "ceph_vg2"
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:         }
Nov 24 20:26:12 compute-0 confident_lamarr[277343]:     ]
Nov 24 20:26:12 compute-0 confident_lamarr[277343]: }
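The JSON block above is the result of the `ceph-volume ... lvm list --format json` call dispatched through cephadm at 20:26:10: one key per OSD id, each entry carrying the LV path, backing device, and ceph.* tags. A minimal sketch that condenses it to one line per OSD (the pipe in the usage comment is illustrative):

```python
# Summarize `ceph-volume lvm list --format json` output, e.g.
#   ... ceph-volume --fsid <fsid> -- lvm list --format json | python3 lvm_summary.py
import json
import sys

lvm = json.load(sys.stdin)
for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"(osd_fsid={tags['ceph.osd_fsid']})")
```

Against the payload above this prints three lines, osd.0 through osd.2, each backed by one loop device (/dev/loop3 to /dev/loop5).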
Nov 24 20:26:12 compute-0 systemd[1]: libpod-31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720.scope: Deactivated successfully.
Nov 24 20:26:12 compute-0 podman[277327]: 2025-11-24 20:26:12.376085501 +0000 UTC m=+0.983908206 container died 31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:26:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:12.382+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e4067f7939e9c441f76241d7f7949d5ab49fd028486e87b71a7eb8abf508e76-merged.mount: Deactivated successfully.
Nov 24 20:26:12 compute-0 podman[277327]: 2025-11-24 20:26:12.450297816 +0000 UTC m=+1.058120491 container remove 31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:26:12 compute-0 systemd[1]: libpod-conmon-31a49389a4f17d220ac5616130988af8413f049681615da3873729cb70a6d720.scope: Deactivated successfully.
Nov 24 20:26:12 compute-0 sudo[277221]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:12.551+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:12 compute-0 sudo[277364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:12 compute-0 sudo[277364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:12 compute-0 sudo[277364]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:12 compute-0 sudo[277389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:26:12 compute-0 sudo[277389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:12 compute-0 sudo[277389]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:12 compute-0 sudo[277414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:12 compute-0 sudo[277414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:12 compute-0 sudo[277414]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:12 compute-0 sudo[277439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:26:12 compute-0 sudo[277439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:13 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:13 compute-0 ceph-mon[75677]: pgmap v1292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.333567884 +0000 UTC m=+0.056438521 container create a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:26:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:13.351+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:13 compute-0 systemd[1]: Started libpod-conmon-a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0.scope.
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.30445544 +0000 UTC m=+0.027326137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:26:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.442643381 +0000 UTC m=+0.165514058 container init a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.455236256 +0000 UTC m=+0.178106853 container start a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:26:13 compute-0 busy_lamport[277523]: 167 167
Nov 24 20:26:13 compute-0 systemd[1]: libpod-a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0.scope: Deactivated successfully.
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.467211882 +0000 UTC m=+0.190082649 container attach a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.468307243 +0000 UTC m=+0.191177870 container died a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:26:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-af8c8ffecc96705f0683038d0adb6a9dee98787b7ca20910f02897a85b8d72bb-merged.mount: Deactivated successfully.
Nov 24 20:26:13 compute-0 podman[277506]: 2025-11-24 20:26:13.549847568 +0000 UTC m=+0.272718195 container remove a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:26:13 compute-0 systemd[1]: libpod-conmon-a20b60529b2022b3ef59dbb889dac4a2fbe69cb9105242fa97ddb530b68571a0.scope: Deactivated successfully.
Nov 24 20:26:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:13.596+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:13 compute-0 podman[277547]: 2025-11-24 20:26:13.801167207 +0000 UTC m=+0.065134659 container create 3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:26:13 compute-0 systemd[1]: Started libpod-conmon-3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db.scope.
Nov 24 20:26:13 compute-0 podman[277547]: 2025-11-24 20:26:13.774386247 +0000 UTC m=+0.038353739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:26:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533e30b18982ec28fa7eaa311df05fcc49002e4859794efc71a48fc3dabacf35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533e30b18982ec28fa7eaa311df05fcc49002e4859794efc71a48fc3dabacf35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533e30b18982ec28fa7eaa311df05fcc49002e4859794efc71a48fc3dabacf35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533e30b18982ec28fa7eaa311df05fcc49002e4859794efc71a48fc3dabacf35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:26:13 compute-0 podman[277547]: 2025-11-24 20:26:13.932909423 +0000 UTC m=+0.196876905 container init 3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:26:13 compute-0 podman[277547]: 2025-11-24 20:26:13.949894226 +0000 UTC m=+0.213861668 container start 3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatelet, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:26:13 compute-0 podman[277547]: 2025-11-24 20:26:13.954625546 +0000 UTC m=+0.218593048 container attach 3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 20:26:14 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:14 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:14.307+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:14.599+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]: {
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "osd_id": 2,
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "type": "bluestore"
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:     },
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "osd_id": 1,
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "type": "bluestore"
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:     },
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "osd_id": 0,
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:         "type": "bluestore"
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]:     }
Nov 24 20:26:15 compute-0 frosty_chatelet[277564]: }
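Unlike the lvm listing, the `raw list` output above is keyed by osd_uuid, which corresponds to ceph.osd_fsid in the LV tags. A sketch cross-checking the two payloads, assuming both JSON blocks from the log have been saved to the (illustrative) files named below:

```python
# Cross-check `ceph-volume lvm list` against `ceph-volume raw list`:
# every OSD in the lvm listing should appear in the raw listing under its
# osd_fsid, with a matching osd_id and a bluestore device node.
import json

with open("lvm_list.json") as f:   # illustrative file names; save the two
    lvm = json.load(f)             # JSON payloads from the log first
with open("raw_list.json") as f:
    raw = json.load(f)

for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    fsid = lvs[0]["tags"]["ceph.osd_fsid"]
    entry = raw[fsid]
    assert entry["osd_id"] == int(osd_id)
    print(f"osd.{osd_id}: {fsid} -> {entry['device']} ({entry['type']})")
```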
Nov 24 20:26:15 compute-0 systemd[1]: libpod-3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db.scope: Deactivated successfully.
Nov 24 20:26:15 compute-0 podman[277547]: 2025-11-24 20:26:15.051933016 +0000 UTC m=+1.315900438 container died 3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:26:15 compute-0 systemd[1]: libpod-3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db.scope: Consumed 1.108s CPU time.
Nov 24 20:26:15 compute-0 nova_compute[257476]: 2025-11-24 20:26:15.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:15 compute-0 nova_compute[257476]: 2025-11-24 20:26:15.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-533e30b18982ec28fa7eaa311df05fcc49002e4859794efc71a48fc3dabacf35-merged.mount: Deactivated successfully.
Nov 24 20:26:15 compute-0 podman[277547]: 2025-11-24 20:26:15.284940125 +0000 UTC m=+1.548907597 container remove 3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:26:15 compute-0 systemd[1]: libpod-conmon-3d5e32a2f44090f77f5f48c3d6f17a523de16e43e3ef75a7984aaae13a3049db.scope: Deactivated successfully.
Nov 24 20:26:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:15.302+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:15 compute-0 ceph-mon[75677]: pgmap v1293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:15 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:15 compute-0 sudo[277439]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:26:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:26:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:26:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:26:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev aa680a52-cd8d-47a8-92a5-ccd46f2c49ba does not exist
Nov 24 20:26:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 91c04c16-641d-4f64-ac15-df58206101da does not exist
Nov 24 20:26:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:15 compute-0 sudo[277612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:26:15 compute-0 sudo[277612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:15 compute-0 sudo[277612]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:15 compute-0 sudo[277637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:26:15 compute-0 sudo[277637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:26:15 compute-0 sudo[277637]: pam_unix(sudo:session): session closed for user root
Nov 24 20:26:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:15.597+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:16.331+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:16 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:26:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:26:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:26:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/178199543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:26:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:26:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/178199543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:26:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:16.644+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2092 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
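The SLOW_OPS health check above (28 slow ops across osd.0 and osd.1, the oldest blocked for over 2000 seconds) can be examined op-by-op on the OSD admin socket. A minimal sketch, assuming a cephadm shell on compute-0 and the JSON field names used by recent Ceph releases:

    import json
    import subprocess

    # "ceph daemon osd.<id> dump_ops_in_flight" is a standard admin-socket
    # command; it must run where the OSD's socket is visible (e.g. inside
    # "cephadm shell" on this host).
    raw = subprocess.check_output(
        ["ceph", "daemon", "osd.0", "dump_ops_in_flight"], text=True
    )
    report = json.loads(raw)
    print(report["num_ops"], "ops in flight")
    for op in report["ops"]:
        # Each entry carries the osd_op(...) description and its age,
        # matching the "oldest is osd_op(...)" text in the OSD lines above.
        print(op.get("age"), op["description"])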
Nov 24 20:26:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:17 compute-0 nova_compute[257476]: 2025-11-24 20:26:17.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:17 compute-0 nova_compute[257476]: 2025-11-24 20:26:17.152 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 20:26:17 compute-0 nova_compute[257476]: 2025-11-24 20:26:17.180 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 20:26:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:17.353+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:17 compute-0 ceph-mon[75677]: pgmap v1294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:17 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/178199543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:26:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/178199543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:26:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:17 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2092 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:17.620+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.181 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.181 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.209 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.210 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.210 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.210 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.210 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:26:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:18.403+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:18 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:26:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21238504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:26:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:18.669+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.690 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
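The resource audit above shells out to ceph df --format=json with the client.openstack identity to size the RBD pools backing Nova. A minimal sketch of consuming that output (the stats/pools field names follow recent Ceph releases and should be verified against this deployment):

    import json
    import subprocess

    # Same invocation the resource tracker logs above; --id/--conf select
    # the client.openstack keyring and this cluster's config.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], text=True
    )
    df = json.loads(out)
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print("cluster: %d GiB free of %d GiB" % (avail >> 30, total >> 30))
    for pool in df["pools"]:
        # Per-pool usage; 'vms' is the pool the slow-request lines above
        # report as most affected.
        print(pool["name"], pool["stats"]["bytes_used"])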
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.780 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.781 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.785 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:26:18 compute-0 nova_compute[257476]: 2025-11-24 20:26:18.786 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.018 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.020 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4947MB free_disk=59.954288482666016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.020 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.021 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.293 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.293 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.294 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.294 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance db8c22d1-e16d-49f8-b4a5-ba8e87849ea3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.294 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.294 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:26:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:19.366+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:19 compute-0 nova_compute[257476]: 2025-11-24 20:26:19.561 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:26:19 compute-0 ceph-mon[75677]: pgmap v1295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:19 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:19 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/21238504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:26:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:19 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:19.642+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:26:20 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934428452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:26:20 compute-0 nova_compute[257476]: 2025-11-24 20:26:20.335 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:26:20 compute-0 nova_compute[257476]: 2025-11-24 20:26:20.343 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:26:20 compute-0 nova_compute[257476]: 2025-11-24 20:26:20.363 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
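The inventory reported above translates into schedulable capacity in Placement as (total - reserved) * allocation_ratio per resource class, which is why the 4 allocated vCPUs noted earlier leave this host far from saturated. A worked sketch using the exact figures from the log line above:

    # Placement capacity formula: (total - reserved) * allocation_ratio
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2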
Nov 24 20:26:20 compute-0 nova_compute[257476]: 2025-11-24 20:26:20.366 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:26:20 compute-0 nova_compute[257476]: 2025-11-24 20:26:20.367 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:26:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:20.394+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:20.627+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:20 compute-0 ceph-mon[75677]: pgmap v1296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:20 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1934428452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.333 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.334 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.334 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.335 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.356 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.356 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.356 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.357 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.357 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.357 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:21 compute-0 nova_compute[257476]: 2025-11-24 20:26:21.358 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:21.380+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:21.653+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:21 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:21 compute-0 podman[277706]: 2025-11-24 20:26:21.85832381 +0000 UTC m=+0.088791545 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd)
Nov 24 20:26:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2102 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:22.365+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:22.700+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:22 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:22 compute-0 ceph-mon[75677]: pgmap v1297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:22 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2102 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:22 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:23.332+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:23.663+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:24.357+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:26:24
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Nov 24 20:26:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
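The balancer pass above evaluated upmap optimizations across the listed pools and prepared 0 of a possible 10 changes, i.e. the PG distribution is already balanced. The same state can be queried from the mgr; a minimal sketch, assuming the usual -f json formatting flag is honored by the balancer module:

    import json
    import subprocess

    # "ceph balancer status" reports the active flag, mode and queued plans.
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "-f", "json"], text=True
    ))
    print("active:", status["active"], "mode:", status["mode"])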
Nov 24 20:26:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:24 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:24.654+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:25 compute-0 nova_compute[257476]: 2025-11-24 20:26:25.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:26:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:25.349+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:25 compute-0 ceph-mon[75677]: pgmap v1298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:25 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:25 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:25.703+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:25 compute-0 podman[277726]: 2025-11-24 20:26:25.88853507 +0000 UTC m=+0.116751417 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 20:26:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:26.381+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:26 compute-0 ceph-mon[75677]: pgmap v1299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:26.657+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:27.360+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2107 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:27 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:27 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:27.671+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:28.385+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:28 compute-0 ceph-mon[75677]: pgmap v1300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:28 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2107 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:28.658+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:29.347+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:29.633+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:29 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:29 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:30.321+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:30.600+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:30 compute-0 ceph-mon[75677]: pgmap v1301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:30 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:31.311+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:31.649+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:31 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:32.286+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:32.679+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:32 compute-0 ceph-mon[75677]: pgmap v1302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:32 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:33.316+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:33.684+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:33 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:34.348+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:34.719+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:34 compute-0 ceph-mon[75677]: pgmap v1303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:34 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005666567973888129 of space, bias 1.0, pg target 0.16999703921664386 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:26:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:26:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:35.346+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:35.754+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:35 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:36.314+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:36.792+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:36 compute-0 ceph-mon[75677]: pgmap v1304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:36 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2112 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:37.323+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:37.818+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:37 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2112 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:37 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:38.323+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:38.852+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:38 compute-0 ceph-mon[75677]: pgmap v1305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:38 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:39.344+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:39.872+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:39 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:40.296+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:26:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:26:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:26:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:26:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:26:40 compute-0 podman[277752]: 2025-11-24 20:26:40.860166678 +0000 UTC m=+0.081819745 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 20:26:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:40.893+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:40 compute-0 ceph-mon[75677]: pgmap v1306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:40 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:41.293+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:26:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7331 writes, 36K keys, 7331 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 7331 writes, 7331 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1879 writes, 9540 keys, 1879 commit groups, 1.0 writes per commit group, ingest: 10.42 MB, 0.02 MB/s
                                           Interval WAL: 1879 writes, 1879 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     68.9      0.51              0.16        20    0.026       0      0       0.0       0.0
                                             L6      1/0    7.84 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   4.1    123.5    104.0      1.41              0.62        19    0.074    135K    10K       0.0       0.0
                                            Sum      1/0    7.84 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.1     90.5     94.6      1.92              0.78        39    0.049    135K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.7    129.1    132.4      0.47              0.27        12    0.039     54K   4126       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    123.5    104.0      1.41              0.62        19    0.074    135K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     69.1      0.51              0.16        19    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.035, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 1.9 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 304.00 MB usage: 17.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000237 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1183,16.60 MB,5.4599%) FilterBlock(40,392.36 KB,0.126041%) IndexBlock(40,568.25 KB,0.182543%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 20:26:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:41.895+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:41 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:42.314+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:42.917+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:43 compute-0 ceph-mon[75677]: pgmap v1307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:43 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:43 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:43.299+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:43.872+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:44 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:44.292+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:44.840+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:45 compute-0 ceph-mon[75677]: pgmap v1308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:45 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:45.336+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:45.865+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:46 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:46.347+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:46.872+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:47 compute-0 ceph-mon[75677]: pgmap v1309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:47 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:47.333+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:47.895+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2127 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:48 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:48.295+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:48 compute-0 sshd-session[277773]: Invalid user dspace from 182.93.7.194 port 54244
Nov 24 20:26:48 compute-0 sshd-session[277773]: Received disconnect from 182.93.7.194 port 54244:11: Bye Bye [preauth]
Nov 24 20:26:48 compute-0 sshd-session[277773]: Disconnected from invalid user dspace 182.93.7.194 port 54244 [preauth]
Nov 24 20:26:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:48.898+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:49 compute-0 ceph-mon[75677]: pgmap v1310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:49 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2127 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:49 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:49.338+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:49 compute-0 nova_compute[257476]: 2025-11-24 20:26:49.427 257491 DEBUG oslo_concurrency.lockutils [None req-99c4dee0-260d-4932-9c2f-75ff84a507ba fdcce01fe61847e0972b7d8925fc4984 c56e6d5c1eae48bfa49e12800a76eaa4 - - default default] Acquiring lock "43bc955c-77ee-42d8-98e2-84163217d1aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:26:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:49.889+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:50 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:50.367+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:50.873+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:51 compute-0 ceph-mon[75677]: pgmap v1311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:51 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:51.341+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:51.923+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:52 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:52.388+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:52 compute-0 podman[277775]: 2025-11-24 20:26:52.86284065 +0000 UTC m=+0.084474197 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
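The podman health_status event above is the journal side of the container healthcheck declared in config_data ('healthcheck': {'test': '/openstack/healthcheck'}): podman runs the test, records the result, and logs health_status=healthy along with the failing streak. The same state can be read back out-of-band; a sketch assuming podman's inspect JSON layout (State.Health.Status) and using the container name from the log line:

    import json
    import subprocess

    # "podman inspect" prints a JSON array; State.Health is present only
    # for containers that define a healthcheck, hence the fallback.
    raw = subprocess.check_output(["podman", "inspect", "multipathd"])
    state = json.loads(raw)[0]["State"]
    print(state.get("Health", {}).get("Status", "no healthcheck"))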
Nov 24 20:26:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:52.949+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:53 compute-0 ceph-mon[75677]: pgmap v1312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:53.418+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:53.988+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:54.384+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:54 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:26:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:26:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:26:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:26:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:26:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:26:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:55.009+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:55.359+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:55 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:55 compute-0 ceph-mon[75677]: pgmap v1313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:55 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:55 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:56.048+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:56.328+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:56 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:56 compute-0 podman[277796]: 2025-11-24 20:26:56.906859198 +0000 UTC m=+0.136157667 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:26:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:57.040+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:26:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:57.328+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:57 compute-0 ceph-mon[75677]: pgmap v1314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:57 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:26:57 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
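The mon's SLOW_OPS summary ages in lockstep with wall time across these updates (2132 s at 20:26:57, then 2137 s and 2142 s at the five-second health ticks that follow), which puts the start of the oldest blocked op at roughly 19:51:25. A one-line check of that subtraction:

    from datetime import datetime, timedelta

    # 20:26:57 minus the reported 2132 s of blockage:
    print(datetime(2025, 11, 24, 20, 26, 57) - timedelta(seconds=2132))
    # 2025-11-24 19:51:25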
Nov 24 20:26:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:58.061+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:58.367+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:58 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:26:59.044+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:26:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:26:59.392+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:26:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:26:59 compute-0 ceph-mon[75677]: pgmap v1315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:26:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:26:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:00.090+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:00.423+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:00 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:01.112+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:01.422+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:01 compute-0 ceph-mon[75677]: pgmap v1316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:01 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:02.066+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:02.378+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:02 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:02 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:02 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:03.080+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:03.369+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:03 compute-0 ceph-mon[75677]: pgmap v1317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:03 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:04.059+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:04.393+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:05.037+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:05.406+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:05 compute-0 ceph-mon[75677]: pgmap v1318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:05 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:06.075+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:06.419+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:06 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:07.059+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:07.458+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:07 compute-0 ceph-mon[75677]: pgmap v1319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:07 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:07 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:08.091+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:08.478+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:08 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:08 compute-0 ceph-mon[75677]: pgmap v1320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:09.115+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:27:09.383 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:27:09.383 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:27:09.384 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
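The Acquiring/acquired/released triplet above is the standard DEBUG signature of oslo.concurrency's lock wrapper (the `inner` function in lockutils.py that the log cites). An equivalent minimal use of that API, with the lock name taken from the log; illustrative only:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Critical section; entry and exit produce the same
        # Acquiring/acquired/released DEBUG lines when debug logging is on.
        pass

    check_child_processes()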
Nov 24 20:27:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:09.502+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:09 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:10.090+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:10.502+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:10 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:10 compute-0 ceph-mon[75677]: pgmap v1321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:11.083+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:11.548+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:11 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:11 compute-0 podman[277822]: 2025-11-24 20:27:11.836669004 +0000 UTC m=+0.070774752 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:27:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:12.081+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2152 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:12.594+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:12 compute-0 ceph-mon[75677]: pgmap v1322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:12 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:12 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2152 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:13.105+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:13 compute-0 nova_compute[257476]: 2025-11-24 20:27:13.166 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:13 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:13.642+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:14.064+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:14 compute-0 nova_compute[257476]: 2025-11-24 20:27:14.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:14 compute-0 ceph-mon[75677]: pgmap v1323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:14 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:14.644+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:15.110+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:15 compute-0 nova_compute[257476]: 2025-11-24 20:27:15.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:15 compute-0 nova_compute[257476]: 2025-11-24 20:27:15.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:27:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:15 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:15.656+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:15 compute-0 sudo[277842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:15 compute-0 sudo[277842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:15 compute-0 sudo[277842]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:15 compute-0 sudo[277867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:27:15 compute-0 sudo[277867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:15 compute-0 sudo[277867]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:15 compute-0 sudo[277892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:15 compute-0 sudo[277892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:15 compute-0 sudo[277892]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:15 compute-0 sudo[277917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:27:15 compute-0 sudo[277917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:16.071+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:16 compute-0 sudo[277917]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:27:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c88d67eb-831d-4b0f-a002-5f04be04f353 does not exist
Nov 24 20:27:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6323dd80-b056-4b48-9683-faa98e73e6a8 does not exist
Nov 24 20:27:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 484e86a6-c806-4769-94fe-0e31af3cc10b does not exist
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2008034121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:27:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2008034121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:27:16 compute-0 sudo[277972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:16 compute-0 sudo[277972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:16 compute-0 sudo[277972]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:16 compute-0 sudo[277997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:27:16 compute-0 sudo[277997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:16 compute-0 sudo[277997]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:16 compute-0 sudo[278022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:16 compute-0 sudo[278022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:16 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:27:16 compute-0 sudo[278022]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:16.651+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:16 compute-0 ceph-mon[75677]: pgmap v1324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:16 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2008034121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:27:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2008034121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:27:16 compute-0 sudo[278048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:27:16 compute-0 sudo[278048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.062745966 +0000 UTC m=+0.049073181 container create 0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:27:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:17.084+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:17 compute-0 systemd[1]: Started libpod-conmon-0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c.scope.
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.041064094 +0000 UTC m=+0.027391339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.155748664 +0000 UTC m=+0.142075919 container init 0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gould, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.164303328 +0000 UTC m=+0.150630533 container start 0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gould, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.169618043 +0000 UTC m=+0.155945258 container attach 0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gould, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:27:17 compute-0 boring_gould[278129]: 167 167
Nov 24 20:27:17 compute-0 systemd[1]: libpod-0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c.scope: Deactivated successfully.
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.173605711 +0000 UTC m=+0.159932936 container died 0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2530a01a0bdd7e3fc34c3b7cf7f4a45fdbafbec93738dc3bba24e6240a05e2a5-merged.mount: Deactivated successfully.
Nov 24 20:27:17 compute-0 podman[278114]: 2025-11-24 20:27:17.226254269 +0000 UTC m=+0.212581474 container remove 0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:27:17 compute-0 systemd[1]: libpod-conmon-0549d72223cadb02e477842e7b2bad6dda43c012a8e5df6b010147e46c3a578c.scope: Deactivated successfully.
Nov 24 20:27:17 compute-0 podman[278152]: 2025-11-24 20:27:17.414753913 +0000 UTC m=+0.048298569 container create e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bohr, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:27:17 compute-0 systemd[1]: Started libpod-conmon-e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5.scope.
Nov 24 20:27:17 compute-0 podman[278152]: 2025-11-24 20:27:17.395454666 +0000 UTC m=+0.028999322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcf578eb38c10d06d6ad6b4a1df4476ca1ce05afdeb8b6c2d01bd7c9947a972/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcf578eb38c10d06d6ad6b4a1df4476ca1ce05afdeb8b6c2d01bd7c9947a972/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcf578eb38c10d06d6ad6b4a1df4476ca1ce05afdeb8b6c2d01bd7c9947a972/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcf578eb38c10d06d6ad6b4a1df4476ca1ce05afdeb8b6c2d01bd7c9947a972/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcf578eb38c10d06d6ad6b4a1df4476ca1ce05afdeb8b6c2d01bd7c9947a972/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:17 compute-0 podman[278152]: 2025-11-24 20:27:17.531028007 +0000 UTC m=+0.164572653 container init e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bohr, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:27:17 compute-0 podman[278152]: 2025-11-24 20:27:17.543502747 +0000 UTC m=+0.177047423 container start e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:27:17 compute-0 podman[278152]: 2025-11-24 20:27:17.552533564 +0000 UTC m=+0.186078220 container attach e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:27:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:17.611+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:17 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:18.049+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:18 compute-0 nova_compute[257476]: 2025-11-24 20:27:18.147 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:18 compute-0 nova_compute[257476]: 2025-11-24 20:27:18.171 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:18.649+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:18 compute-0 boring_bohr[278169]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:27:18 compute-0 boring_bohr[278169]: --> relative data size: 1.0
Nov 24 20:27:18 compute-0 boring_bohr[278169]: --> All data devices are unavailable
Nov 24 20:27:18 compute-0 systemd[1]: libpod-e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5.scope: Deactivated successfully.
Nov 24 20:27:18 compute-0 podman[278152]: 2025-11-24 20:27:18.706947563 +0000 UTC m=+1.340492239 container died e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bohr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:27:18 compute-0 systemd[1]: libpod-e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5.scope: Consumed 1.102s CPU time.
Nov 24 20:27:18 compute-0 ceph-mon[75677]: pgmap v1325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:18 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:18 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-edcf578eb38c10d06d6ad6b4a1df4476ca1ce05afdeb8b6c2d01bd7c9947a972-merged.mount: Deactivated successfully.
Nov 24 20:27:19 compute-0 podman[278152]: 2025-11-24 20:27:19.00470935 +0000 UTC m=+1.638254016 container remove e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 20:27:19 compute-0 systemd[1]: libpod-conmon-e7bb54da8ec3697865240ddd3643d7f260e2f2dac6a4c2272ee63e10237740a5.scope: Deactivated successfully.
Nov 24 20:27:19 compute-0 sudo[278048]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:19.049+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:19 compute-0 sudo[278211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:19 compute-0 sudo[278211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:19 compute-0 sudo[278211]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.175 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.175 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.176 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.176 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.176 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.177 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.200 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.200 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.200 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.201 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.201 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:19 compute-0 sudo[278236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:27:19 compute-0 sudo[278236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:19 compute-0 sudo[278236]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:19 compute-0 sudo[278262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:19 compute-0 sudo[278262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:19 compute-0 sudo[278262]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:19 compute-0 sudo[278287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:27:19 compute-0 sudo[278287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:19.608+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:27:19 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938656201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.678 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.744 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.745 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.748 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.748 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.764412375 +0000 UTC m=+0.061240402 container create 2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:27:19 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:19 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/938656201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:19 compute-0 systemd[1]: Started libpod-conmon-2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0.scope.
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.725173264 +0000 UTC m=+0.022001321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.891 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.892 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4922MB free_disk=59.954288482666016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.893 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.893 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.912911618 +0000 UTC m=+0.209739645 container init 2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.919981002 +0000 UTC m=+0.216809009 container start 2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:27:19 compute-0 cool_williamson[278389]: 167 167
Nov 24 20:27:19 compute-0 systemd[1]: libpod-2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0.scope: Deactivated successfully.
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.925259525 +0000 UTC m=+0.222087552 container attach 2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.925543664 +0000 UTC m=+0.222371661 container died 2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-74cbbbae28f75ff2e8989ba8abbedf83e5861a12592a07899250b2eca037b361-merged.mount: Deactivated successfully.
Nov 24 20:27:19 compute-0 podman[278373]: 2025-11-24 20:27:19.960886788 +0000 UTC m=+0.257714795 container remove 2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:27:19 compute-0 systemd[1]: libpod-conmon-2a214db3bc9a685c9bb01b81b7a81ee6e8a54379ade4cf957a86854e0ba9e0c0.scope: Deactivated successfully.
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.986 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.986 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.987 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.987 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance db8c22d1-e16d-49f8-b4a5-ba8e87849ea3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.987 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:27:19 compute-0 nova_compute[257476]: 2025-11-24 20:27:19.987 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.008 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing inventories for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.024 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Updating ProviderTree inventory for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.024 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Updating inventory in ProviderTree for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.039 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing aggregate associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.057 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Refreshing trait associations for resource provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66, traits: HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
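[annotation] The inventory payloads logged above determine schedulable capacity as (total - reserved) * allocation_ratio, the standard placement capacity rule. A minimal Python sketch reproducing that arithmetic from the values in the log:

    # Schedulable capacity per resource class, per the placement rule
    # capacity = (total - reserved) * allocation_ratio, using the
    # inventory logged for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2, which is why the final
    # resource view above shows used_vcpus=4 with plenty of headroom.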
Nov 24 20:27:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:20.086+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.144 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:20 compute-0 podman[278413]: 2025-11-24 20:27:20.204359953 +0000 UTC m=+0.073365833 container create 618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:27:20 compute-0 systemd[1]: Started libpod-conmon-618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0.scope.
Nov 24 20:27:20 compute-0 podman[278413]: 2025-11-24 20:27:20.174413465 +0000 UTC m=+0.043419385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:27:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce6af79fa76a872113affa100a08c11e76435acd3f1abb9f357ad638ea600ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce6af79fa76a872113affa100a08c11e76435acd3f1abb9f357ad638ea600ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce6af79fa76a872113affa100a08c11e76435acd3f1abb9f357ad638ea600ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce6af79fa76a872113affa100a08c11e76435acd3f1abb9f357ad638ea600ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:20 compute-0 podman[278413]: 2025-11-24 20:27:20.348006964 +0000 UTC m=+0.217012864 container init 618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:27:20 compute-0 podman[278413]: 2025-11-24 20:27:20.362917991 +0000 UTC m=+0.231923871 container start 618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:27:20 compute-0 podman[278413]: 2025-11-24 20:27:20.372694078 +0000 UTC m=+0.241699998 container attach 618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:27:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:27:20 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174305145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.592 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
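[annotation] The two processutils lines above show nova probing Ceph capacity by shelling out to ceph df. A standalone sketch of the same probe, reusing the client id and conf path visible in the log; the stats field names are an assumption based on current Ceph JSON output:

    import json
    import subprocess

    # Reproduce the "ceph df" probe nova runs above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    df = json.loads(out.stdout)
    # Cluster-wide totals, as nova's RBD driver would read them.
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])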
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.601 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.614 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:27:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:20.614+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.645 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:27:20 compute-0 nova_compute[257476]: 2025-11-24 20:27:20.646 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:20 compute-0 ceph-mon[75677]: pgmap v1326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:20 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
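[annotation] The pgmap and slow-request lines above are the mon relaying cluster-log warnings; the same condition can be read from the health API instead of scraping the log. A hedged sketch, assuming the SLOW_OPS check key used by current Ceph releases and the same client credentials as the nova probe:

    import json
    import subprocess

    # Query the health checks behind the "slow requests" log spam above.
    out = subprocess.run(
        ["ceph", "health", "detail", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])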
Nov 24 20:27:20 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1174305145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:21.045+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]: {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:     "0": [
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:         {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "devices": [
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "/dev/loop3"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             ],
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_name": "ceph_lv0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_size": "21470642176",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "name": "ceph_lv0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "tags": {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cluster_name": "ceph",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.crush_device_class": "",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.encrypted": "0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osd_id": "0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.type": "block",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.vdo": "0"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             },
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "type": "block",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "vg_name": "ceph_vg0"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:         }
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:     ],
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:     "1": [
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:         {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "devices": [
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "/dev/loop4"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             ],
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_name": "ceph_lv1",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_size": "21470642176",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "name": "ceph_lv1",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "tags": {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cluster_name": "ceph",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.crush_device_class": "",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.encrypted": "0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osd_id": "1",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.type": "block",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.vdo": "0"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             },
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "type": "block",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "vg_name": "ceph_vg1"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:         }
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:     ],
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:     "2": [
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:         {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "devices": [
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "/dev/loop5"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             ],
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_name": "ceph_lv2",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_size": "21470642176",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "name": "ceph_lv2",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "tags": {
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.cluster_name": "ceph",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.crush_device_class": "",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.encrypted": "0",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osd_id": "2",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.type": "block",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:                 "ceph.vdo": "0"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             },
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "type": "block",
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:             "vg_name": "ceph_vg2"
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:         }
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]:     ]
Nov 24 20:27:21 compute-0 zealous_aryabhata[278431]: }
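[annotation] The JSON block just printed by the zealous_aryabhata container is ceph-volume lvm list --format json output, keyed by OSD id. A minimal sketch reducing it to an OSD-to-device map; the input file name is a hypothetical capture of the payload above:

    import json

    # Map each OSD id to its logical volume, backing device and fsid,
    # from the "ceph-volume lvm list --format json" payload above.
    with open("lvm_list.json") as f:  # hypothetical capture of the output
        osds = json.load(f)
    for osd_id, entries in sorted(osds.items()):
        for lv in entries:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=ca6a1aee-...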
Nov 24 20:27:21 compute-0 systemd[1]: libpod-618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0.scope: Deactivated successfully.
Nov 24 20:27:21 compute-0 conmon[278431]: conmon 618165f76be90bd76274 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0.scope/container/memory.events
Nov 24 20:27:21 compute-0 podman[278413]: 2025-11-24 20:27:21.22312293 +0000 UTC m=+1.092128810 container died 618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ce6af79fa76a872113affa100a08c11e76435acd3f1abb9f357ad638ea600ae-merged.mount: Deactivated successfully.
Nov 24 20:27:21 compute-0 podman[278413]: 2025-11-24 20:27:21.338078357 +0000 UTC m=+1.207084267 container remove 618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:27:21 compute-0 systemd[1]: libpod-conmon-618165f76be90bd762743254f99d0bd0755d976d04686eba4c4d9e24202110b0.scope: Deactivated successfully.
Nov 24 20:27:21 compute-0 sudo[278287]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:21 compute-0 sudo[278475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:21 compute-0 sudo[278475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:21 compute-0 sudo[278475]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:21.571+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:21 compute-0 sudo[278500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:27:21 compute-0 sudo[278500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:21 compute-0 sudo[278500]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:21 compute-0 nova_compute[257476]: 2025-11-24 20:27:21.619 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:21 compute-0 nova_compute[257476]: 2025-11-24 20:27:21.620 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:21 compute-0 nova_compute[257476]: 2025-11-24 20:27:21.620 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:27:21 compute-0 sudo[278525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:21 compute-0 sudo[278525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:21 compute-0 sudo[278525]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:21 compute-0 sudo[278550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:27:21 compute-0 sudo[278550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:21 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:22.072+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
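[annotation] For readability, the mon's _set_new_cache_sizes byte counts above in binary units: cache_size 1020054731 B is about 973 MiB, inc_alloc/full_alloc 348127232 B is exactly 332 MiB, kv_alloc 318767104 B is exactly 304 MiB:

    # Convert the _set_new_cache_sizes byte counts logged above to MiB.
    for name, b in [("cache_size", 1020054731),
                    ("inc/full_alloc", 348127232),
                    ("kv_alloc", 318767104)]:
        print(f"{name}: {b / 2**20:.0f} MiB")  # 973, 332, 304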
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.167722941 +0000 UTC m=+0.046808828 container create fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:27:22 compute-0 systemd[1]: Started libpod-conmon-fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2.scope.
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.146004818 +0000 UTC m=+0.025090735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:27:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.262368255 +0000 UTC m=+0.141454152 container init fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_booth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.269181921 +0000 UTC m=+0.148267808 container start fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:27:22 compute-0 compassionate_booth[278631]: 167 167
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.274479515 +0000 UTC m=+0.153565442 container attach fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:27:22 compute-0 systemd[1]: libpod-fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2.scope: Deactivated successfully.
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.275309038 +0000 UTC m=+0.154394925 container died fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_booth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d5ee1ea5db3b2ec45bde50c8f78940b520653792d6999a4c7617d948a09298b-merged.mount: Deactivated successfully.
Nov 24 20:27:22 compute-0 podman[278615]: 2025-11-24 20:27:22.328288314 +0000 UTC m=+0.207374211 container remove fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 20:27:22 compute-0 systemd[1]: libpod-conmon-fef3cfc2b810c0d7b0f047a3b7bb01abc33f3a3bed13955e70f1684a6fe5d5e2.scope: Deactivated successfully.
Nov 24 20:27:22 compute-0 podman[278655]: 2025-11-24 20:27:22.516394428 +0000 UTC m=+0.042891562 container create 3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:27:22 compute-0 systemd[1]: Started libpod-conmon-3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6.scope.
Nov 24 20:27:22 compute-0 podman[278655]: 2025-11-24 20:27:22.497987676 +0000 UTC m=+0.024484830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:27:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287f90e0f1d9859829ee07edea1a0da10203cb3627153f10d1cdce850e8e982f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287f90e0f1d9859829ee07edea1a0da10203cb3627153f10d1cdce850e8e982f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287f90e0f1d9859829ee07edea1a0da10203cb3627153f10d1cdce850e8e982f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287f90e0f1d9859829ee07edea1a0da10203cb3627153f10d1cdce850e8e982f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:27:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:22.612+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:22 compute-0 podman[278655]: 2025-11-24 20:27:22.619733588 +0000 UTC m=+0.146230812 container init 3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:27:22 compute-0 podman[278655]: 2025-11-24 20:27:22.636203948 +0000 UTC m=+0.162701112 container start 3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:27:22 compute-0 podman[278655]: 2025-11-24 20:27:22.639837497 +0000 UTC m=+0.166334731 container attach 3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:27:22 compute-0 ceph-mon[75677]: pgmap v1327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:22 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:23.060+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:23.637+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]: {
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "osd_id": 2,
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "type": "bluestore"
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:     },
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "osd_id": 1,
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "type": "bluestore"
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:     },
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "osd_id": 0,
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:         "type": "bluestore"
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]:     }
Nov 24 20:27:23 compute-0 trusting_vaughan[278672]: }
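[annotation] The trusting_vaughan payload above is ceph-volume raw list --format json, keyed by OSD fsid rather than id. A sketch joining it against the earlier lvm listing on ceph.osd_fsid to confirm both views describe the same three BlueStore OSDs; both file names are hypothetical captures of the logged payloads:

    import json

    # Join "raw list" (keyed by osd_uuid) with "lvm list" (keyed by
    # osd id) on the OSD fsid.
    raw = json.load(open("raw_list.json"))  # hypothetical captures of
    lvm = json.load(open("lvm_list.json"))  # the two payloads above
    fsid_to_lv = {e["tags"]["ceph.osd_fsid"]: e["lv_path"]
                  for entries in lvm.values() for e in entries}
    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']} ({info['type']}): {info['device']} "
              f"<- {fsid_to_lv.get(osd_uuid, '?')}")
    # osd.0 (bluestore): /dev/mapper/ceph_vg0-ceph_lv0 <- /dev/ceph_vg0/ceph_lv0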
Nov 24 20:27:23 compute-0 systemd[1]: libpod-3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6.scope: Deactivated successfully.
Nov 24 20:27:23 compute-0 podman[278655]: 2025-11-24 20:27:23.724172894 +0000 UTC m=+1.250670028 container died 3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:27:23 compute-0 systemd[1]: libpod-3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6.scope: Consumed 1.093s CPU time.
Nov 24 20:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-287f90e0f1d9859829ee07edea1a0da10203cb3627153f10d1cdce850e8e982f-merged.mount: Deactivated successfully.
Nov 24 20:27:23 compute-0 podman[278655]: 2025-11-24 20:27:23.798129692 +0000 UTC m=+1.324626826 container remove 3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:27:23 compute-0 systemd[1]: libpod-conmon-3175a2161a8297f95bcec0692aeefcfbac57609554d871fb4dce65d4847a58e6.scope: Deactivated successfully.
Nov 24 20:27:23 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:23 compute-0 sudo[278550]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:27:23 compute-0 podman[278706]: 2025-11-24 20:27:23.848883377 +0000 UTC m=+0.085543385 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:27:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:27:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:27:23 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:27:23 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 94e3184e-0fdf-4996-94ee-615e202f5296 does not exist
Nov 24 20:27:23 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fe8a7afa-d708-4c7d-87c3-d8438af6f9de does not exist
Nov 24 20:27:23 compute-0 sudo[278737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:27:23 compute-0 sudo[278737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:23 compute-0 sudo[278737]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:24 compute-0 sudo[278763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:27:24 compute-0 sudo[278763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:27:24 compute-0 sudo[278763]: pam_unix(sudo:session): session closed for user root
Nov 24 20:27:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:24.108+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:27:24
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control']
Nov 24 20:27:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
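The five balancer lines above record one optimization round: upmap mode, a 0.05 cap on misplaced data, the candidate pools, and an empty plan. A simplified sketch of the decision being logged, with illustrative names rather than the mgr module's actual code:

    # Skip optimization while the misplaced ratio exceeds max_misplaced
    # (0.050000 in the log), otherwise propose up to max_optimizations
    # upmap changes per plan.
    def plan_upmap(misplaced_ratio, max_misplaced=0.05, max_optimizations=10):
        if misplaced_ratio > max_misplaced:
            return []  # too much data already in flight; do nothing this round
        changes = []   # per-pool upmap proposals would be computed here
        return changes[:max_optimizations]

    # "prepared 0/10 changes": the PGs are already balanced, so the plan is
    # empty even though up to 10 changes were allowed.
    print(len(plan_upmap(0.0)), "/ 10 changes")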
Nov 24 20:27:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:24.625+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:24 compute-0 ceph-mon[75677]: pgmap v1328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:24 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:27:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:27:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:25.156+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:25.664+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:25 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:26.136+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:26.700+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:26 compute-0 ceph-mon[75677]: pgmap v1329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:26 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2162 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
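A back-of-envelope check on the SLOW_OPS line above: 2162 seconds blocked at 20:27:27 places the oldest op's arrival around 19:51:25 UTC, i.e. these ops have been stuck for about 36 minutes:

    from datetime import datetime, timedelta

    logged = datetime(2025, 11, 24, 20, 27, 27)
    print(logged - timedelta(seconds=2162))  # -> 2025-11-24 19:51:25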
Nov 24 20:27:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:27.177+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:27.733+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:27 compute-0 podman[278788]: 2025-11-24 20:27:27.908002227 +0000 UTC m=+0.126384860 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:27:27 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:27 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2162 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:28.166+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:28.691+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:29 compute-0 ceph-mon[75677]: pgmap v1330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:29 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:29.145+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:29.661+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:30 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:30.103+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:30.625+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:31 compute-0 ceph-mon[75677]: pgmap v1331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:31 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:31.151+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:31.594+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:32 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2172 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
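The recurring _set_new_cache_sizes line above, converted to human units: the mon is steering roughly a 1 GB cache budget, with 332 MiB incremental/full allocations and a 304 MiB RocksDB (kv) share:

    for name, b in [("cache_size", 1020054731),
                    ("inc_alloc", 348127232),
                    ("full_alloc", 348127232),
                    ("kv_alloc", 318767104)]:
        print(f"{name}: {b / 2**20:.1f} MiB")
    # cache_size: 972.8 MiB, inc/full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB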
Nov 24 20:27:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:32.160+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:32.604+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:33 compute-0 ceph-mon[75677]: pgmap v1332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:33 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:33 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2172 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:33.177+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:33.590+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:34 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:34.220+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:34.556+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005666567973888129 of space, bias 1.0, pg target 0.16999703921664386 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:27:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
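The pg_autoscaler lines above follow a simple formula: pg target = space usage × bias × total target PGs. The factor of 300 is an inference from the logged numbers; it matches mon_target_pg_per_osd=100 across 3 OSDs. Targets are then quantized to a power of two, and the autoscaler leaves pg_num alone when the target is far below the current value, hence "quantized to 32 (current 32)":

    # Reproduce the logged pg targets from the usage ratios and biases above.
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0005666567973888129, 1.0),
        'images':             (0.0006661126644201341, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    TARGET_PGS = 100 * 3  # mon_target_pg_per_osd * assumed OSD count (inferred)
    for name, (usage, bias) in pools.items():
        print(f"{name}: pg target {usage * bias * TARGET_PGS:.10g}")
    # vms -> 0.1699970392, images -> 0.1998337993, matching the log.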
Nov 24 20:27:35 compute-0 ceph-mon[75677]: pgmap v1333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:35 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:35.225+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:35.578+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:36 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:36.271+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:36.591+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:37 compute-0 ceph-mon[75677]: pgmap v1334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:37 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:37.286+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:37.641+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:38 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:38 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:38.242+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:38.631+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:38 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:27:38.970 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:27:38 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:27:38.971 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:27:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:39 compute-0 ceph-mon[75677]: pgmap v1335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:39 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:39 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:39.204+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:39.581+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:40 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:40.234+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:27:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:27:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:27:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:27:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:27:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:40.575+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:41 compute-0 ceph-mon[75677]: pgmap v1336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:41 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:41.283+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:41.564+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:42.297+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:42 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:42.585+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:42 compute-0 podman[278818]: 2025-11-24 20:27:42.85274978 +0000 UTC m=+0.080541680 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 24 20:27:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:43.248+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:43 compute-0 ceph-mon[75677]: pgmap v1337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:43 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:43 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:43.561+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:44.262+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:44 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:44.553+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:44 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:27:44.974 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
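This transaction lands about 6 seconds after the SB_Global nb_cfg update at 20:27:38, matching the agent's "Delaying updating chassis table for 6 seconds" line: the agent copies nb_cfg into its own Chassis_Private row so northd can see it has caught up. A minimal sketch of the same DbSetCommand, assuming `api` is an ovsdbapp Idl-backed API connected to the OVN southbound DB (connection setup omitted):

    def bump_sb_cfg(api, chassis_private_uuid, nb_cfg):
        # Record that this agent has processed SB_Global.nb_cfg == nb_cfg,
        # mirroring the col_values shown in the logged DbSetCommand.
        api.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
        ).execute(check_error=True)

    # e.g. bump_sb_cfg(api, '2981bd26-4511-4552-b2b8-c2a668887f38', 5)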
Nov 24 20:27:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:45.225+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:45 compute-0 ceph-mon[75677]: pgmap v1338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:45 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:45.564+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:46.274+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:46 compute-0 ceph-mon[75677]: pgmap v1339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:46 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:46.554+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
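The monitor is now aggregating the per-OSD reports into a cluster-level SLOW_OPS health check (28 ops, oldest blocked 2182 s, on osd.0 and osd.1). A minimal sketch of pulling the same information programmatically, assuming the client.openstack keyring used elsewhere in this log and current Ceph health-JSON field names:

```python
# Sketch: list active health checks such as SLOW_OPS by parsing
# `ceph health detail --format=json`.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "health", "detail", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
health = json.loads(out)

for name, check in health.get("checks", {}).items():
    summary = check.get("summary", {}).get("message", "")
    print(f'{name} [{check.get("severity")}]: {summary}')
    # SLOW_OPS detail lines name the affected daemons, e.g. osd.0, osd.1
    for detail in check.get("detail", []):
        print("  ", detail.get("message"))
```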
Nov 24 20:27:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:47.261+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:47.505+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:47 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:47 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.857 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.858 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.874 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.940 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.941 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
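The paired "Acquiring lock ... / Lock ... acquired :: waited" lines above are oslo.concurrency's named-lock pattern; the `inner` frames at lockutils.py:404/409 are the decorator wrapper emitting those debug messages. A minimal sketch of the same pattern (illustration only, not nova's actual code):

```python
# Sketch: a named lock via oslo.concurrency, as logged above. Entering and
# leaving the wrapped function produces the "Acquiring lock" / "Lock ...
# acquired" / "released" debug lines.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def instance_claim():
    # only one thread updates the resource tracker at a time
    print("claiming resources")

instance_claim()
```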
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.950 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:27:47 compute-0 nova_compute[257476]: 2025-11-24 20:27:47.951 257491 INFO nova.compute.claims [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.140 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:48.302+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:48.475+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:48 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:48 compute-0 ceph-mon[75677]: pgmap v1340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:27:48 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273558055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.600 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
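To refresh pool capacity for the RBD image backend, nova shells out to `ceph df --format=json` as shown above. A hedged sketch of issuing the same call and reading the result; the JSON field names follow current Ceph output and may differ across releases:

```python
# Sketch: reproduce the `ceph df --format=json` call logged above and pull
# out cluster and per-pool usage.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(out)

total = df["stats"]["total_bytes"]
avail = df["stats"]["total_avail_bytes"]
print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

for pool in df["pools"]:
    used = pool["stats"]["bytes_used"]
    print(f'pool {pool["name"]}: {used / 2**20:.1f} MiB used')
```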
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.607 257491 DEBUG nova.compute.provider_tree [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.620 257491 DEBUG nova.scheduler.client.report [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.643 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.644 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.687 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.688 257491 DEBUG nova.network.neutron [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.709 257491 INFO nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.726 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.811 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.813 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.813 257491 INFO nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Creating image(s)
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.844 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.867 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.884 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.887 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:48 compute-0 sshd-session[278816]: Received disconnect from 14.63.196.175 port 59286:11: Bye Bye [preauth]
Nov 24 20:27:48 compute-0 sshd-session[278816]: Disconnected from authenticating user root 14.63.196.175 port 59286 [preauth]
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.966 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
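Note the wrapper in the command above: `qemu-img info` is run under `oslo_concurrency.prlimit` so that a malformed image cannot exhaust memory (RLIMIT_AS capped at 1 GiB) or spin the CPU (30 s cap) inside the compute service. A sketch reproducing the exact invocation from the log and parsing its JSON output:

```python
# Sketch: the resource-capped `qemu-img info` call logged above.
# The base-image path is copied from the log line.
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909"

out = subprocess.check_output(
    ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
     "--as=1073741824", "--cpu=30", "--",
     "env", "LC_ALL=C", "LANG=C",
     "qemu-img", "info", BASE, "--force-share", "--output=json"])
info = json.loads(out)
print(info["format"], info["virtual-size"])  # e.g. qcow2, bytes
```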
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.968 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.969 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.969 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:48 compute-0 nova_compute[257476]: 2025-11-24 20:27:48.996 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.003 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.036 257491 DEBUG nova.network.neutron [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.037 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.320 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:27:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:49.337+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.405 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] resizing rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
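After the `rbd import` above, the image is resized to the flavor's 1 GiB root disk (1073741824 bytes). A minimal sketch of the same resize using the python-rbd bindings instead of the CLI, assuming /etc/ceph/ceph.conf and the client.openstack keyring seen throughout this log:

```python
# Sketch: resize the freshly imported image to 1 GiB, matching the log line.
import rados
import rbd

NAME = "aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")  # the pool used by nova above
    try:
        with rbd.Image(ioctx, NAME) as image:
            image.resize(1073741824)  # 1 GiB
            print("new size:", image.size())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```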
Nov 24 20:27:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:49.465+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.517 257491 DEBUG nova.objects.instance [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lazy-loading 'migration_context' on Instance uuid aea00e91-e556-48c7-bb32-ad48fdb1b4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.534 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.535 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Ensure instance console log exists: /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.536 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.536 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.537 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.540 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '7b556eea-44a0-401c-a3e5-213a835e1fc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.547 257491 WARNING nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.553 257491 DEBUG nova.virt.libvirt.host [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.554 257491 DEBUG nova.virt.libvirt.host [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.556 257491 DEBUG nova.virt.libvirt.host [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.557 257491 DEBUG nova.virt.libvirt.host [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.557 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.557 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T20:21:07Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='67120476-40a0-42ea-948d-218bf9a62474',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T20:21:08Z,direct_url=<?>,disk_format='qcow2',id=7b556eea-44a0-401c-a3e5-213a835e1fc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4b895bcffb3c4d43b10b1af37264e971',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T20:21:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.558 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.558 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.559 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.559 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.559 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.560 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.560 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.560 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.561 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.561 257491 DEBUG nova.virt.hardware [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
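The topology lines above show nova.virt.hardware enumerating (sockets, cores, threads) combinations whose product covers the vCPU count, bounded by the 65536 limits; for 1 vCPU with no flavor or image constraints the only candidate is 1:1:1. A deliberately simplified illustration of that search (nova's real code applies more rules, e.g. preferences and NUMA constraints):

```python
# Sketch: enumerate CPU topologies whose product equals the vCPU count,
# within per-dimension limits, as summarized in the log lines above.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]
```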
Nov 24 20:27:49 compute-0 nova_compute[257476]: 2025-11-24 20:27:49.564 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:49 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:49 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/273558055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:27:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973541028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.025 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.049 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.055 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:50.342+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:50.436+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 20:27:50 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920871879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.496 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
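The `ceph mon dump --format=json` calls above are how the driver learns the monitor addresses that end up in the `<host name=... port=.../>` elements of the guest XML further down. A hedged sketch of parsing that output; JSON field names (here "mons" and "addr") vary slightly across Ceph releases:

```python
# Sketch: extract monitor host/port pairs from `ceph mon dump` JSON.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
monmap = json.loads(out)

for mon in monmap["mons"]:
    # "addr" looks like "192.168.122.100:6789/0"
    host, rest = mon["addr"].split(":", 1)
    port = rest.split("/", 1)[0]
    print(mon["name"], host, port)
```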
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.499 257491 DEBUG nova.objects.instance [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid aea00e91-e556-48c7-bb32-ad48fdb1b4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.514 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] End _get_guest_xml xml=<domain type="kvm">
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <uuid>aea00e91-e556-48c7-bb32-ad48fdb1b4a7</uuid>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <name>instance-00000008</name>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <memory>131072</memory>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <vcpu>1</vcpu>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <metadata>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:name>tempest-ServerExternalEventsTest-server-375522184</nova:name>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:creationTime>2025-11-24 20:27:49</nova:creationTime>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:flavor name="m1.nano">
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:memory>128</nova:memory>
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:disk>1</nova:disk>
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:swap>0</nova:swap>
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:vcpus>1</nova:vcpus>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       </nova:flavor>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:owner>
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:user uuid="72b885b7a83b4edebaad4164c1a561b4">tempest-ServerExternalEventsTest-1845364251-project-member</nova:user>
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <nova:project uuid="09d2713e9235451f85cb9e45799887c2">tempest-ServerExternalEventsTest-1845364251</nova:project>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       </nova:owner>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:root type="image" uuid="7b556eea-44a0-401c-a3e5-213a835e1fc5"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <nova:ports/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </nova:instance>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </metadata>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <sysinfo type="smbios">
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <system>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <entry name="manufacturer">RDO</entry>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <entry name="product">OpenStack Compute</entry>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <entry name="serial">aea00e91-e556-48c7-bb32-ad48fdb1b4a7</entry>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <entry name="uuid">aea00e91-e556-48c7-bb32-ad48fdb1b4a7</entry>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <entry name="family">Virtual Machine</entry>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </system>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </sysinfo>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <os>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <boot dev="hd"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <smbios mode="sysinfo"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </os>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <features>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <acpi/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <apic/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <vmcoreinfo/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </features>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <clock offset="utc">
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <timer name="hpet" present="no"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </clock>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <cpu mode="host-model" match="exact">
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </cpu>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   <devices>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <disk type="network" device="disk">
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk">
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       </source>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <target dev="vda" bus="virtio"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <disk type="network" device="cdrom">
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <driver type="raw" cache="none"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <source protocol="rbd" name="vms/aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk.config">
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <host name="192.168.122.100" port="6789"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       </source>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <auth username="openstack">
Nov 24 20:27:50 compute-0 nova_compute[257476]:         <secret type="ceph" uuid="05e060a3-406b-57f0-89d2-ec35f5b09305"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       </auth>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <target dev="sda" bus="sata"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </disk>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <serial type="pty">
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <log file="/var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/console.log" append="off"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </serial>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <video>
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <model type="virtio"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </video>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <input type="tablet" bus="usb"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <rng model="virtio">
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <backend model="random">/dev/urandom</backend>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </rng>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <controller type="usb" index="0"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     <memballoon model="virtio">
Nov 24 20:27:50 compute-0 nova_compute[257476]:       <stats period="10"/>
Nov 24 20:27:50 compute-0 nova_compute[257476]:     </memballoon>
Nov 24 20:27:50 compute-0 nova_compute[257476]:   </devices>
Nov 24 20:27:50 compute-0 nova_compute[257476]: </domain>
Nov 24 20:27:50 compute-0 nova_compute[257476]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
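Once _get_guest_xml has rendered the domain definition above, the driver hands it to libvirt, which is what produces the systemd-machined "New machine qemu-6-instance-00000008" lines shortly after. A minimal sketch of that hand-off with the libvirt-python bindings (the XML file path is hypothetical; nova additionally wires up event handling, rollback, and device waiting around this):

```python
# Sketch: define and boot a domain from rendered XML like the block above.
import libvirt

with open("instance-00000008.xml") as f:  # hypothetical local copy of the XML
    xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(xml)  # persist the definition
    dom.create()               # boot it; systemd-machined registers the machine
    print(dom.name(), "running:", dom.isActive() == 1)
finally:
    conn.close()
```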
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.563 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.563 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.563 257491 INFO nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Using config drive
Nov 24 20:27:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:50 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:50 compute-0 ceph-mon[75677]: pgmap v1341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 281 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:27:50 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/973541028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:27:50 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2920871879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.587 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.795 257491 INFO nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Creating config drive at /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/disk.config
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.804 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_lpre0h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.954 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_lpre0h" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.987 257491 DEBUG nova.storage.rbd_utils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] rbd image aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:27:50 compute-0 nova_compute[257476]: 2025-11-24 20:27:50.991 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/disk.config aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.143 257491 DEBUG oslo_concurrency.processutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/disk.config aea00e91-e556-48c7-bb32-ad48fdb1b4a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.144 257491 INFO nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Deleting local config drive /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7/disk.config because it was imported into RBD.
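The three steps above form the config-drive round trip: build a `config-2` ISO locally with mkisofs, import it into the vms pool as `<uuid>_disk.config`, then delete the local copy. A sketch reproducing those shell steps; the staging directory and its contents are illustrative (nova populates it with the instance metadata before this point):

```python
# Sketch: the config-drive build/import/cleanup sequence logged above.
import os
import subprocess

uuid = "aea00e91-e556-48c7-bb32-ad48fdb1b4a7"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"
staging = "/tmp/configdrive"  # hypothetical; holds openstack/latest/... metadata

subprocess.check_call(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", staging])
subprocess.check_call(
    ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
os.remove(iso)  # local copy is redundant once the image lives in RBD
```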
Nov 24 20:27:51 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 20:27:51 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 20:27:51 compute-0 systemd-machined[218733]: New machine qemu-6-instance-00000008.
Nov 24 20:27:51 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000008.
Nov 24 20:27:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:51.365+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:51.419+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 165 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 24 20:27:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:51 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
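[Annotation] The osd.0/osd.1 "slow ops" warnings above repeat roughly once per second for the rest of this capture, and the mon folds them into the SLOW_OPS health check seen at 20:27:52 ("oldest one blocked for 2192 sec"). A sketch of how one might triage this from compute-0, assuming the ceph CLI and the OSDs' admin sockets are reachable (with cephadm-managed containers you may need to enter `cephadm shell` first, and the dump_ops_in_flight fields can vary by release):

    import json
    import subprocess

    # Cluster-wide summary: which daemons the SLOW_OPS check names.
    subprocess.run(["ceph", "health", "detail"], check=True)

    # Per-daemon view: ops each flagged OSD is still holding.  osd.0 and
    # osd.1 are the daemons named in the log's health check.
    for osd in ("osd.0", "osd.1"):
        out = subprocess.run(
            ["ceph", "daemon", osd, "dump_ops_in_flight"],
            check=True, capture_output=True, text=True,
        ).stdout
        print(osd, "->", json.loads(out).get("num_ops"), "ops in flight")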
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.783 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764016071.7829967, aea00e91-e556-48c7-bb32-ad48fdb1b4a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.784 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] VM Resumed (Lifecycle Event)
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.787 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.787 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.791 257491 INFO nova.virt.libvirt.driver [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance spawned successfully.
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.791 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.809 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.814 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.823 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.824 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.825 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.826 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.826 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.827 257491 DEBUG nova.virt.libvirt.driver [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.835 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.836 257491 DEBUG nova.virt.driver [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] Emitting event <LifecycleEvent: 1764016071.7856991, aea00e91-e556-48c7-bb32-ad48fdb1b4a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.836 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] VM Started (Lifecycle Event)
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.864 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.868 257491 DEBUG nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.889 257491 INFO nova.compute.manager [None req-e5abdbf6-501f-4ef1-b20a-99e07496a1ab - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] During sync_power_state the instance has a pending task (spawning). Skip.
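[Annotation] The two "Synchronizing instance power state" / "pending task ... Skip" pairs above show the guardrail nova applies when a lifecycle event races with an in-progress operation: while task_state is set (here "spawning"), the DB power_state (0) is deliberately left out of sync with the hypervisor's (1). A minimal sketch of that decision, a hypothetical condensation rather than nova's actual code:

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # A task (e.g. "spawning") owns the instance; defer the sync.
            return f"skip (pending task: {task_state})"
        if db_power_state != vm_power_state:
            return f"update DB power_state to {vm_power_state}"
        return "already in sync"

    print(sync_power_state(0, 1, "spawning"))  # -> skip, as in the log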
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.898 257491 INFO nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Took 3.09 seconds to spawn the instance on the hypervisor.
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.899 257491 DEBUG nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.958 257491 INFO nova.compute.manager [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Took 4.05 seconds to build instance.
Nov 24 20:27:51 compute-0 nova_compute[257476]: 2025-11-24 20:27:51.976 257491 DEBUG oslo_concurrency.lockutils [None req-36414a11-3f36-43fd-a107-590f7d19c983 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:52.366+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:52.371+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:52 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:52 compute-0 ceph-mon[75677]: pgmap v1342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 165 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 24 20:27:52 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:52 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.063 257491 DEBUG nova.compute.manager [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.064 257491 DEBUG nova.compute.manager [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.064 257491 DEBUG oslo_concurrency.lockutils [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] Acquiring lock "refresh_cache-aea00e91-e556-48c7-bb32-ad48fdb1b4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.064 257491 DEBUG oslo_concurrency.lockutils [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] Acquired lock "refresh_cache-aea00e91-e556-48c7-bb32-ad48fdb1b4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.064 257491 DEBUG nova.network.neutron [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.233 257491 DEBUG nova.network.neutron [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.305 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.306 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.307 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.308 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.308 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.310 257491 INFO nova.compute.manager [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Terminating instance
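[Annotation] Terminate takes the same per-instance lock that the build path held (released a little over a second earlier, at 20:27:51.976), plus a short-lived "<uuid>-events" lock to clear queued external events before teardown. A sketch of the nesting using the same oslo.concurrency primitive these log lines reference (lockutils.lock; the bodies are placeholders):

    from oslo_concurrency import lockutils

    uuid = "aea00e91-e556-48c7-bb32-ad48fdb1b4a7"

    with lockutils.lock(uuid):                 # do_terminate_instance scope
        with lockutils.lock(f"{uuid}-events"):
            pass                               # _clear_events placeholder
        # ... shutdown, network deallocation, and allocation cleanup run
        # here, serialized against any other operation on this instance.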
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.312 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "refresh_cache-aea00e91-e556-48c7-bb32-ad48fdb1b4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 20:27:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:53.390+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:53.396+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.446 257491 DEBUG nova.network.neutron [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.461 257491 DEBUG oslo_concurrency.lockutils [None req-ee11c8da-9b60-47e1-8005-596311cc9f83 5ae98a58e36047bcb771c600b0dae600 aafdc9f9671c4992ab376cdf2dfa82c4 - - default default] Releasing lock "refresh_cache-aea00e91-e556-48c7-bb32-ad48fdb1b4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.462 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquired lock "refresh_cache-aea00e91-e556-48c7-bb32-ad48fdb1b4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.463 257491 DEBUG nova.network.neutron [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 20:27:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 165 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 24 20:27:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:53 compute-0 nova_compute[257476]: 2025-11-24 20:27:53.631 257491 DEBUG nova.network.neutron [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:27:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:27:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:27:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:27:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:27:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:27:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:27:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:54.428+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:54.439+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:54 compute-0 nova_compute[257476]: 2025-11-24 20:27:54.497 257491 DEBUG nova.network.neutron [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:27:54 compute-0 nova_compute[257476]: 2025-11-24 20:27:54.527 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Releasing lock "refresh_cache-aea00e91-e556-48c7-bb32-ad48fdb1b4a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 20:27:54 compute-0 nova_compute[257476]: 2025-11-24 20:27:54.528 257491 DEBUG nova.compute.manager [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 20:27:54 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 24 20:27:54 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Consumed 3.409s CPU time.
Nov 24 20:27:54 compute-0 systemd-machined[218733]: Machine qemu-6-instance-00000008 terminated.
Nov 24 20:27:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:54 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:54 compute-0 ceph-mon[75677]: pgmap v1343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 165 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 24 20:27:54 compute-0 podman[279225]: 2025-11-24 20:27:54.734307552 +0000 UTC m=+0.090490872 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 20:27:54 compute-0 nova_compute[257476]: 2025-11-24 20:27:54.757 257491 INFO nova.virt.libvirt.driver [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance destroyed successfully.
Nov 24 20:27:54 compute-0 nova_compute[257476]: 2025-11-24 20:27:54.757 257491 DEBUG nova.objects.instance [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lazy-loading 'resources' on Instance uuid aea00e91-e556-48c7-bb32-ad48fdb1b4a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.155 257491 INFO nova.virt.libvirt.driver [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Deleting instance files /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7_del
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.157 257491 INFO nova.virt.libvirt.driver [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Deletion of /var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7_del complete
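[Annotation] The "_del" suffix in the two lines above reflects a rename-then-remove pattern: the instance directory is moved aside before being deleted, so an interrupted cleanup never leaves a partially-removed tree under the live path. A sketch of that pattern as inferred from the log, not nova's exact code:

    import os
    import shutil

    inst_dir = "/var/lib/nova/instances/aea00e91-e556-48c7-bb32-ad48fdb1b4a7"
    doomed = inst_dir + "_del"

    os.rename(inst_dir, doomed)  # atomic move on the same filesystem
    shutil.rmtree(doomed)        # "Deletion of ..._del complete"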
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.240 257491 INFO nova.compute.manager [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.241 257491 DEBUG oslo.service.loopingcall [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.242 257491 DEBUG nova.compute.manager [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.242 257491 DEBUG nova.network.neutron [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 20:27:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:55.422+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:55.438+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.484 257491 DEBUG nova.network.neutron [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.495 257491 DEBUG nova.network.neutron [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.518 257491 INFO nova.compute.manager [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Took 0.28 seconds to deallocate network for instance.
Nov 24 20:27:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.580 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.581 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:27:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:55 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:55 compute-0 nova_compute[257476]: 2025-11-24 20:27:55.686 257491 DEBUG oslo_concurrency.processutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:27:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:27:56 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321540361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:56 compute-0 nova_compute[257476]: 2025-11-24 20:27:56.148 257491 DEBUG oslo_concurrency.processutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:27:56 compute-0 nova_compute[257476]: 2025-11-24 20:27:56.157 257491 DEBUG nova.compute.provider_tree [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:27:56 compute-0 nova_compute[257476]: 2025-11-24 20:27:56.180 257491 DEBUG nova.scheduler.client.report [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
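[Annotation] The inventory dump above is what placement prices this host at; the DISK_GB figure is fed by the `ceph df` call a few lines earlier. Usable capacity per resource class follows placement's rule usable = (total - reserved) * allocation_ratio, worked out here from the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {usable}")
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2 (modulo float rounding)

So the host advertises 32 vCPUs, 7167 MB of RAM, and about 52 GB of disk to the scheduler, which is why deleting the instance's allocations (next lines) matters for future placement decisions.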
Nov 24 20:27:56 compute-0 nova_compute[257476]: 2025-11-24 20:27:56.216 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:56 compute-0 nova_compute[257476]: 2025-11-24 20:27:56.253 257491 INFO nova.scheduler.client.report [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Deleted allocations for instance aea00e91-e556-48c7-bb32-ad48fdb1b4a7
Nov 24 20:27:56 compute-0 nova_compute[257476]: 2025-11-24 20:27:56.330 257491 DEBUG oslo_concurrency.lockutils [None req-c849ad91-7a9e-4368-ad96-e312606bd969 72b885b7a83b4edebaad4164c1a561b4 09d2713e9235451f85cb9e45799887c2 - - default default] Lock "aea00e91-e556-48c7-bb32-ad48fdb1b4a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:27:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:56.469+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:56.471+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:56 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:56 compute-0 ceph-mon[75677]: pgmap v1344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 173 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 20:27:56 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/321540361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:27:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:27:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:57.444+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:57.511+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 133 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 24 20:27:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2197 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:57 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:58.462+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:58.533+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:58 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:58 compute-0 ceph-mon[75677]: pgmap v1345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 133 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 24 20:27:58 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2197 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:27:58 compute-0 podman[279288]: 2025-11-24 20:27:58.887660312 +0000 UTC m=+0.110231361 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 20:27:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:27:59.414+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:27:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:27:59.486+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:27:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:27:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 24 20:27:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:27:59 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:00.385+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:00.533+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:00 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:00 compute-0 ceph-mon[75677]: pgmap v1346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 24 20:28:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:01.372+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 24 20:28:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:01.550+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:01 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:02.359+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:02.561+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:02 compute-0 ceph-mon[75677]: pgmap v1347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 24 20:28:02 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:03.335+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 102 op/s
Nov 24 20:28:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:03.610+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:03 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:04.308+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:04.572+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:04 compute-0 ceph-mon[75677]: pgmap v1348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 102 op/s
Nov 24 20:28:04 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:05.285+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 102 op/s
Nov 24 20:28:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:05.548+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:05 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:06.318+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:06.557+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:07 compute-0 ceph-mon[75677]: pgmap v1349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 102 op/s
Nov 24 20:28:07 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2202 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
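
The health check rolls both OSDs' counts together: 28 slow ops (19 on osd.1 against default.rgw.log, 9 on osd.0 against vms), the oldest blocked for 2202 seconds, i.e. over 36 minutes. The op descriptions in the per-OSD lines point at a long-lived client watch (RGW's data_log generations metadata) and an omap read of the rbd trash-purge schedule. A sketch for inspecting one daemon's queue over its admin socket, which must run on the host where the OSD lives (compute-0 here):

    # Hedged sketch: dump the in-flight ops behind the SLOW_OPS warning.
    # `ceph daemon osd.N dump_ops_in_flight` is a standard admin-socket call.
    import json
    import subprocess

    for osd in (0, 1):          # the daemons named in the health check
        out = subprocess.run(
            ["ceph", "daemon", f"osd.{osd}", "dump_ops_in_flight"],
            check=True, capture_output=True, text=True).stdout
        ops = json.loads(out)
        print(f"osd.{osd}:", ops.get("num_ops", 0), "ops in flight")
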
Nov 24 20:28:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:07.354+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 83 op/s
Nov 24 20:28:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:07.557+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:08 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:08 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2202 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:08.323+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:08.545+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:09 compute-0 ceph-mon[75677]: pgmap v1350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 83 op/s
Nov 24 20:28:09 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:09.361+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:28:09.384 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:28:09.384 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:28:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:28:09.385 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:28:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Nov 24 20:28:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:09.587+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:09 compute-0 nova_compute[257476]: 2025-11-24 20:28:09.754 257491 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764016074.753698, aea00e91-e556-48c7-bb32-ad48fdb1b4a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 20:28:09 compute-0 nova_compute[257476]: 2025-11-24 20:28:09.755 257491 INFO nova.compute.manager [-] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] VM Stopped (Lifecycle Event)
Nov 24 20:28:09 compute-0 nova_compute[257476]: 2025-11-24 20:28:09.780 257491 DEBUG nova.compute.manager [None req-d0676c30-3054-4277-8dd2-9e6eeaeb1a9a - - - - - -] [instance: aea00e91-e556-48c7-bb32-ad48fdb1b4a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 20:28:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:10 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:10.329+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:10.538+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:10 compute-0 nova_compute[257476]: 2025-11-24 20:28:10.956 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Acquiring lock "664ca0de-5d04-41c8-ada6-32391f471c42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:10 compute-0 nova_compute[257476]: 2025-11-24 20:28:10.956 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Lock "664ca0de-5d04-41c8-ada6-32391f471c42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
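
These paired Acquiring/acquired lines are oslo.concurrency's lock tracing: nova wraps the whole build in a per-instance lock so two requests for the same UUID cannot race, and the wrapper logs how long it waited (0.001s here) and, on release, how long it held. A minimal sketch of the same pattern, not nova's actual code:

    # Hedged sketch of the lockutils pattern producing these log lines.
    # lockutils.synchronized serializes callers on a named lock and emits
    # the Acquiring/acquired/released messages with waited/held timings.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("664ca0de-5d04-41c8-ada6-32391f471c42")
    def _locked_do_build_and_run_instance():
        pass   # build steps run while the per-instance lock is held

    _locked_do_build_and_run_instance()
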
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.014 257491 DEBUG nova.compute.manager [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:28:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:11 compute-0 ceph-mon[75677]: pgmap v1351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Nov 24 20:28:11 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.211 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.211 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.222 257491 DEBUG nova.virt.hardware [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.223 257491 INFO nova.compute.claims [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:28:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:11.290+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.417 257491 DEBUG oslo_concurrency.processutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:28:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:11.538+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:28:11 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910383618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.907 257491 DEBUG oslo_concurrency.processutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
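
The claim path sizes the Ceph backend by shelling out to the exact command in the log, wrapped in oslo's processutils, which emits the Running cmd / returned pair with the elapsed 0.489s. A sketch of the same call, assuming oslo.concurrency is importable and the client.openstack keyring works as it evidently does here:

    # Hedged sketch: the processutils call pattern behind the two lines above.
    # execute() returns (stdout, stderr) and raises on a nonzero exit code.
    import json
    from oslo_concurrency import processutils

    stdout, _stderr = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    df = json.loads(stdout)
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
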
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.913 257491 DEBUG nova.compute.provider_tree [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.926 257491 DEBUG nova.scheduler.client.report [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
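
This inventory is what placement schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 32 vCPUs (8 x 4.0), 7167 MB of RAM ((7679 - 512) x 1.0) and 52.2 GB of disk ((59 - 1) x 0.9):

    # The effective capacity implied by the inventory dict in the log line.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
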
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.945 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:28:11 compute-0 nova_compute[257476]: 2025-11-24 20:28:11.946 257491 DEBUG nova.compute.manager [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.002 257491 DEBUG nova.compute.manager [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.025 257491 INFO nova.virt.libvirt.driver [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.042 257491 DEBUG nova.compute.manager [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:28:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 28 slow ops, oldest one blocked for 2207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.136 257491 DEBUG nova.compute.manager [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.139 257491 DEBUG nova.virt.libvirt.driver [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.139 257491 INFO nova.virt.libvirt.driver [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Creating image(s)
Nov 24 20:28:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:12 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:12 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3910383618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.167 257491 DEBUG nova.storage.rbd_utils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] rbd image 664ca0de-5d04-41c8-ada6-32391f471c42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.189 257491 DEBUG nova.storage.rbd_utils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] rbd image 664ca0de-5d04-41c8-ada6-32391f471c42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.209 257491 DEBUG nova.storage.rbd_utils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] rbd image 664ca0de-5d04-41c8-ada6-32391f471c42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.212 257491 DEBUG oslo_concurrency.processutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:28:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:12.256+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.263 257491 DEBUG oslo_concurrency.processutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
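
The base-image probe runs qemu-img info under oslo's prlimit shim: --as=1073741824 caps the child's address space at 1 GiB and --cpu=30 caps its CPU seconds, so a corrupt or hostile image cannot balloon or hang the compute agent; --force-share avoids taking the image lock. A sketch of the same call through processutils, with the path copied from the log:

    # Hedged sketch: the prlimit-wrapped probe shown in the log lines above.
    # ProcessLimits(address_space=..., cpu_time=...) maps to --as / --cpu.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,   # --as=1073741824
        cpu_time=30)                # --cpu=30

    stdout, _ = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909",
        "--force-share", "--output=json",
        prlimit=limits)
    print(json.loads(stdout)["format"])
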
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.264 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Acquiring lock "218f8903fd6674ce56e8c19056c812cf16f46909" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.265 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.265 257491 DEBUG oslo_concurrency.lockutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Lock "218f8903fd6674ce56e8c19056c812cf16f46909" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.287 257491 DEBUG nova.storage.rbd_utils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] rbd image 664ca0de-5d04-41c8-ada6-32391f471c42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 20:28:12 compute-0 nova_compute[257476]: 2025-11-24 20:28:12.290 257491 DEBUG oslo_concurrency.processutils [None req-2c5045b3-3068-4ec0-bedd-64c119dd69d5 3b669e46d47243069c3c044263f15f43 3dd62163a589483c8cfa397cee6491e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/218f8903fd6674ce56e8c19056c812cf16f46909 664ca0de-5d04-41c8-ada6-32391f471c42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
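
After three "does not exist" probes, nova imports the cached base file into the vms pool as the instance's root disk; --image-format=2 selects the RBD format that supports layering and cloning. Once the import returns, the result can be checked with rbd info, reusing the pool and credentials from the logged command:

    # Hedged sketch: verify the freshly imported image from the line above.
    import subprocess

    subprocess.run(
        ["rbd", "info", "--pool", "vms",
         "664ca0de-5d04-41c8-ada6-32391f471c42_disk",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
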
Nov 24 20:28:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:12.557+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:13 compute-0 ceph-mon[75677]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 9 ])
Nov 24 20:28:13 compute-0 ceph-mon[75677]: pgmap v1352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:13 compute-0 ceph-mon[75677]: Health check update: 28 slow ops, oldest one blocked for 2207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:13 compute-0 nova_compute[257476]: 2025-11-24 20:28:13.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:13.268+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:13.516+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:13 compute-0 podman[279431]: 2025-11-24 20:28:13.889904749 +0000 UTC m=+0.108700910 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
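
This podman event is the periodic healthcheck timer firing for ovn_metadata_agent: it runs the configured test (/openstack/healthcheck, bind-mounted per the config_data in the event) and records health_status=healthy with a failing streak of 0. The same check can be triggered by hand; a sketch assuming the container name from the event:

    # Hedged sketch: run the container's configured health check on demand.
    # `podman healthcheck run <name>` exits 0 when the test passes.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
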
Nov 24 20:28:14 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:14.281+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:14.486+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:15 compute-0 nova_compute[257476]: 2025-11-24 20:28:15.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:15 compute-0 nova_compute[257476]: 2025-11-24 20:28:15.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:28:15 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:15 compute-0 ceph-mon[75677]: pgmap v1353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 278 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:15.274+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:15.450+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 142 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 501 KiB/s wr, 1 op/s
Nov 24 20:28:16 compute-0 nova_compute[257476]: 2025-11-24 20:28:16.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:16 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:16.270+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:16 compute-0 nova_compute[257476]: 2025-11-24 20:28:16.396 257491 DEBUG oslo_concurrency.lockutils [None req-3dcc7e7c-6d7d-451f-872a-74a0ac2a61bf 0143f386bac846eab04e312327825f6f 873c3bd5e5d04a9e9c5488d3c51dc34c - - default default] Acquiring lock "db8c22d1-e16d-49f8-b4a5-ba8e87849ea3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:28:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341007036' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:28:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:28:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341007036' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:28:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:16.460+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:17 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:17 compute-0 ceph-mon[75677]: pgmap v1354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 142 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 501 KiB/s wr, 1 op/s
Nov 24 20:28:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2341007036' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:28:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2341007036' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:28:17 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:17.311+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:17.499+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:18 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:18.346+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:18.455+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:19 compute-0 nova_compute[257476]: 2025-11-24 20:28:19.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:19 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:19 compute-0 ceph-mon[75677]: pgmap v1355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:19.357+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:19.451+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:20 compute-0 nova_compute[257476]: 2025-11-24 20:28:20.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:20.334+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:20 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:20.492+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.150 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.171 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.171 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.171 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.172 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.172 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.172 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
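
The block above is one pass of Nova's _heal_instance_info_cache periodic task: it rebuilds the candidate list, skips instances that are building or being deleted, and exits when nothing is left to refresh. These tasks come from the oslo.service periodic-task machinery; a minimal sketch of that pattern follows, with an illustrative manager and task name rather than Nova's real ones:

    # Sketch of the oslo.service periodic-task pattern behind the
    # nova_compute DEBUG lines above. DemoManager and the task body
    # are illustrative, not Nova's actual code.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)

        # run_immediately=True so a single run_periodic_tasks() call
        # fires the task; Nova's tasks run on their own spacing.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_info_cache(self, context):
            # Real code walks instances and skips ones that are
            # building or being deleted, as the log above shows.
            print("healing instance info cache")

    CONF([])  # parse an empty command line so option defaults resolve
    DemoManager(CONF).run_periodic_tasks(context=None)
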
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.172 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.172 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.190 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.190 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.190 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
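
The Acquiring/acquired/released triplet above is emitted by oslo.concurrency's lock helpers themselves, including how long the caller waited and how long the lock was held. A short sketch of both forms of that API; the lock name mirrors the log, but the function is illustrative:

    # Sketch of the oslo.concurrency lock API that produced the
    # "Acquiring lock" / "acquired" / "released" DEBUG lines above.
    from oslo_concurrency import lockutils

    # Context-manager form: an in-process lock named like the one in
    # the log. The wait/held timings in the log come from this machinery.
    with lockutils.lock("compute_resources"):
        pass  # critical section: resource-tracker style bookkeeping

    # Decorator form, as used by ResourceTracker methods (function
    # name here is illustrative):
    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass

    update_usage()
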
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.190 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.191 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:28:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:21 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:21 compute-0 ceph-mon[75677]: pgmap v1356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:21.383+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:21.513+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:28:21 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403573418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.681 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
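
Here the resource tracker shells out to `ceph df --format=json` to learn how much storage the RBD-backed hypervisor can still offer, and the mon audit lines above show the command being dispatched as client.openstack. A sketch reproducing that probe under the same conf and keyring; the "stats"/"total_avail_bytes" field names match current Ceph releases but should be verified on yours:

    # Sketch reproducing the resource tracker's storage probe above.
    import json
    import subprocess

    def ceph_free_bytes(conf="/etc/ceph/ceph.conf", client="openstack"):
        cmd = ["ceph", "df", "--format=json", "--id", client, "--conf", conf]
        out = subprocess.check_output(cmd)
        # Cluster-wide totals live under "stats"; per-pool data under "pools".
        return json.loads(out)["stats"]["total_avail_bytes"]

    if __name__ == "__main__":
        print(ceph_free_bytes())
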
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.782 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.783 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.787 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.787 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.993 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.995 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4893MB free_disk=59.936466217041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.995 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:28:21 compute-0 nova_compute[257476]: 2025-11-24 20:28:21.995 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.111 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.111 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.113 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.113 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance db8c22d1-e16d-49f8-b4a5-ba8e87849ea3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.113 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 664ca0de-5d04-41c8-ada6-32391f471c42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.114 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.114 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 20:28:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2217 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.246 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:28:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:22.384+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:22 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:22 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2403573418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:28:22 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2217 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:22.480+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:28:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1338514350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.742 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.749 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.772 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
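
The inventory dict above is what Placement schedules against, and capacity is not the raw totals: the usual formula is (total - reserved) * allocation_ratio per resource class. A worked check using the exact numbers from this line:

    # Worked example: schedulable capacity implied by the inventory
    # dict above, using Placement's (total - reserved) * ratio formula.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
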
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.803 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:28:22 compute-0 nova_compute[257476]: 2025-11-24 20:28:22.804 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:28:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:28:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 7070 writes, 29K keys, 7070 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7070 writes, 1454 syncs, 4.86 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1464 writes, 5271 keys, 1464 commit groups, 1.0 writes per commit group, ingest: 5.41 MB, 0.01 MB/s
                                           Interval WAL: 1464 writes, 593 syncs, 2.47 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
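
The DB Stats block above is internally consistent and easy to sanity-check: writes per sync is just cumulative writes over syncs, and the ingest rate is ingest over uptime. A two-line verification of the cumulative figures:

    # Check of the RocksDB DB Stats arithmetic in the dump above.
    writes, syncs = 7070, 1454
    ingest_gb, uptime_s = 0.02, 2400.1

    print(f"writes per sync: {writes / syncs:.2f}")            # ~4.86, as logged
    print(f"ingest: {ingest_gb * 1024 / uptime_s:.2f} MB/s")   # ~0.01 MB/s
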
Nov 24 20:28:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:23.398+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:23.489+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:23 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:23 compute-0 ceph-mon[75677]: pgmap v1357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:23 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:23 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1338514350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:28:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:24 compute-0 sudo[279496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:24 compute-0 sudo[279496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:24 compute-0 sudo[279496]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:24 compute-0 sudo[279521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:28:24 compute-0 sudo[279521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:24 compute-0 sudo[279521]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:24 compute-0 sudo[279546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:24 compute-0 sudo[279546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:24 compute-0 sudo[279546]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:24 compute-0 sudo[279571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:28:24 compute-0 sudo[279571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:28:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:24.438+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:28:24
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Nov 24 20:28:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
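
The balancer pass above ran in upmap mode, considered every pool, and prepared 0 of a possible 10 changes, meaning the PG distribution already satisfied it. A sketch for querying the same module state; `ceph balancer status` is a standard mgr command, but the JSON keys here are an assumption to verify on your release:

    # Sketch: query the mgr balancer module the INFO lines above came
    # from. JSON keys ("mode", "active") are assumed, not confirmed.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status.get("mode"), status.get("active"))  # expect: upmap True
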
Nov 24 20:28:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:24.474+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:24 compute-0 sudo[279571]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:24 compute-0 nova_compute[257476]: 2025-11-24 20:28:24.801 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:28:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:24 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:24 compute-0 ceph-mon[75677]: pgmap v1358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:28:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:28:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:28:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:28:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e47384a7-e072-4176-b3c9-e8c707bc5606 does not exist
Nov 24 20:28:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 043e5b45-16a1-4025-a0e1-7bd8ab040654 does not exist
Nov 24 20:28:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d7bd5e78-f983-4624-901f-bb7313d8c871 does not exist
Nov 24 20:28:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:28:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:28:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:28:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:28:25 compute-0 sudo[279627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:25 compute-0 sudo[279627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:25 compute-0 sudo[279627]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:25 compute-0 sudo[279659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:28:25 compute-0 sudo[279659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:25 compute-0 sudo[279659]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:25 compute-0 podman[279651]: 2025-11-24 20:28:25.367486407 +0000 UTC m=+0.099538389 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
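
The podman event above is a periodic container healthcheck: the multipathd container's configured test ('/openstack/healthcheck' in config_data) ran and reported healthy with a zero failing streak. The same check can be triggered on demand; a sketch, assuming the container name from this log:

    # Sketch: run the multipathd container's configured healthcheck on
    # demand; `podman healthcheck run NAME` exits 0 when healthy.
    import subprocess

    rc = subprocess.call(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
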
Nov 24 20:28:25 compute-0 sudo[279696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:25 compute-0 sudo[279696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:25 compute-0 sudo[279696]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:25.448+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:25.475+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:25 compute-0 sudo[279722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:28:25 compute-0 sudo[279722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:25 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:28:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:28:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:25.978630556 +0000 UTC m=+0.046213311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:26.089702178 +0000 UTC m=+0.157284863 container create 614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:28:26 compute-0 systemd[1]: Started libpod-conmon-614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b.scope.
Nov 24 20:28:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:26.259039666 +0000 UTC m=+0.326622381 container init 614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:26.274541443 +0000 UTC m=+0.342124128 container start 614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:28:26 compute-0 condescending_napier[279801]: 167 167
Nov 24 20:28:26 compute-0 systemd[1]: libpod-614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b.scope: Deactivated successfully.
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:26.353528394 +0000 UTC m=+0.421111079 container attach 614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:26.356028229 +0000 UTC m=+0.423610904 container died 614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:28:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:26.450+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:26.474+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-178b4bc76a9acf33b9d0a0564c38a26b6e0e78b35a3689a7b863a1b9576babf3-merged.mount: Deactivated successfully.
Nov 24 20:28:26 compute-0 podman[279785]: 2025-11-24 20:28:26.717650558 +0000 UTC m=+0.785233243 container remove 614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:28:26 compute-0 systemd[1]: libpod-conmon-614bf6dbafc9c4305d6a8fb44426daa8481ed4e2a5332fdd3f2430f1b56cb39b.scope: Deactivated successfully.
Nov 24 20:28:26 compute-0 podman[279828]: 2025-11-24 20:28:26.951350755 +0000 UTC m=+0.072331078 container create 7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:28:26 compute-0 systemd[1]: Started libpod-conmon-7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a.scope.
Nov 24 20:28:27 compute-0 podman[279828]: 2025-11-24 20:28:26.918099823 +0000 UTC m=+0.039080236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:28:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565670b1c3077f5aec402f3be16f77891d8eef8d4d106500e8d030da3ef06c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565670b1c3077f5aec402f3be16f77891d8eef8d4d106500e8d030da3ef06c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565670b1c3077f5aec402f3be16f77891d8eef8d4d106500e8d030da3ef06c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565670b1c3077f5aec402f3be16f77891d8eef8d4d106500e8d030da3ef06c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565670b1c3077f5aec402f3be16f77891d8eef8d4d106500e8d030da3ef06c59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:27 compute-0 podman[279828]: 2025-11-24 20:28:27.059035257 +0000 UTC m=+0.180015610 container init 7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:28:27 compute-0 podman[279828]: 2025-11-24 20:28:27.071128944 +0000 UTC m=+0.192109287 container start 7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:28:27 compute-0 podman[279828]: 2025-11-24 20:28:27.07554183 +0000 UTC m=+0.196522183 container attach 7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:28:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:27 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:27 compute-0 ceph-mon[75677]: pgmap v1359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Nov 24 20:28:27 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.182034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016107182136, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2335, "num_deletes": 251, "total_data_size": 2882285, "memory_usage": 2933600, "flush_reason": "Manual Compaction"}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016107202572, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2824896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36094, "largest_seqno": 38428, "table_properties": {"data_size": 2814808, "index_size": 5878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 27900, "raw_average_key_size": 22, "raw_value_size": 2791736, "raw_average_value_size": 2233, "num_data_blocks": 255, "num_entries": 1250, "num_filter_entries": 1250, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764015941, "oldest_key_time": 1764015941, "file_creation_time": 1764016107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 20609 microseconds, and 11668 cpu microseconds.
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.202663) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2824896 bytes OK
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.202689) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.204498) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.204523) EVENT_LOG_v1 {"time_micros": 1764016107204515, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.204547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2871713, prev total WAL file size 2871713, number of live WAL files 2.
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.206200) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2758KB)], [74(8032KB)]
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016107206253, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11050515, "oldest_snapshot_seqno": -1}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 9786 keys, 9547174 bytes, temperature: kUnknown
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016107285145, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9547174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9490262, "index_size": 31328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 262410, "raw_average_key_size": 26, "raw_value_size": 9319776, "raw_average_value_size": 952, "num_data_blocks": 1203, "num_entries": 9786, "num_filter_entries": 9786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.286154) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9547174 bytes
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.287652) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.8 rd, 119.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.8 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 10300, records dropped: 514 output_compression: NoCompression
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.287694) EVENT_LOG_v1 {"time_micros": 1764016107287675, "job": 42, "event": "compaction_finished", "compaction_time_micros": 79616, "compaction_time_cpu_micros": 48074, "output_level": 6, "num_output_files": 1, "total_output_size": 9547174, "num_input_records": 10300, "num_output_records": 9786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016107289161, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016107292301, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.205813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.292393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.292400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.292402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.292403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:27.292406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:27.483+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:27.505+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Nov 24 20:28:28 compute-0 objective_williamson[279845]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:28:28 compute-0 objective_williamson[279845]: --> relative data size: 1.0
Nov 24 20:28:28 compute-0 objective_williamson[279845]: --> All data devices are unavailable
Nov 24 20:28:28 compute-0 systemd[1]: libpod-7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a.scope: Deactivated successfully.
Nov 24 20:28:28 compute-0 systemd[1]: libpod-7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a.scope: Consumed 1.010s CPU time.
Nov 24 20:28:28 compute-0 podman[279828]: 2025-11-24 20:28:28.125486632 +0000 UTC m=+1.246466985 container died 7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-565670b1c3077f5aec402f3be16f77891d8eef8d4d106500e8d030da3ef06c59-merged.mount: Deactivated successfully.
Nov 24 20:28:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:28 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:28 compute-0 podman[279828]: 2025-11-24 20:28:28.189423467 +0000 UTC m=+1.310403780 container remove 7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:28:28 compute-0 systemd[1]: libpod-conmon-7f3a2dd6ae9f2d746cae17ff6fd37bc9598f815207da5598ce90a60de8a96a3a.scope: Deactivated successfully.
Nov 24 20:28:28 compute-0 sudo[279722]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:28 compute-0 sudo[279888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:28 compute-0 sudo[279888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:28 compute-0 sudo[279888]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:28 compute-0 sudo[279913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:28:28 compute-0 sudo[279913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:28 compute-0 sudo[279913]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:28 compute-0 sudo[279938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:28 compute-0 sudo[279938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:28 compute-0 sudo[279938]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:28.490+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:28.492+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:28 compute-0 sudo[279963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:28:28 compute-0 sudo[279963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:28:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 7874 writes, 32K keys, 7874 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7874 writes, 1700 syncs, 4.63 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1198 writes, 4718 keys, 1198 commit groups, 1.0 writes per commit group, ingest: 3.82 MB, 0.01 MB/s
                                           Interval WAL: 1198 writes, 487 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:28:28 compute-0 podman[280027]: 2025-11-24 20:28:28.907233373 +0000 UTC m=+0.058522345 container create 70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_matsumoto, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:28:28 compute-0 systemd[1]: Started libpod-conmon-70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec.scope.
Nov 24 20:28:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:28:28 compute-0 podman[280027]: 2025-11-24 20:28:28.887125516 +0000 UTC m=+0.038414548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:28:28 compute-0 podman[280027]: 2025-11-24 20:28:28.987896208 +0000 UTC m=+0.139185280 container init 70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_matsumoto, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:28:29 compute-0 podman[280027]: 2025-11-24 20:28:29.000684272 +0000 UTC m=+0.151973284 container start 70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:28:29 compute-0 podman[280027]: 2025-11-24 20:28:29.004457801 +0000 UTC m=+0.155746863 container attach 70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:28:29 compute-0 zen_matsumoto[280042]: 167 167
Nov 24 20:28:29 compute-0 systemd[1]: libpod-70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec.scope: Deactivated successfully.
Nov 24 20:28:29 compute-0 podman[280027]: 2025-11-24 20:28:29.009234046 +0000 UTC m=+0.160523068 container died 70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-74ae2f7de312ee2374923f73ca89eb0165bbc0a507d66655d6a96f047574bb6c-merged.mount: Deactivated successfully.
Nov 24 20:28:29 compute-0 podman[280027]: 2025-11-24 20:28:29.045856106 +0000 UTC m=+0.197145108 container remove 70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_matsumoto, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:28:29 compute-0 podman[280041]: 2025-11-24 20:28:29.053702792 +0000 UTC m=+0.099618173 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 20:28:29 compute-0 systemd[1]: libpod-conmon-70b28964e11d26c3454adff6536caf5f5602ecc67c53b55a4b8c8f995b811aec.scope: Deactivated successfully.
Nov 24 20:28:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:29 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:29 compute-0 ceph-mon[75677]: pgmap v1360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Nov 24 20:28:29 compute-0 podman[280090]: 2025-11-24 20:28:29.277660543 +0000 UTC m=+0.106939184 container create 3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:28:29 compute-0 podman[280090]: 2025-11-24 20:28:29.204159676 +0000 UTC m=+0.033438327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:28:29 compute-0 systemd[1]: Started libpod-conmon-3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1.scope.
Nov 24 20:28:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fa91430708cede40f0312b05abfbfa381f2b3441c255903bce8fcead4f5729/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fa91430708cede40f0312b05abfbfa381f2b3441c255903bce8fcead4f5729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fa91430708cede40f0312b05abfbfa381f2b3441c255903bce8fcead4f5729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49fa91430708cede40f0312b05abfbfa381f2b3441c255903bce8fcead4f5729/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:29 compute-0 podman[280090]: 2025-11-24 20:28:29.489703751 +0000 UTC m=+0.318982442 container init 3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:28:29 compute-0 podman[280090]: 2025-11-24 20:28:29.507786285 +0000 UTC m=+0.337064936 container start 3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_beaver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:28:29 compute-0 podman[280090]: 2025-11-24 20:28:29.512080558 +0000 UTC m=+0.341359219 container attach 3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:28:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:29.522+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:29.525+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:30 compute-0 infallible_beaver[280106]: {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:     "0": [
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:         {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "devices": [
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "/dev/loop3"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             ],
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_name": "ceph_lv0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_size": "21470642176",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "name": "ceph_lv0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "tags": {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cluster_name": "ceph",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.crush_device_class": "",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.encrypted": "0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osd_id": "0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.type": "block",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.vdo": "0"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             },
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "type": "block",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "vg_name": "ceph_vg0"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:         }
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:     ],
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:     "1": [
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:         {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "devices": [
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "/dev/loop4"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             ],
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_name": "ceph_lv1",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_size": "21470642176",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "name": "ceph_lv1",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "tags": {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cluster_name": "ceph",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.crush_device_class": "",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.encrypted": "0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osd_id": "1",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.type": "block",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.vdo": "0"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             },
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "type": "block",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "vg_name": "ceph_vg1"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:         }
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:     ],
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:     "2": [
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:         {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "devices": [
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "/dev/loop5"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             ],
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_name": "ceph_lv2",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_size": "21470642176",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "name": "ceph_lv2",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "tags": {
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.cluster_name": "ceph",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.crush_device_class": "",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.encrypted": "0",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osd_id": "2",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.type": "block",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:                 "ceph.vdo": "0"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             },
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "type": "block",
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:             "vg_name": "ceph_vg2"
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:         }
Nov 24 20:28:30 compute-0 infallible_beaver[280106]:     ]
Nov 24 20:28:30 compute-0 infallible_beaver[280106]: }
Nov 24 20:28:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:30 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:30 compute-0 systemd[1]: libpod-3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1.scope: Deactivated successfully.
Nov 24 20:28:30 compute-0 podman[280090]: 2025-11-24 20:28:30.322175772 +0000 UTC m=+1.151454423 container died 3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-49fa91430708cede40f0312b05abfbfa381f2b3441c255903bce8fcead4f5729-merged.mount: Deactivated successfully.
Nov 24 20:28:30 compute-0 podman[280090]: 2025-11-24 20:28:30.389812725 +0000 UTC m=+1.219091346 container remove 3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:28:30 compute-0 systemd[1]: libpod-conmon-3e0e8545d56335cd4979e68a6be70bf9fbfdbcd129b4e7d95ca52dad486fb4c1.scope: Deactivated successfully.
Nov 24 20:28:30 compute-0 sudo[279963]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:30 compute-0 sudo[280127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:30 compute-0 sudo[280127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:30 compute-0 sudo[280127]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:30.516+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:30 compute-0 sudo[280152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:28:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:30.561+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:30 compute-0 sudo[280152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:30 compute-0 sudo[280152]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:30 compute-0 sudo[280177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:30 compute-0 sudo[280177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:30 compute-0 sudo[280177]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:30 compute-0 sudo[280202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:28:30 compute-0 sudo[280202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.094461665 +0000 UTC m=+0.049502868 container create cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:28:31 compute-0 systemd[1]: Started libpod-conmon-cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd.scope.
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.073310891 +0000 UTC m=+0.028352114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.196243484 +0000 UTC m=+0.151284737 container init cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.208528906 +0000 UTC m=+0.163570129 container start cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.211638327 +0000 UTC m=+0.166679550 container attach cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:28:31 compute-0 sharp_shaw[280283]: 167 167
Nov 24 20:28:31 compute-0 systemd[1]: libpod-cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd.scope: Deactivated successfully.
Nov 24 20:28:31 compute-0 conmon[280283]: conmon cba553fc23aa87193401 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd.scope/container/memory.events
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.216529935 +0000 UTC m=+0.171571138 container died cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c96728de880269b91593599d31e092456db99f287fb09a1bae18ca63c50d7aa0-merged.mount: Deactivated successfully.
Nov 24 20:28:31 compute-0 podman[280267]: 2025-11-24 20:28:31.261010872 +0000 UTC m=+0.216052085 container remove cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:28:31 compute-0 systemd[1]: libpod-conmon-cba553fc23aa8719340107b1db47710f5581e7a3b70db838fa198eb9ed5408bd.scope: Deactivated successfully.
Nov 24 20:28:31 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:31 compute-0 ceph-mon[75677]: pgmap v1361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:31 compute-0 podman[280307]: 2025-11-24 20:28:31.42881956 +0000 UTC m=+0.046909461 container create 37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:28:31 compute-0 systemd[1]: Started libpod-conmon-37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795.scope.
Nov 24 20:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8446b56156fc53d80cecfa8f70f3e860b4efa6f4942cbd72a975dd68e644e2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8446b56156fc53d80cecfa8f70f3e860b4efa6f4942cbd72a975dd68e644e2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8446b56156fc53d80cecfa8f70f3e860b4efa6f4942cbd72a975dd68e644e2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8446b56156fc53d80cecfa8f70f3e860b4efa6f4942cbd72a975dd68e644e2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
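
The four xfs warnings above mean these bind-mounted paths sit on filesystems with 32-bit inode timestamps, presumably created without the xfs bigtime feature (my inference; the kernel only states the limit). The quoted 0x7fffffff is the classic Y2038 boundary, easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t, the limit the kernel quotes.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
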
Nov 24 20:28:31 compute-0 podman[280307]: 2025-11-24 20:28:31.414124955 +0000 UTC m=+0.032214876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:28:31 compute-0 podman[280307]: 2025-11-24 20:28:31.516055047 +0000 UTC m=+0.134144978 container init 37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:28:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:31.522+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:31.522+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:31 compute-0 podman[280307]: 2025-11-24 20:28:31.530015413 +0000 UTC m=+0.148105314 container start 37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:28:31 compute-0 podman[280307]: 2025-11-24 20:28:31.536309398 +0000 UTC m=+0.154399329 container attach 37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:28:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:32.495+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:32.529+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]: {
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "osd_id": 2,
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "type": "bluestore"
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:     },
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "osd_id": 1,
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "type": "bluestore"
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:     },
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "osd_id": 0,
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:         "type": "bluestore"
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]:     }
Nov 24 20:28:32 compute-0 vigilant_mayer[280323]: }
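
The JSON printed by the short-lived vigilant_mayer container inventories this host's three BlueStore OSDs (osd.0-2 on /dev/mapper/ceph_vg0-2 logical volumes, all under fsid 05e060a3-406b-57f0-89d2-ec35f5b09305), keyed by OSD UUID. The shape matches ceph-volume raw list output, though the log never shows the command itself (my inference). Saved to a file, it tabulates in a few lines:

    import json

    # "osd_list.json" is a hypothetical capture of the JSON block above.
    with open("osd_list.json") as f:
        osds = json.load(f)
    for _uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  ({info['type']})")
    # -> osd.0  /dev/mapper/ceph_vg0-ceph_lv0  (bluestore), and so on
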
Nov 24 20:28:32 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:32 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:32 compute-0 systemd[1]: libpod-37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795.scope: Deactivated successfully.
Nov 24 20:28:32 compute-0 systemd[1]: libpod-37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795.scope: Consumed 1.084s CPU time.
Nov 24 20:28:32 compute-0 podman[280307]: 2025-11-24 20:28:32.61262495 +0000 UTC m=+1.230714921 container died 37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8446b56156fc53d80cecfa8f70f3e860b4efa6f4942cbd72a975dd68e644e2c-merged.mount: Deactivated successfully.
Nov 24 20:28:33 compute-0 podman[280307]: 2025-11-24 20:28:33.427176322 +0000 UTC m=+2.045266243 container remove 37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:28:33 compute-0 systemd[1]: libpod-conmon-37c77a94685ae9418d36823b61d99d5bb088e2195f86f2d26be53e972f760795.scope: Deactivated successfully.
Nov 24 20:28:33 compute-0 sudo[280202]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:33.472+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:28:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:33.503+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:28:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:28:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:28:33 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 826fb50c-aa65-4d03-b329-10935f36d420 does not exist
Nov 24 20:28:33 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bac3e7bb-c1cc-41e8-87b1-1cddc1322aa5 does not exist
Nov 24 20:28:33 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:33 compute-0 ceph-mon[75677]: pgmap v1362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:33 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:28:33 compute-0 sudo[280370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:28:33 compute-0 sudo[280370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:33 compute-0 sudo[280370]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:33 compute-0 sudo[280395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:28:33 compute-0 sudo[280395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:28:33 compute-0 sudo[280395]: pam_unix(sudo:session): session closed for user root
Nov 24 20:28:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:28:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6345 writes, 27K keys, 6345 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6345 writes, 1165 syncs, 5.45 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 940 writes, 3540 keys, 940 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s
                                           Interval WAL: 940 writes, 393 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
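
The derived figures in this DB Stats dump follow directly from its raw counters, which makes a quick sanity check when reading such dumps:

    # Counters copied from the "DB Stats" dump above.
    print(f"{6345 / 1165:.2f} writes per sync (cumulative)")  # -> 5.45
    print(f"{940 / 393:.2f} writes per sync (interval)")      # -> 2.39
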
Nov 24 20:28:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:34.483+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:34.486+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:28:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
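
The autoscaler arithmetic above is reproducible: each pg target is the pool's usage ratio times its bias times the cluster PG budget. With the 3 OSDs listed earlier and the default mon_target_pg_per_osd of 100 (an assumption; the setting is not shown in this log) that budget is 300, and the 64411926528 in every effective_target_ratio line is the same ~60 GiB raw capacity seen in the pgmap lines. Checking three of the logged values:

    # usage ratio, bias, and logged pg target, copied from the lines above.
    cases = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.0008637525843263658, 1.0, 0.25912577529790976),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    budget = 3 * 100  # 3 OSDs x assumed mon_target_pg_per_osd = 100
    for pool, (ratio, bias, logged) in cases.items():
        assert abs(ratio * bias * budget - logged) < 1e-12, pool
    print("all pg targets reproduced")

The fractional targets are then quantized, and the "(current N)" annotations show every pool staying at its existing pg_num.
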
Nov 24 20:28:34 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:34 compute-0 ceph-mon[75677]: pgmap v1363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:28:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:35.518+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:35.528+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:35 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:36.542+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:36.547+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:36 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:36 compute-0 ceph-mon[75677]: pgmap v1364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.232704) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016117232835, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 391, "num_deletes": 255, "total_data_size": 222679, "memory_usage": 230032, "flush_reason": "Manual Compaction"}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016117289372, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 220352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38429, "largest_seqno": 38819, "table_properties": {"data_size": 217996, "index_size": 456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6070, "raw_average_key_size": 18, "raw_value_size": 213118, "raw_average_value_size": 653, "num_data_blocks": 19, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016108, "oldest_key_time": 1764016108, "file_creation_time": 1764016117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 56751 microseconds, and 3055 cpu microseconds.
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.289469) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 220352 bytes OK
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.289512) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.330777) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.330812) EVENT_LOG_v1 {"time_micros": 1764016117330800, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.330846) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 220095, prev total WAL file size 220095, number of live WAL files 2.
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.331568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353132' seq:72057594037927935, type:22 .. '6C6F676D0031373633' seq:0, type:0; will stop at (end)
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(215KB)], [77(9323KB)]
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016117331710, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9767526, "oldest_snapshot_seqno": -1}
Nov 24 20:28:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:37.516+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:37.585+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 9592 keys, 9573705 bytes, temperature: kUnknown
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016117776764, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9573705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9517813, "index_size": 30820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 259445, "raw_average_key_size": 27, "raw_value_size": 9350299, "raw_average_value_size": 974, "num_data_blocks": 1177, "num_entries": 9592, "num_filter_entries": 9592, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.777152) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9573705 bytes
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.790812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 21.9 rd, 21.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(87.8) write-amplify(43.4) OK, records in: 10112, records dropped: 520 output_compression: NoCompression
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.790841) EVENT_LOG_v1 {"time_micros": 1764016117790827, "job": 44, "event": "compaction_finished", "compaction_time_micros": 445168, "compaction_time_cpu_micros": 52776, "output_level": 6, "num_output_files": 1, "total_output_size": 9573705, "num_input_records": 10112, "num_output_records": 9592, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016117791043, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016117793963, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.331456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.794085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.794096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.794101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.794105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:28:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:28:37.794110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
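
The amplification figures in the JOB 44 summary above also recompute cleanly from the event log: the job read a 220352-byte L0 table plus the existing L6 table (input_data_size 9767526) and wrote 9573705 bytes back to L6:

    # Figures copied from the JOB 44 compaction events above.
    l0_input    = 220352     # flushed table #79
    total_input = 9767526    # "input_data_size" in compaction_started
    output      = 9573705    # "total_output_size" in compaction_finished
    print(f"write-amplify      {output / l0_input:.1f}")                  # -> 43.4
    print(f"read-write-amplify {(total_input + output) / l0_input:.1f}")  # -> 87.8

High amplification is expected here: a tiny monitor-store flush forced a rewrite of the ~9 MiB L6 file, exactly the in(0.2, 9.1) / out(9.1) breakdown the summary reports.
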
Nov 24 20:28:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:38 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:38 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:38.529+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:38.605+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:38 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:28:38.625 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:28:38 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:28:38.627 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:28:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:39 compute-0 ceph-mon[75677]: pgmap v1365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:39 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:39.486+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:39.580+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 20:28:39 compute-0 sshd-session[280420]: Invalid user test from 80.94.95.116 port 38162
Nov 24 20:28:39 compute-0 sshd-session[280420]: Connection closed by invalid user test 80.94.95.116 port 38162 [preauth]
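
Unrelated to the Ceph traffic, the two sshd-session lines above record an external probe: a login attempt for the non-existent user test from 80.94.95.116 that disconnected before authenticating. In a journal this busy, such probes are easiest to tally with a small filter; a throwaway sketch over a hypothetical capture file:

    import re
    from collections import Counter

    # "compute-0.log" is a hypothetical journal capture like this excerpt.
    hits = Counter()
    pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    with open("compute-0.log") as f:
        for line in f:
            m = pat.search(line)
            if m:
                hits[m.groups()] += 1
    for (user, ip), n in hits.most_common():
        print(f"{n:4d}  {user} @ {ip}")
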
Nov 24 20:28:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:40 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:40.488+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:28:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:28:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:28:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:28:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:28:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:40.599+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:41 compute-0 ceph-mon[75677]: pgmap v1366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:41 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:41.441+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:41.589+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
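
The oldest-blocked counter in these SLOW_OPS updates (2227 s at 20:28:32, 2232 s at 20:28:37, 2242 s here) advances roughly in step with wall-clock time, so the oldest op is pinned rather than slowly draining. One way to pull that progression out of a saved journal excerpt, sketched against a hypothetical capture file:

    import re

    # "compute-0.log" is a hypothetical file of journal lines like those above.
    pattern = re.compile(r"(\d+) slow ops, oldest one blocked for (\d+) sec")
    with open("compute-0.log") as f:
        for line in f:
            m = pattern.search(line)
            if m:
                ops, blocked = map(int, m.groups())
                print(f"{line[:15]}  ops={ops}  oldest_blocked={blocked}s")
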
Nov 24 20:28:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:42 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:42.429+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:42.602+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:43 compute-0 ceph-mon[75677]: pgmap v1367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:43 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:28:43 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:43.422+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:43 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:28:43.629 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:28:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:43.634+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:44 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:44.468+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:44.587+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:44 compute-0 podman[280422]: 2025-11-24 20:28:44.830785541 +0000 UTC m=+0.060102187 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 24 20:28:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:45 compute-0 ceph-mon[75677]: pgmap v1368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:45 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:45.425+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:45.561+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:46 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:46.403+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:46.610+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:47 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:47 compute-0 ceph-mon[75677]: pgmap v1369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:47 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:47.444+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:47.649+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:48 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:48.448+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:48.613+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:49 compute-0 ceph-mon[75677]: pgmap v1370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:49 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:49.498+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:49.604+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:50 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:50.512+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:50.599+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:51 compute-0 ceph-mon[75677]: pgmap v1371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:51 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:51.549+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:51.600+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2247 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:52 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:52 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2247 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:52.502+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:52.564+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:53 compute-0 ceph-mon[75677]: pgmap v1372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:53 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:53.458+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:53.522+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:28:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:28:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:28:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:28:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:28:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:28:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:54 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:54 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:54.443+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:54.500+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:55 compute-0 ceph-mon[75677]: pgmap v1373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:55 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:55.492+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:55.539+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:55 compute-0 podman[280442]: 2025-11-24 20:28:55.825453519 +0000 UTC m=+0.055161398 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 20:28:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:56.467+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:56 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:56.560+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2252 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:28:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:57.426+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:57 compute-0 ceph-mon[75677]: pgmap v1374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:57 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:57 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2252 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:28:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:57.564+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:58.429+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:58 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:58.612+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:28:59.392+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:28:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:59 compute-0 ceph-mon[75677]: pgmap v1375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:28:59 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:28:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:28:59.642+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:28:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:28:59 compute-0 podman[280463]: 2025-11-24 20:28:59.882341221 +0000 UTC m=+0.117596674 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 20:29:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:00.379+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:00 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:00.651+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:01.345+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:01 compute-0 ceph-mon[75677]: pgmap v1376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:01 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:01.677+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2257 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:02.377+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:02 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:02 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2257 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:02.725+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:03.397+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:03 compute-0 ceph-mon[75677]: pgmap v1377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:03 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:03.734+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:04.378+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:04 compute-0 ceph-mon[75677]: pgmap v1378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:04 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:04.728+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:05.425+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:05 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:05.734+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:06.391+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:06 compute-0 ceph-mon[75677]: pgmap v1379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:06 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:06.770+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:07.341+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
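[annotation] The pgmap lines show the data side is fine: 303 of 305 PGs active+clean and the 60 GiB of raw space almost entirely free. The two PGs flagged laggy are very likely the two serving the stalled ops (2.11 and 9.4, primaries osd.0 and osd.1). A sketch for identifying them, assuming the pg dump JSON nests stats under pg_map/pg_stats as in recent releases:

    # Sketch: find the PGs behind "2 active+clean+laggy".
    import json
    import subprocess

    dump = json.loads(subprocess.check_output(
        ["ceph", "pg", "dump", "--format=json"]))
    for pg in dump.get("pg_map", dump).get("pg_stats", []):
        if "laggy" in pg["state"]:
            print(pg["pgid"], pg["state"], "acting:", pg["acting"])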
Nov 24 20:29:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:07 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:07 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:07.788+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:08.363+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:08 compute-0 ceph-mon[75677]: pgmap v1380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:08 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:08.833+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:09.374+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:29:09.385 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:29:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:29:09.386 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:29:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:29:09.386 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
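[annotation] The three ovn_metadata_agent lines are oslo.concurrency's standard trace around a synchronized section: neutron's ProcessMonitor takes the _check_child_processes lock, checks its spawned children (the metadata haproxy instances), and releases within a millisecond. The pattern is just a named decorator; a minimal sketch:

    # Sketch of the lock pattern producing the Acquiring/acquired/released
    # DEBUG lines above (oslo.concurrency emits them from lockutils.py).
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # body elided: verify monitored child processes, respawn if needed
        pass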
Nov 24 20:29:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:09.807+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:09 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:10.336+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:10.770+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:11 compute-0 ceph-mon[75677]: pgmap v1381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:11 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:11.341+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:11.770+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2272 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
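[annotation] _set_new_cache_sizes is the mon's periodic cache autotuner; the three allocations (inc, full, kv) partition a roughly 0.95 GiB budget, and in stock Ceph that budget is derived from mon_memory_target. Reading the target back is one config call; treat the tie to mon_memory_target as an assumption about this deployment:

    # Sketch: read the memory target the mon autotuner works against.
    import subprocess

    target = subprocess.check_output(
        ["ceph", "config", "get", "mon", "mon_memory_target"])
    print(target.decode().strip(), "bytes")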
Nov 24 20:29:12 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:12.358+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:12.774+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:13.320+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:13 compute-0 ceph-mon[75677]: pgmap v1382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:13 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:13 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2272 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:13.773+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:14 compute-0 nova_compute[257476]: 2025-11-24 20:29:14.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:14.314+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:14 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:14.786+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:15 compute-0 nova_compute[257476]: 2025-11-24 20:29:15.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:15 compute-0 nova_compute[257476]: 2025-11-24 20:29:15.151 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 20:29:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:15.360+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:15.833+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:15 compute-0 podman[280490]: 2025-11-24 20:29:15.844054119 +0000 UTC m=+0.070219062 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
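[annotation] The podman event above is a passing healthcheck for ovn_metadata_agent: health_status=healthy with health_failing_streak=0, running the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent per the config_data volumes list. The same check can be run by hand; exit status 0 means healthy:

    # Sketch: re-run the container healthcheck podman just recorded.
    import subprocess

    rc = subprocess.call(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if rc == 0 else "unhealthy (exit %d)" % rc)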
Nov 24 20:29:16 compute-0 ceph-mon[75677]: pgmap v1383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:16 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:16 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:16 compute-0 nova_compute[257476]: 2025-11-24 20:29:16.151 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:16.362+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:29:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327218896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:29:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:29:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327218896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
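[annotation] The audit channel shows who is driving the mons during the stall: client.openstack at 192.168.122.10 polling df plus a quota check on the volumes pool, the usual periodic capacity probe from the OpenStack storage services. The quota half as a sketch, assuming the client.openstack keyring and default ceph.conf are in place (zeroes in the output mean no quota is set):

    # Sketch: the same pool-quota query dispatched above.
    import json
    import subprocess

    quota = json.loads(subprocess.check_output(
        ["ceph", "--id", "openstack", "osd", "pool", "get-quota", "volumes",
         "--format=json"]))
    print(quota)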
Nov 24 20:29:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:16.850+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:17 compute-0 ceph-mon[75677]: pgmap v1384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:17 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/327218896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:29:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/327218896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:29:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:17.368+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:17.847+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:18 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2277 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:18 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:18.406+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:18.841+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:19 compute-0 ceph-mon[75677]: pgmap v1385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:19 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:19 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2277 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:19 compute-0 nova_compute[257476]: 2025-11-24 20:29:19.150 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:19.422+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:19.837+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:20 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:20 compute-0 ceph-mon[75677]: pgmap v1386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:20.423+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:20.820+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.146 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.170 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.194 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.194 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.195 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.195 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.195 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:29:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:21 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:21.461+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:29:21 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674863735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.709 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
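[annotation] Here nova's resource audit shells out to the exact command logged, ceph df as client.openstack, and it completes in about half a second even while the slow ops persist (monitor-side reads are unaffected). A sketch of the same probe, assuming the stats block of the JSON keeps the total_bytes/total_avail_bytes keys of recent releases:

    # Sketch: the capacity probe nova_compute runs above.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("avail: %.1f GiB of %.1f GiB raw"
          % (stats["total_avail_bytes"] / 1024**3,
             stats["total_bytes"] / 1024**3))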
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.785 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.785 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.790 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.791 257491 DEBUG nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 20:29:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:21.823+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.994 257491 WARNING nova.virt.libvirt.driver [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.995 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4891MB free_disk=59.936466217041016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.995 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:29:21 compute-0 nova_compute[257476]: 2025-11-24 20:29:21.996 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.075 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 43bc955c-77ee-42d8-98e2-84163217d1aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.076 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 5bc0dcb2-bec5-4d33-a8c8-42baca81a650 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.076 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 4e9758ff-13d1-447b-9a2a-d6ae9f807143 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.076 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance db8c22d1-e16d-49f8-b4a5-ba8e87849ea3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.076 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Instance 664ca0de-5d04-41c8-ada6-32391f471c42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.076 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.077 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
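[annotation] The final resource view is straight arithmetic over the five placement allocations listed just above plus the 512 MB memory reservation visible in the inventory a few lines below: 5 x 128 MB + 512 MB = 1152 MB used_ram, 5 x 1 GB = 5 GB used_disk, and 5 of 8 vCPUs used. A check:

    # Sketch: reproduce nova's "Final resource view" numbers from the
    # allocations logged above (reserved RAM comes from the inventory line).
    allocations = [{"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1}] * 5
    reserved_ram_mb = 512

    used_ram = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    print(used_ram, used_disk, used_vcpus)  # -> 1152 5 5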
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.176 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:29:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:22 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:22 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1674863735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:29:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:22.473+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:29:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3821369032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.645 257491 DEBUG oslo_concurrency.processutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.649 257491 DEBUG nova.compute.provider_tree [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.661 257491 DEBUG nova.scheduler.client.report [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
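[annotation] The inventory line also fixes what the scheduler will accept: placement computes capacity as (total - reserved) * allocation_ratio, so this host can hold 32 vCPUs (8 x 4.0), 7167 MB of RAM, and about 52 GB of disk ((59 - 1) x 0.9). In code:

    # Sketch: schedulable capacity per resource class, using placement's
    # (total - reserved) * allocation_ratio rule on the inventory above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2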
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.662 257491 DEBUG nova.compute.resource_tracker [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 20:29:22 compute-0 nova_compute[257476]: 2025-11-24 20:29:22.662 257491 DEBUG oslo_concurrency.lockutils [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:29:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:22.846+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:23 compute-0 ceph-mon[75677]: pgmap v1387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:23 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:23 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3821369032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:29:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:23.450+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.643 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.644 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.644 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.666 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 43bc955c-77ee-42d8-98e2-84163217d1aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.667 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 5bc0dcb2-bec5-4d33-a8c8-42baca81a650] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.667 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 4e9758ff-13d1-447b-9a2a-d6ae9f807143] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.667 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: db8c22d1-e16d-49f8-b4a5-ba8e87849ea3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.667 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] [instance: 664ca0de-5d04-41c8-ada6-32391f471c42] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.668 257491 DEBUG nova.compute.manager [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.668 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:23 compute-0 nova_compute[257476]: 2025-11-24 20:29:23.669 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:23.889+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:29:24
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr', 'volumes']
Nov 24 20:29:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:29:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:24.459+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:24 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:24 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:24.850+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:25 compute-0 nova_compute[257476]: 2025-11-24 20:29:25.171 257491 DEBUG oslo_service.periodic_task [None req-df517b26-d704-494f-9c1b-d690779ee636 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 20:29:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:25.475+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:25 compute-0 ceph-mon[75677]: pgmap v1388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:25 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:25.812+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 24 20:29:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:26.518+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:26 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:26.829+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:26 compute-0 podman[280554]: 2025-11-24 20:29:26.865839717 +0000 UTC m=+0.099776626 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:29:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2282 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:27.548+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:27 compute-0 ceph-mon[75677]: pgmap v1389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 24 20:29:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:27 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:27 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2282 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:27.839+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:28.573+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:28 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:28.877+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:29.612+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:29 compute-0 ceph-mon[75677]: pgmap v1390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:29 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:29.884+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:30.591+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:30 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:30.853+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:30 compute-0 podman[280574]: 2025-11-24 20:29:30.922801011 +0000 UTC m=+0.149485749 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 20:29:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:31.642+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:31.852+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:32 compute-0 ceph-mon[75677]: pgmap v1391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:32 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2292 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:32.605+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:32.824+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:33 compute-0 nova_compute[257476]: 2025-11-24 20:29:33.282 257491 DEBUG oslo_concurrency.lockutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Acquiring lock "8d9085e2-9df7-4f22-83ec-889f2d18edc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:29:33 compute-0 nova_compute[257476]: 2025-11-24 20:29:33.282 257491 DEBUG oslo_concurrency.lockutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Lock "8d9085e2-9df7-4f22-83ec-889f2d18edc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:29:33 compute-0 nova_compute[257476]: 2025-11-24 20:29:33.330 257491 DEBUG nova.compute.manager [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 20:29:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:33 compute-0 ceph-mon[75677]: pgmap v1392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:33 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:33 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2292 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:33.578+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:33.837+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:33 compute-0 sudo[280599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:33 compute-0 sudo[280599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:33 compute-0 sudo[280599]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:33 compute-0 sudo[280624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:29:33 compute-0 sudo[280624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:33 compute-0 sudo[280624]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:34 compute-0 sudo[280649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:34 compute-0 sudo[280649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:34 compute-0 sudo[280649]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:34 compute-0 sudo[280674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 20:29:34 compute-0 sudo[280674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:34 compute-0 nova_compute[257476]: 2025-11-24 20:29:34.222 257491 DEBUG oslo_concurrency.lockutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:29:34 compute-0 nova_compute[257476]: 2025-11-24 20:29:34.222 257491 DEBUG oslo_concurrency.lockutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:29:34 compute-0 nova_compute[257476]: 2025-11-24 20:29:34.228 257491 DEBUG nova.virt.hardware [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 20:29:34 compute-0 nova_compute[257476]: 2025-11-24 20:29:34.228 257491 INFO nova.compute.claims [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Claim successful on node compute-0.ctlplane.example.com
Nov 24 20:29:34 compute-0 sudo[280674]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:29:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:34 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:34.567+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:29:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:29:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:34.886+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:34 compute-0 sudo[280721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:34 compute-0 sudo[280721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:34 compute-0 sudo[280721]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:35 compute-0 sudo[280746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:29:35 compute-0 sudo[280746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:35 compute-0 sudo[280746]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:35 compute-0 sudo[280771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:35 compute-0 sudo[280771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:35 compute-0 sudo[280771]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:35 compute-0 sudo[280796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:29:35 compute-0 sudo[280796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:35 compute-0 nova_compute[257476]: 2025-11-24 20:29:35.371 257491 DEBUG oslo_concurrency.processutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 20:29:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:35.521+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:35 compute-0 ceph-mon[75677]: pgmap v1393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:35 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:35 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 20:29:35 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1943581026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:29:35 compute-0 sudo[280796]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:35 compute-0 nova_compute[257476]: 2025-11-24 20:29:35.911 257491 DEBUG oslo_concurrency.processutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 20:29:35 compute-0 nova_compute[257476]: 2025-11-24 20:29:35.918 257491 DEBUG nova.compute.provider_tree [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Inventory has not changed in ProviderTree for provider: 36172ea5-11d9-49c4-91b9-fe09a4a54b66 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 20:29:35 compute-0 nova_compute[257476]: 2025-11-24 20:29:35.929 257491 DEBUG nova.scheduler.client.report [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Inventory has not changed for provider 36172ea5-11d9-49c4-91b9-fe09a4a54b66 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 20:29:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:35.934+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:29:35 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:29:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:29:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:29:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:29:35 compute-0 nova_compute[257476]: 2025-11-24 20:29:35.957 257491 DEBUG oslo_concurrency.lockutils [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:29:35 compute-0 nova_compute[257476]: 2025-11-24 20:29:35.957 257491 DEBUG nova.compute.manager [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.025 257491 DEBUG nova.compute.manager [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.025 257491 DEBUG nova.network.neutron [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.076 257491 INFO nova.virt.libvirt.driver [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 20:29:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4795334c-1cc0-4f3e-8219-d1b3b7f2b0a5 does not exist
Nov 24 20:29:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 975ba344-acdd-4831-ab07-cf0d126ebbeb does not exist
Nov 24 20:29:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 714030a7-2544-4744-97a5-ba24ee40658e does not exist
Nov 24 20:29:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:29:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:29:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:29:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.108 257491 DEBUG nova.compute.manager [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 20:29:36 compute-0 sudo[280876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:36 compute-0 sudo[280876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:36 compute-0 sudo[280876]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.244 257491 DEBUG nova.compute.manager [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.246 257491 DEBUG nova.virt.libvirt.driver [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 20:29:36 compute-0 nova_compute[257476]: 2025-11-24 20:29:36.247 257491 INFO nova.virt.libvirt.driver [None req-ea3a1a74-53c3-43d1-bbed-a25f437c61a2 c54008937433418dbbdfd6d93e0293d5 4f8357d25f464dd3bc888f393e6b1d39 - - default default] [instance: 8d9085e2-9df7-4f22-83ec-889f2d18edc9] Creating image(s)
Nov 24 20:29:36 compute-0 sudo[280901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:29:36 compute-0 sudo[280901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:36 compute-0 sudo[280901]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:36 compute-0 sudo[280942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:36 compute-0 sudo[280942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:36 compute-0 sudo[280942]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:36 compute-0 sudo[280967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:29:36 compute-0 sudo[280967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:36.554+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1943581026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 12 ])
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:29:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:29:36 compute-0 podman[281032]: 2025-11-24 20:29:36.937963104 +0000 UTC m=+0.060153808 container create 4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:29:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:36.953+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:36 compute-0 systemd[1]: Started libpod-conmon-4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa.scope.
Nov 24 20:29:36 compute-0 podman[281032]: 2025-11-24 20:29:36.903505611 +0000 UTC m=+0.025696355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:29:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:29:37 compute-0 podman[281032]: 2025-11-24 20:29:37.059591783 +0000 UTC m=+0.181782487 container init 4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:29:37 compute-0 podman[281032]: 2025-11-24 20:29:37.070748665 +0000 UTC m=+0.192939359 container start 4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:37 compute-0 podman[281032]: 2025-11-24 20:29:37.080713346 +0000 UTC m=+0.202904050 container attach 4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:37 compute-0 youthful_hofstadter[281048]: 167 167
Nov 24 20:29:37 compute-0 systemd[1]: libpod-4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa.scope: Deactivated successfully.
Nov 24 20:29:37 compute-0 conmon[281048]: conmon 4b7fd0b70bac82bdaaa6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa.scope/container/memory.events
Nov 24 20:29:37 compute-0 podman[281032]: 2025-11-24 20:29:37.083020016 +0000 UTC m=+0.205210720 container died 4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 20:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-309f77533b493bb66e15654293d8ed3f56ffec3a278d9bc6257f2861020226de-merged.mount: Deactivated successfully.
Nov 24 20:29:37 compute-0 podman[281032]: 2025-11-24 20:29:37.167200373 +0000 UTC m=+0.289391097 container remove 4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:29:37 compute-0 systemd[1]: libpod-conmon-4b7fd0b70bac82bdaaa67d06345e051ee4065669995590a55464b55a7b08c3aa.scope: Deactivated successfully.
Nov 24 20:29:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:37 compute-0 podman[281071]: 2025-11-24 20:29:37.43143723 +0000 UTC m=+0.084617320 container create 97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:29:37 compute-0 podman[281071]: 2025-11-24 20:29:37.388119854 +0000 UTC m=+0.041300004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:29:37 compute-0 systemd[1]: Started libpod-conmon-97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a.scope.
Nov 24 20:29:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:37.522+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e2dc2877913f0fddf31d358ad8b5191e55b750ba17ea295adb968c0dfb66a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e2dc2877913f0fddf31d358ad8b5191e55b750ba17ea295adb968c0dfb66a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e2dc2877913f0fddf31d358ad8b5191e55b750ba17ea295adb968c0dfb66a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e2dc2877913f0fddf31d358ad8b5191e55b750ba17ea295adb968c0dfb66a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e2dc2877913f0fddf31d358ad8b5191e55b750ba17ea295adb968c0dfb66a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:37 compute-0 podman[281071]: 2025-11-24 20:29:37.570638928 +0000 UTC m=+0.223819018 container init 97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:29:37 compute-0 podman[281071]: 2025-11-24 20:29:37.588890717 +0000 UTC m=+0.242070807 container start 97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:29:37 compute-0 podman[281071]: 2025-11-24 20:29:37.59855453 +0000 UTC m=+0.251734670 container attach 97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:29:37 compute-0 ceph-mon[75677]: pgmap v1394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:29:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:37 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:29:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:37.974+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:38.549+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:38 compute-0 intelligent_montalcini[281088]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:29:38 compute-0 intelligent_montalcini[281088]: --> relative data size: 1.0
Nov 24 20:29:38 compute-0 intelligent_montalcini[281088]: --> All data devices are unavailable
Nov 24 20:29:38 compute-0 systemd[1]: libpod-97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a.scope: Deactivated successfully.
Nov 24 20:29:38 compute-0 systemd[1]: libpod-97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a.scope: Consumed 1.148s CPU time.
Nov 24 20:29:38 compute-0 conmon[281088]: conmon 97295d96eefef6176d3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a.scope/container/memory.events
Nov 24 20:29:38 compute-0 podman[281071]: 2025-11-24 20:29:38.776967229 +0000 UTC m=+1.430147319 container died 97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:29:38 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 31 slow ops, oldest one blocked for 2297 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:38 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e2dc2877913f0fddf31d358ad8b5191e55b750ba17ea295adb968c0dfb66a4-merged.mount: Deactivated successfully.
Nov 24 20:29:38 compute-0 podman[281071]: 2025-11-24 20:29:38.925955215 +0000 UTC m=+1.579135295 container remove 97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:29:38 compute-0 systemd[1]: libpod-conmon-97295d96eefef6176d3cf077b09c1be1ab60a2581f41230c879162237709292a.scope: Deactivated successfully.
Nov 24 20:29:38 compute-0 sudo[280967]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:39.002+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:39 compute-0 sudo[281129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:39 compute-0 sudo[281129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:39 compute-0 sudo[281129]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:39 compute-0 sudo[281154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:29:39 compute-0 sudo[281154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:39 compute-0 sudo[281154]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:39 compute-0 sudo[281179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:39 compute-0 sudo[281179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:39 compute-0 sudo[281179]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:39 compute-0 sudo[281204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:29:39 compute-0 sudo[281204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:39.551+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:39 compute-0 ceph-mon[75677]: pgmap v1395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:29:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:39 compute-0 ceph-mon[75677]: Health check update: 31 slow ops, oldest one blocked for 2297 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:39 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:39 compute-0 podman[281269]: 2025-11-24 20:29:39.841539264 +0000 UTC m=+0.065813076 container create 27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:29:39 compute-0 systemd[1]: Started libpod-conmon-27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7.scope.
Nov 24 20:29:39 compute-0 podman[281269]: 2025-11-24 20:29:39.815145922 +0000 UTC m=+0.039419794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:29:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:29:39 compute-0 podman[281269]: 2025-11-24 20:29:39.944297498 +0000 UTC m=+0.168571370 container init 27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_torvalds, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:29:39 compute-0 podman[281269]: 2025-11-24 20:29:39.956738454 +0000 UTC m=+0.181012266 container start 27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:39 compute-0 podman[281269]: 2025-11-24 20:29:39.960553724 +0000 UTC m=+0.184827546 container attach 27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:29:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:39.960+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:39 compute-0 agitated_torvalds[281285]: 167 167
Nov 24 20:29:39 compute-0 systemd[1]: libpod-27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7.scope: Deactivated successfully.
Nov 24 20:29:39 compute-0 podman[281269]: 2025-11-24 20:29:39.965381601 +0000 UTC m=+0.189655423 container died 27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e03c233599adf2837f40ca0efb4897a9f1fa8051680a68fd5c56214d7d647716-merged.mount: Deactivated successfully.
Nov 24 20:29:40 compute-0 podman[281269]: 2025-11-24 20:29:40.01573436 +0000 UTC m=+0.240008182 container remove 27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:29:40 compute-0 systemd[1]: libpod-conmon-27f6a5fea4ea42c47086b85bf8e37662719243a56560fe6c49ae1cd2f89600d7.scope: Deactivated successfully.
Nov 24 20:29:40 compute-0 podman[281309]: 2025-11-24 20:29:40.214760187 +0000 UTC m=+0.052958029 container create a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:29:40 compute-0 systemd[1]: Started libpod-conmon-a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f.scope.
Nov 24 20:29:40 compute-0 podman[281309]: 2025-11-24 20:29:40.192152995 +0000 UTC m=+0.030350847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:29:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634c76772b2c9805175f99839bcfc47aef90c71434ea08b783f062abe920c269/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634c76772b2c9805175f99839bcfc47aef90c71434ea08b783f062abe920c269/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634c76772b2c9805175f99839bcfc47aef90c71434ea08b783f062abe920c269/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634c76772b2c9805175f99839bcfc47aef90c71434ea08b783f062abe920c269/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:40 compute-0 podman[281309]: 2025-11-24 20:29:40.335341998 +0000 UTC m=+0.173539850 container init a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:29:40 compute-0 podman[281309]: 2025-11-24 20:29:40.349732785 +0000 UTC m=+0.187930627 container start a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:29:40 compute-0 podman[281309]: 2025-11-24 20:29:40.35637139 +0000 UTC m=+0.194569242 container attach a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 20:29:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:29:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:29:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:29:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:29:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:29:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:40.596+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:40 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:40.991+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:41 compute-0 brave_mclean[281325]: {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:     "0": [
Nov 24 20:29:41 compute-0 brave_mclean[281325]:         {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "devices": [
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "/dev/loop3"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             ],
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_name": "ceph_lv0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_size": "21470642176",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "name": "ceph_lv0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "tags": {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cluster_name": "ceph",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.crush_device_class": "",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.encrypted": "0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osd_id": "0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.type": "block",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.vdo": "0"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             },
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "type": "block",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "vg_name": "ceph_vg0"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:         }
Nov 24 20:29:41 compute-0 brave_mclean[281325]:     ],
Nov 24 20:29:41 compute-0 brave_mclean[281325]:     "1": [
Nov 24 20:29:41 compute-0 brave_mclean[281325]:         {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "devices": [
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "/dev/loop4"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             ],
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_name": "ceph_lv1",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_size": "21470642176",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "name": "ceph_lv1",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "tags": {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cluster_name": "ceph",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.crush_device_class": "",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.encrypted": "0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osd_id": "1",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.type": "block",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.vdo": "0"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             },
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "type": "block",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "vg_name": "ceph_vg1"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:         }
Nov 24 20:29:41 compute-0 brave_mclean[281325]:     ],
Nov 24 20:29:41 compute-0 brave_mclean[281325]:     "2": [
Nov 24 20:29:41 compute-0 brave_mclean[281325]:         {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "devices": [
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "/dev/loop5"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             ],
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_name": "ceph_lv2",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_size": "21470642176",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "name": "ceph_lv2",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "tags": {
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.cluster_name": "ceph",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.crush_device_class": "",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.encrypted": "0",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osd_id": "2",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.type": "block",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:                 "ceph.vdo": "0"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             },
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "type": "block",
Nov 24 20:29:41 compute-0 brave_mclean[281325]:             "vg_name": "ceph_vg2"
Nov 24 20:29:41 compute-0 brave_mclean[281325]:         }
Nov 24 20:29:41 compute-0 brave_mclean[281325]:     ]
Nov 24 20:29:41 compute-0 brave_mclean[281325]: }
Nov 24 20:29:41 compute-0 systemd[1]: libpod-a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f.scope: Deactivated successfully.
Nov 24 20:29:41 compute-0 podman[281309]: 2025-11-24 20:29:41.125216143 +0000 UTC m=+0.963414025 container died a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-634c76772b2c9805175f99839bcfc47aef90c71434ea08b783f062abe920c269-merged.mount: Deactivated successfully.
Nov 24 20:29:41 compute-0 podman[281309]: 2025-11-24 20:29:41.192505767 +0000 UTC m=+1.030703589 container remove a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 24 20:29:41 compute-0 systemd[1]: libpod-conmon-a3401785cead9e03fca5d285923854943965ecbefa71922702f9d8a5c4b12a1f.scope: Deactivated successfully.
Nov 24 20:29:41 compute-0 sudo[281204]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:41 compute-0 sudo[281348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:41 compute-0 sudo[281348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:41 compute-0 sudo[281348]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:41 compute-0 sudo[281373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:29:41 compute-0 sudo[281373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:41 compute-0 sudo[281373]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:41 compute-0 sudo[281398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:41 compute-0 sudo[281398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:41 compute-0 sudo[281398]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:41.555+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:41 compute-0 sudo[281423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:29:41 compute-0 sudo[281423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:41 compute-0 ceph-mon[75677]: pgmap v1396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:41 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:41.950+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.08648638 +0000 UTC m=+0.086446257 container create 42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_solomon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 24 20:29:42 compute-0 systemd[1]: Started libpod-conmon-42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a.scope.
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.056151875 +0000 UTC m=+0.056111832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.170337918 +0000 UTC m=+0.170297885 container init 42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.18414717 +0000 UTC m=+0.184107047 container start 42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.18911239 +0000 UTC m=+0.189072297 container attach 42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:29:42 compute-0 priceless_solomon[281505]: 167 167
Nov 24 20:29:42 compute-0 systemd[1]: libpod-42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a.scope: Deactivated successfully.
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.19176227 +0000 UTC m=+0.191722147 container died 42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-68b0d7339e89e5066dee02e70ccebaecd3ed320ada4a1b83ccb7dd8ec735932c-merged.mount: Deactivated successfully.
Nov 24 20:29:42 compute-0 podman[281489]: 2025-11-24 20:29:42.232316402 +0000 UTC m=+0.232276269 container remove 42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_solomon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:29:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:42 compute-0 systemd[1]: libpod-conmon-42c408c899ac1b6545525b517a7d55a61786497124eeb868873d1f38eb4f063a.scope: Deactivated successfully.
Nov 24 20:29:42 compute-0 podman[281529]: 2025-11-24 20:29:42.451285322 +0000 UTC m=+0.062600982 container create 1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:29:42 compute-0 podman[281529]: 2025-11-24 20:29:42.428003192 +0000 UTC m=+0.039318892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:29:42 compute-0 systemd[1]: Started libpod-conmon-1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b.scope.
Nov 24 20:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d15607740f3c7e47b46354a42827f25f42b3a4a51d427f0a7e49a7f2ea5eb4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d15607740f3c7e47b46354a42827f25f42b3a4a51d427f0a7e49a7f2ea5eb4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d15607740f3c7e47b46354a42827f25f42b3a4a51d427f0a7e49a7f2ea5eb4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d15607740f3c7e47b46354a42827f25f42b3a4a51d427f0a7e49a7f2ea5eb4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:29:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:42.568+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:42 compute-0 podman[281529]: 2025-11-24 20:29:42.724577446 +0000 UTC m=+0.335893186 container init 1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cerf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:42 compute-0 podman[281529]: 2025-11-24 20:29:42.736004575 +0000 UTC m=+0.347320255 container start 1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:29:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:42.930+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:42 compute-0 podman[281529]: 2025-11-24 20:29:42.990904517 +0000 UTC m=+0.602220187 container attach 1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cerf, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:29:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:42 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:43.571+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:43 compute-0 cranky_cerf[281546]: {
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "osd_id": 2,
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "type": "bluestore"
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:     },
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "osd_id": 1,
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "type": "bluestore"
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:     },
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "osd_id": 0,
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:         "type": "bluestore"
Nov 24 20:29:43 compute-0 cranky_cerf[281546]:     }
Nov 24 20:29:43 compute-0 cranky_cerf[281546]: }
Nov 24 20:29:43 compute-0 systemd[1]: libpod-1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b.scope: Deactivated successfully.
Nov 24 20:29:43 compute-0 systemd[1]: libpod-1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b.scope: Consumed 1.066s CPU time.
Nov 24 20:29:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:43 compute-0 podman[281579]: 2025-11-24 20:29:43.850365716 +0000 UTC m=+0.040956195 container died 1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cerf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:29:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:43.886+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:44 compute-0 ceph-mon[75677]: pgmap v1397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:44 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d15607740f3c7e47b46354a42827f25f42b3a4a51d427f0a7e49a7f2ea5eb4b-merged.mount: Deactivated successfully.
Nov 24 20:29:44 compute-0 podman[281579]: 2025-11-24 20:29:44.156229693 +0000 UTC m=+0.346820072 container remove 1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:29:44 compute-0 systemd[1]: libpod-conmon-1c20fe78e1be8e5c4a3c7249dd66d7f26b49db560ef3976df2684d858f0f230b.scope: Deactivated successfully.
Nov 24 20:29:44 compute-0 sudo[281423]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:29:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:29:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:44 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 623359b2-c88b-4b20-9b7e-556afcc6335f does not exist
Nov 24 20:29:44 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a6ef36d7-329c-4273-9620-04002f23d209 does not exist
Nov 24 20:29:44 compute-0 sudo[281594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:29:44 compute-0 sudo[281594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:44 compute-0 sudo[281594]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:44 compute-0 sudo[281619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:29:44 compute-0 sudo[281619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:29:44 compute-0 sudo[281619]: pam_unix(sudo:session): session closed for user root
Nov 24 20:29:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:44.597+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:44.906+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:45 compute-0 ceph-mon[75677]: pgmap v1398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:45 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:29:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:45.579+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:45.928+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:46 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:46.608+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:46 compute-0 podman[281644]: 2025-11-24 20:29:46.887802145 +0000 UTC m=+0.102545829 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 20:29:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:46.957+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2302 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:47 compute-0 ceph-mon[75677]: pgmap v1399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:47 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:47 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2302 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:47.631+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:47.987+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:48 compute-0 sshd-session[281664]: Invalid user odoo from 182.93.7.194 port 45872
Nov 24 20:29:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:48 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:48 compute-0 sshd-session[281664]: Received disconnect from 182.93.7.194 port 45872:11: Bye Bye [preauth]
Nov 24 20:29:48 compute-0 sshd-session[281664]: Disconnected from invalid user odoo 182.93.7.194 port 45872 [preauth]
Nov 24 20:29:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:48.663+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:48.998+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:49 compute-0 ceph-mon[75677]: pgmap v1400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:49 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:49.685+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:49.971+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:50 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:50.673+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:51.009+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:51 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 20:29:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:51 compute-0 ceph-mon[75677]: pgmap v1401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:51 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:51.644+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:52.005+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2307 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:52 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:52 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2307 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:52.689+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:52.970+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:53 compute-0 ceph-mon[75677]: pgmap v1402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:53 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:53.725+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:53.949+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:54 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:29:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:29:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:29:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:29:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:29:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:29:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:54.755+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:54.997+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:55 compute-0 ceph-mon[75677]: pgmap v1403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:55 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:55.718+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:55.987+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:56 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:56.758+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:56.995+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2312 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:29:57 compute-0 ceph-mon[75677]: pgmap v1404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:57 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:57 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2312 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:29:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:57.799+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:57 compute-0 podman[281667]: 2025-11-24 20:29:57.874103813 +0000 UTC m=+0.108359172 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:29:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:57.978+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:58 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:58.798+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:58.947+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:59 compute-0 ceph-mon[75677]: pgmap v1405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:59 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:29:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:29:59.800+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:29:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:29:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:29:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:29:59.928+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:29:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:00 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:00.784+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:00.949+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:01 compute-0 ceph-mon[75677]: pgmap v1406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:01 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:01.801+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:01 compute-0 podman[281687]: 2025-11-24 20:30:01.909061922 +0000 UTC m=+0.145015933 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:30:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:01.978+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2317 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:02 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:02 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:02 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2317 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:02.792+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:03.008+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:03 compute-0 ceph-mon[75677]: pgmap v1407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:03 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:03.810+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:03.994+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:04 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:04.766+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:04.976+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:05 compute-0 ceph-mon[75677]: pgmap v1408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:05 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:05.760+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:06.017+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:06 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:30:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:06.750+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:07.060+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 27 slow ops, oldest one blocked for 2322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:07 compute-0 ceph-mon[75677]: pgmap v1409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:07 compute-0 ceph-mon[75677]: Health check update: 27 slow ops, oldest one blocked for 2322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:07.750+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:08.082+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:08.730+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:09.093+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:30:09.386 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:30:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:30:09.386 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:30:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:30:09.386 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:30:09 compute-0 ceph-mon[75677]: pgmap v1410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:09.746+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:10.047+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:10.731+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:11.082+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:11.713+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:11 compute-0 ceph-mon[75677]: pgmap v1411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:12.099+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2327 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:12.728+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:12 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2327 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:13.107+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:13.690+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:13 compute-0 ceph-mon[75677]: pgmap v1412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:14.156+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:14.657+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:15.163+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:15.666+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:15 compute-0 ceph-mon[75677]: pgmap v1413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:16.186+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:30:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1901466811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:30:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:30:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1901466811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:30:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:16.681+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1901466811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:30:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1901466811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:30:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:17.214+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:17.659+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:17 compute-0 podman[281713]: 2025-11-24 20:30:17.835478929 +0000 UTC m=+0.062775868 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:30:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:17 compute-0 ceph-mon[75677]: pgmap v1414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:17 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:18.166+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:18.701+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:18 compute-0 ceph-mon[75677]: pgmap v1415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:19.168+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:19.673+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:20.192+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:20.660+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:20 compute-0 ceph-mon[75677]: pgmap v1416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:21.222+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:21.631+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:22.231+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:22.591+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:22 compute-0 ceph-mon[75677]: pgmap v1417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:22 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:23.192+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:23.603+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:24.236+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:30:24
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.meta']
Nov 24 20:30:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:30:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:24.580+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:25 compute-0 ceph-mon[75677]: pgmap v1418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:25.197+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:25.537+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:26.207+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:26.518+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:27 compute-0 ceph-mon[75677]: pgmap v1419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:27.235+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:27.563+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2347 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:28.239+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:28.530+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:28 compute-0 podman[281732]: 2025-11-24 20:30:28.837663941 +0000 UTC m=+0.071186282 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
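[Editor's sketch, not part of the log] The podman health_status event above embeds the whole container definition as a Python-literal config_data={...} dict. A hedged way to recover it programmatically, assuming (as holds for the line above) that the literal is well formed and contains no braces inside its strings:

import ast

def extract_config_data(event_line: str) -> dict:
    """Pull the config_data={...} dict out of a podman health_status event.

    Raises ValueError if the line has no config_data= field or the braces
    do not balance. Brace counting is deliberately naive: it assumes no
    '{' or '}' characters occur inside the quoted strings, which is true
    of the event above but is an assumption, not a podman guarantee.
    """
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(event_line[start : i + 1])
    raise ValueError("unbalanced config_data literal")

Applied to the multipathd event above, the result exposes, for example, config["healthcheck"]["test"] == "/openstack/healthcheck" and the privileged, net=host settings as plain Python values.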
Nov 24 20:30:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:29 compute-0 ceph-mon[75677]: pgmap v1420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:29 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2347 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:29.258+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:29.545+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:30.230+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:30.562+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:31 compute-0 ceph-mon[75677]: pgmap v1421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:31.274+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:31.534+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:32.307+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:32.547+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:32 compute-0 podman[281753]: 2025-11-24 20:30:32.932628289 +0000 UTC m=+0.115749936 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:30:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:33.290+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:33 compute-0 ceph-mon[75677]: pgmap v1422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:33.551+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:34.302+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:34.507+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:30:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
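[Editor's sketch, not part of the log] Each pg_autoscaler pair above logs a pool's share of raw capacity and the resulting PG target before quantization. The logged targets are exactly ratio * bias * 300; the multiplier 300 is inferred here by dividing the logged targets by the logged ratios (it presumably derives from mon_target_pg_per_osd and the cluster's size, which this excerpt does not confirm). A check against three of the lines above, with the numbers copied verbatim:

import math

# (pool, usage ratio, bias, logged pg target) taken from the lines above.
LOGGED = [
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("vms",                0.0008637525843263658, 1.0, 0.25912577529790976),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
]

MULTIPLIER = 300  # inferred: pg_target / (ratio * bias) == 300 for every pool here

for pool, ratio, bias, target in LOGGED:
    reconstructed = ratio * bias * MULTIPLIER
    assert math.isclose(reconstructed, target, rel_tol=1e-9), pool
    print(f"{pool}: {reconstructed:.12g} matches logged target")

The "quantized to" values (1, 16, 32) are then produced by the autoscaler's power-of-two rounding and per-pool minimums, which these lines alone do not fully determine.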
Nov 24 20:30:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:35.298+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:35.462+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:35 compute-0 ceph-mon[75677]: pgmap v1423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:36.287+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:36.445+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2352 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:37.296+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:37.468+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:37 compute-0 ceph-mon[75677]: pgmap v1424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:37 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2352 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:38.325+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:38.444+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:39.304+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:39.451+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:39 compute-0 ceph-mon[75677]: pgmap v1425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:40.301+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:40.457+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:30:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:30:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:30:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:30:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:30:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:41.254+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:41 compute-0 ceph-mon[75677]: pgmap v1426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:41.450+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:42.208+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:42 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:42.451+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:43.168+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:43.417+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:43 compute-0 ceph-mon[75677]: pgmap v1427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:44.184+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:44.427+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:44 compute-0 sudo[281779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:44 compute-0 sudo[281779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:44 compute-0 sudo[281779]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:44 compute-0 sudo[281804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:30:44 compute-0 sudo[281804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:44 compute-0 sudo[281804]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:44 compute-0 sudo[281829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:44 compute-0 sudo[281829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:44 compute-0 sudo[281829]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:44 compute-0 sudo[281854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:30:44 compute-0 sudo[281854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:45.177+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:45 compute-0 sudo[281854]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:45.404+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:30:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1c58f519-457b-4665-822b-b85c771b8094 does not exist
Nov 24 20:30:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 12ee60a9-4aaa-4f47-a1de-47d0962dc2de does not exist
Nov 24 20:30:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7ea85651-7090-4e8f-b8e7-a52b1e7964bf does not exist
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:30:45 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: pgmap v1428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:30:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:30:45 compute-0 sudo[281911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:45 compute-0 sudo[281911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:45 compute-0 sudo[281911]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:45 compute-0 sudo[281936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:30:45 compute-0 sudo[281936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:45 compute-0 sudo[281936]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:45 compute-0 sudo[281961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:45 compute-0 sudo[281961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:45 compute-0 sudo[281961]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:45 compute-0 sudo[281986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:30:45 compute-0 sudo[281986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:46.216+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.262029718 +0000 UTC m=+0.045373046 container create 7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:30:46 compute-0 systemd[1]: Started libpod-conmon-7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0.scope.
Nov 24 20:30:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.243864315 +0000 UTC m=+0.027207633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.354576027 +0000 UTC m=+0.137919345 container init 7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.36785947 +0000 UTC m=+0.151202758 container start 7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.371150097 +0000 UTC m=+0.154493385 container attach 7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:30:46 compute-0 cranky_kepler[282069]: 167 167
Nov 24 20:30:46 compute-0 systemd[1]: libpod-7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0.scope: Deactivated successfully.
Nov 24 20:30:46 compute-0 conmon[282069]: conmon 7c0c5af9cb153de29d7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0.scope/container/memory.events
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.376826038 +0000 UTC m=+0.160169366 container died 7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:30:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:46.380+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f6815f5e2e70962394d1f55dfe6a52facdd9579c72603890b46162b346a3ef-merged.mount: Deactivated successfully.
Nov 24 20:30:46 compute-0 podman[282052]: 2025-11-24 20:30:46.42961943 +0000 UTC m=+0.212962758 container remove 7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:30:46 compute-0 systemd[1]: libpod-conmon-7c0c5af9cb153de29d7e1efc6aaae26d91a9ef680cac511481a51e9173841ca0.scope: Deactivated successfully.
Nov 24 20:30:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:46 compute-0 podman[282092]: 2025-11-24 20:30:46.661234212 +0000 UTC m=+0.067147745 container create aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_margulis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:30:46 compute-0 systemd[1]: Started libpod-conmon-aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df.scope.
Nov 24 20:30:46 compute-0 podman[282092]: 2025-11-24 20:30:46.63702844 +0000 UTC m=+0.042942013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:30:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e47cd2646ec6de02e25c7333e086200598875de1c59871427ade7fbffc1a83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e47cd2646ec6de02e25c7333e086200598875de1c59871427ade7fbffc1a83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e47cd2646ec6de02e25c7333e086200598875de1c59871427ade7fbffc1a83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e47cd2646ec6de02e25c7333e086200598875de1c59871427ade7fbffc1a83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e47cd2646ec6de02e25c7333e086200598875de1c59871427ade7fbffc1a83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:46 compute-0 podman[282092]: 2025-11-24 20:30:46.771137442 +0000 UTC m=+0.177051035 container init aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:30:46 compute-0 podman[282092]: 2025-11-24 20:30:46.783657725 +0000 UTC m=+0.189571298 container start aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_margulis, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:30:46 compute-0 podman[282092]: 2025-11-24 20:30:46.787927398 +0000 UTC m=+0.193841001 container attach aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_margulis, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:30:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:47.239+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:47.386+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:47 compute-0 ceph-mon[75677]: pgmap v1429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:47 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:47 compute-0 inspiring_margulis[282108]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:30:47 compute-0 inspiring_margulis[282108]: --> relative data size: 1.0
Nov 24 20:30:47 compute-0 inspiring_margulis[282108]: --> All data devices are unavailable
Nov 24 20:30:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:47 compute-0 systemd[1]: libpod-aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df.scope: Deactivated successfully.
Nov 24 20:30:47 compute-0 systemd[1]: libpod-aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df.scope: Consumed 1.056s CPU time.
Nov 24 20:30:47 compute-0 podman[282137]: 2025-11-24 20:30:47.938938753 +0000 UTC m=+0.032921615 container died aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e47cd2646ec6de02e25c7333e086200598875de1c59871427ade7fbffc1a83-merged.mount: Deactivated successfully.
Nov 24 20:30:48 compute-0 podman[282137]: 2025-11-24 20:30:48.120965618 +0000 UTC m=+0.214948410 container remove aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 20:30:48 compute-0 systemd[1]: libpod-conmon-aabee6e3a516aba26b88240f35e149b25cae40230a4bd32e77ab6895c36490df.scope: Deactivated successfully.
Nov 24 20:30:48 compute-0 podman[282138]: 2025-11-24 20:30:48.133546463 +0000 UTC m=+0.203343413 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:30:48 compute-0 sudo[281986]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:48 compute-0 sudo[282170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:48 compute-0 sudo[282170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:48 compute-0 sudo[282170]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:48.286+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:48 compute-0 sudo[282195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:30:48 compute-0 sudo[282195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:48 compute-0 sudo[282195]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:48 compute-0 sudo[282220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:48 compute-0 sudo[282220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:48 compute-0 sudo[282220]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:48.412+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:48 compute-0 sudo[282245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:30:48 compute-0 sudo[282245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:48 compute-0 podman[282309]: 2025-11-24 20:30:48.880940037 +0000 UTC m=+0.086281403 container create 0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:30:48 compute-0 podman[282309]: 2025-11-24 20:30:48.832056568 +0000 UTC m=+0.037398004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:30:49 compute-0 systemd[1]: Started libpod-conmon-0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406.scope.
Nov 24 20:30:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:30:49 compute-0 podman[282309]: 2025-11-24 20:30:49.193565851 +0000 UTC m=+0.398907297 container init 0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:30:49 compute-0 podman[282309]: 2025-11-24 20:30:49.20592191 +0000 UTC m=+0.411263306 container start 0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:30:49 compute-0 laughing_bhabha[282325]: 167 167
Nov 24 20:30:49 compute-0 systemd[1]: libpod-0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406.scope: Deactivated successfully.
Nov 24 20:30:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:49.275+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:49 compute-0 podman[282309]: 2025-11-24 20:30:49.332956774 +0000 UTC m=+0.538298220 container attach 0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:30:49 compute-0 podman[282309]: 2025-11-24 20:30:49.333822956 +0000 UTC m=+0.539164352 container died 0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:30:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:49.447+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:49 compute-0 ceph-mon[75677]: pgmap v1430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a761fbb7570295284330169017e08f3212affeb386738fc81898d5b32310ec86-merged.mount: Deactivated successfully.
Nov 24 20:30:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:50.281+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:50.441+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:50 compute-0 podman[282309]: 2025-11-24 20:30:50.963861685 +0000 UTC m=+2.169203091 container remove 0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 20:30:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:50 compute-0 ceph-mon[75677]: pgmap v1431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:51 compute-0 systemd[1]: libpod-conmon-0a00857be863182819cd2d2e39a320d1bd909b86c442b593eedca043cb4a6406.scope: Deactivated successfully.
Nov 24 20:30:51 compute-0 podman[282351]: 2025-11-24 20:30:51.163210312 +0000 UTC m=+0.024242995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:30:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:51.266+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:51 compute-0 podman[282351]: 2025-11-24 20:30:51.309910689 +0000 UTC m=+0.170943332 container create 6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:30:51 compute-0 systemd[1]: Started libpod-conmon-6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844.scope.
Nov 24 20:30:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:51.460+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dd8e3a4a75cf6fc209eebb14001783fb50e6452d1c77997f7274fd5faecb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dd8e3a4a75cf6fc209eebb14001783fb50e6452d1c77997f7274fd5faecb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dd8e3a4a75cf6fc209eebb14001783fb50e6452d1c77997f7274fd5faecb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dd8e3a4a75cf6fc209eebb14001783fb50e6452d1c77997f7274fd5faecb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:51 compute-0 podman[282351]: 2025-11-24 20:30:51.655047147 +0000 UTC m=+0.516079870 container init 6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:30:51 compute-0 podman[282351]: 2025-11-24 20:30:51.667983471 +0000 UTC m=+0.529016154 container start 6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:30:51 compute-0 podman[282351]: 2025-11-24 20:30:51.726508895 +0000 UTC m=+0.587541548 container attach 6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:30:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:52.299+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]: {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:     "0": [
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:         {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "devices": [
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "/dev/loop3"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             ],
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_name": "ceph_lv0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_size": "21470642176",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "name": "ceph_lv0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "tags": {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cluster_name": "ceph",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.crush_device_class": "",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.encrypted": "0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osd_id": "0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.type": "block",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.vdo": "0"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             },
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "type": "block",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "vg_name": "ceph_vg0"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:         }
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:     ],
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:     "1": [
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:         {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "devices": [
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "/dev/loop4"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             ],
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_name": "ceph_lv1",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_size": "21470642176",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "name": "ceph_lv1",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "tags": {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cluster_name": "ceph",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.crush_device_class": "",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.encrypted": "0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osd_id": "1",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.type": "block",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.vdo": "0"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             },
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "type": "block",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "vg_name": "ceph_vg1"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:         }
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:     ],
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:     "2": [
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:         {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "devices": [
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "/dev/loop5"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             ],
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_name": "ceph_lv2",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_size": "21470642176",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "name": "ceph_lv2",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "tags": {
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.cluster_name": "ceph",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.crush_device_class": "",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.encrypted": "0",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osd_id": "2",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.type": "block",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:                 "ceph.vdo": "0"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             },
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "type": "block",
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:             "vg_name": "ceph_vg2"
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:         }
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]:     ]
Nov 24 20:30:52 compute-0 optimistic_hugle[282367]: }
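The JSON closed out above is the payload of a ceph-volume lvm list --format json run, emitted here by the short-lived optimistic_hugle container: a map from OSD id ("0", "1", "2") to that OSD's logical volumes, with each LV's metadata carried twice, once as the flat comma-separated lv_tags string and once as the pre-parsed tags object. A minimal sketch of recovering the tags from the flat form in Python, assuming the payload was saved to a file named lvm_list.json (the filename is illustrative, not from the log) and that tag values contain no embedded commas, as is the case here:

    import json

    def parse_lv_tags(lv_tags: str) -> dict:
        """Split ceph-volume's 'key=value,key=value' lv_tags string into a dict."""
        return dict(item.split("=", 1) for item in lv_tags.split(",") if item)

    with open("lvm_list.json") as f:   # illustrative filename
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = parse_lv_tags(lv["lv_tags"])
            # should agree with the pre-parsed lv["tags"] object
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"])

Against the listing above this prints one line per OSD, e.g. 1 /dev/ceph_vg1/ceph_lv1 722822cb-bac5-4aa4-891b-811a5e4def90.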
Nov 24 20:30:52 compute-0 systemd[1]: libpod-6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844.scope: Deactivated successfully.
Nov 24 20:30:52 compute-0 podman[282351]: 2025-11-24 20:30:52.415758834 +0000 UTC m=+1.276791507 container died 6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:30:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:52.447+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-153dd8e3a4a75cf6fc209eebb14001783fb50e6452d1c77997f7274fd5faecb4-merged.mount: Deactivated successfully.
Nov 24 20:30:52 compute-0 podman[282351]: 2025-11-24 20:30:52.866245681 +0000 UTC m=+1.727278364 container remove 6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:30:52 compute-0 systemd[1]: libpod-conmon-6d51fb8e734eedd678932eac21d349fb369392aa7ba83c59231d0a1f0a1b8844.scope: Deactivated successfully.
Nov 24 20:30:52 compute-0 sudo[282245]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:53 compute-0 sudo[282388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:53 compute-0 sudo[282388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:53 compute-0 sudo[282388]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:53 compute-0 sudo[282413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:30:53 compute-0 sudo[282413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:53 compute-0 sudo[282413]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:53 compute-0 sudo[282438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:53 compute-0 sudo[282438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:53 compute-0 sudo[282438]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:53.305+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:53 compute-0 sudo[282463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:30:53 compute-0 sudo[282463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:53 compute-0 ceph-mon[75677]: pgmap v1432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:53.460+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
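The SLOW_OPS pattern above repeats for the rest of this capture: each OSD re-reports its oldest blocked op roughly once per second, and the monitor's health check update only ages it in 5-second steps (blocked for 2372 sec here, then 2377 and 2382 sec further down), i.e. the 32 slow ops on osd.0 and osd.1 are not draining, merely getting older. A small sketch for pulling the figures out of such lines; the regex below is keyed to the exact phrasing seen in these log lines, not to any Ceph interface:

    import re

    # matches e.g. "32 slow ops, oldest one blocked for 2372 sec,
    # daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)"
    SLOW_OPS = re.compile(
        r"(\d+) slow ops, oldest one blocked for (\d+) sec, "
        r"daemons \[([^\]]+)\] have slow ops"
    )

    def parse_slow_ops(line: str):
        m = SLOW_OPS.search(line)
        if not m:
            return None
        count, age, daemons = m.groups()
        return int(count), int(age), daemons.split(",")

    line = ("Health check update: 32 slow ops, oldest one blocked for "
            "2372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)")
    print(parse_slow_ops(line))  # (32, 2372, ['osd.0', 'osd.1'])

Plotting the age field over successive health updates is a quick way to confirm whether blocked ops are completing or simply aging, as they are here.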
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.809647642 +0000 UTC m=+0.079784880 container create 72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:30:53 compute-0 systemd[1]: Started libpod-conmon-72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80.scope.
Nov 24 20:30:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.777249371 +0000 UTC m=+0.047386699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:30:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.908071076 +0000 UTC m=+0.178208384 container init 72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.916945522 +0000 UTC m=+0.187082800 container start 72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.9221475 +0000 UTC m=+0.192284828 container attach 72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carver, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:30:53 compute-0 serene_carver[282547]: 167 167
Nov 24 20:30:53 compute-0 systemd[1]: libpod-72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80.scope: Deactivated successfully.
Nov 24 20:30:53 compute-0 conmon[282547]: conmon 72b989597c897a4a4193 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80.scope/container/memory.events
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.92704596 +0000 UTC m=+0.197183238 container died 72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-14859fa487984defe2e2e74c075b39711ca3f9ea47002ce727ff7569d2ad737b-merged.mount: Deactivated successfully.
Nov 24 20:30:53 compute-0 podman[282530]: 2025-11-24 20:30:53.980822099 +0000 UTC m=+0.250959347 container remove 72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 20:30:53 compute-0 systemd[1]: libpod-conmon-72b989597c897a4a419314fdbaed565751c3084bbc3cb3c63c46965530dace80.scope: Deactivated successfully.
Nov 24 20:30:54 compute-0 podman[282571]: 2025-11-24 20:30:54.174981206 +0000 UTC m=+0.062989484 container create d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 20:30:54 compute-0 systemd[1]: Started libpod-conmon-d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec.scope.
Nov 24 20:30:54 compute-0 podman[282571]: 2025-11-24 20:30:54.143200732 +0000 UTC m=+0.031209070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:30:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea30cf1fa564f871ec19a3cf8868c643975e6566f1b81d3a4e4ffdafa22321d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea30cf1fa564f871ec19a3cf8868c643975e6566f1b81d3a4e4ffdafa22321d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea30cf1fa564f871ec19a3cf8868c643975e6566f1b81d3a4e4ffdafa22321d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea30cf1fa564f871ec19a3cf8868c643975e6566f1b81d3a4e4ffdafa22321d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
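The four xfs messages above are informational, not errors: these overlay mounts sit on an XFS filesystem formatted with 32-bit inode timestamps, so the kernel notes on each remount that timestamps are only representable up to 0x7fffffff seconds after the Unix epoch. A one-liner confirming what that limit means in calendar terms (the classic y2038 boundary):

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds after the Unix epoch
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00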
Nov 24 20:30:54 compute-0 podman[282571]: 2025-11-24 20:30:54.283307644 +0000 UTC m=+0.171315902 container init d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:30:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:54.285+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:54 compute-0 podman[282571]: 2025-11-24 20:30:54.295904899 +0000 UTC m=+0.183913147 container start d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:30:54 compute-0 podman[282571]: 2025-11-24 20:30:54.299519985 +0000 UTC m=+0.187528273 container attach d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:30:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:30:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:30:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:30:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:30:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:30:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:30:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:54.447+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:55.242+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:55 compute-0 ceph-mon[75677]: pgmap v1433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]: {
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "osd_id": 2,
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "type": "bluestore"
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:     },
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "osd_id": 1,
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "type": "bluestore"
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:     },
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "osd_id": 0,
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:         "type": "bluestore"
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]:     }
Nov 24 20:30:55 compute-0 stupefied_zhukovsky[282587]: }
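This second JSON document is the result of the ceph-volume raw list --format json call issued through cephadm at 20:30:53 above, printed by the stupefied_zhukovsky container. It is keyed by OSD fsid rather than OSD id, and its three osd_uuid values match the ceph.osd_fsid tags in the earlier lvm listing; note also that raw list reports the device-mapper name (/dev/mapper/ceph_vg0-ceph_lv0) while lvm list reports the LV path (/dev/ceph_vg0/ceph_lv0), two names for the same device. A sketch of that cross-check, assuming both payloads were saved to files (filenames are illustrative):

    import json

    # illustrative filenames; the log shows only the JSON bodies
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    lvm_by_fsid = {
        lv["tags"]["ceph.osd_fsid"]: lv["lv_path"]
        for lvs in lvm.values()
        for lv in lvs
    }

    for osd_uuid, info in sorted(raw.items()):
        # /dev/mapper/ceph_vgN-ceph_lvN and /dev/ceph_vgN/ceph_lvN are the same LV
        print(info["osd_id"], info["device"], "<->",
              lvm_by_fsid.get(osd_uuid, "<not in lvm list>"))

For the two listings in this capture, all three OSDs (0, 1, 2) resolve on both sides, which is what cephadm is verifying before it updates mgr/cephadm/host.compute-0.devices.0 a few lines below.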
Nov 24 20:30:55 compute-0 systemd[1]: libpod-d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec.scope: Deactivated successfully.
Nov 24 20:30:55 compute-0 podman[282571]: 2025-11-24 20:30:55.415216802 +0000 UTC m=+1.303225090 container died d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:30:55 compute-0 systemd[1]: libpod-d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec.scope: Consumed 1.129s CPU time.
Nov 24 20:30:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:55.424+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ea30cf1fa564f871ec19a3cf8868c643975e6566f1b81d3a4e4ffdafa22321d-merged.mount: Deactivated successfully.
Nov 24 20:30:55 compute-0 podman[282571]: 2025-11-24 20:30:55.495247018 +0000 UTC m=+1.383255296 container remove d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:30:55 compute-0 systemd[1]: libpod-conmon-d8bdb532784e352d21232e4f73698f87752e916edb0c248ca3951008e9e8d1ec.scope: Deactivated successfully.
Nov 24 20:30:55 compute-0 sudo[282463]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:30:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:30:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:30:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:30:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bf0b0285-7c37-4da1-b19b-a0ee82a37686 does not exist
Nov 24 20:30:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1bf16764-8f56-4d70-ab4d-4044ed00bd35 does not exist
Nov 24 20:30:55 compute-0 sudo[282633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:30:55 compute-0 sudo[282633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:55 compute-0 sudo[282633]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:55 compute-0 sudo[282658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:30:55 compute-0 sudo[282658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:30:55 compute-0 sudo[282658]: pam_unix(sudo:session): session closed for user root
Nov 24 20:30:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:56.273+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:30:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:30:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:56.377+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:57.237+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2377 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:30:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:57.342+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:57 compute-0 ceph-mon[75677]: pgmap v1434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:57 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2377 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:30:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:58.195+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:58.318+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:30:59.177+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:30:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:30:59.361+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:30:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:59 compute-0 ceph-mon[75677]: pgmap v1435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:30:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:30:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:30:59 compute-0 podman[282683]: 2025-11-24 20:30:59.889602267 +0000 UTC m=+0.107494705 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:31:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:00.209+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:00.315+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:01.221+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:01.320+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:01 compute-0 ceph-mon[75677]: pgmap v1436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:02.229+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:02.281+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2382 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:03.236+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:03.264+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:03 compute-0 ceph-mon[75677]: pgmap v1437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2382 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:03 compute-0 podman[282704]: 2025-11-24 20:31:03.905567557 +0000 UTC m=+0.127063005 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:31:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:04.247+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:04.266+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:05.236+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:05.296+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:05 compute-0 ceph-mon[75677]: pgmap v1438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:06.270+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:06.282+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:07.284+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:07.301+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:07 compute-0 ceph-mon[75677]: pgmap v1439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:08.259+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:08.285+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:09.226+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:09.301+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:31:09.387 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:31:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:31:09.388 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:31:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:31:09.388 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:31:09 compute-0 ceph-mon[75677]: pgmap v1440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:10.188+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:10.256+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:11.235+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:11.302+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:11 compute-0 ceph-mon[75677]: pgmap v1441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:12.185+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:12.255+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2387 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:12 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2387 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.623648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016272623719, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2192, "num_deletes": 251, "total_data_size": 2747605, "memory_usage": 2790904, "flush_reason": "Manual Compaction"}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016272685184, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2682526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38820, "largest_seqno": 41011, "table_properties": {"data_size": 2672967, "index_size": 5477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26199, "raw_average_key_size": 22, "raw_value_size": 2651196, "raw_average_value_size": 2250, "num_data_blocks": 238, "num_entries": 1178, "num_filter_entries": 1178, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016118, "oldest_key_time": 1764016118, "file_creation_time": 1764016272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 61600 microseconds, and 11283 cpu microseconds.
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.685254) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2682526 bytes OK
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.685280) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.688981) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.689004) EVENT_LOG_v1 {"time_micros": 1764016272688996, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.689027) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2737626, prev total WAL file size 2737626, number of live WAL files 2.
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.690775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2619KB)], [80(9349KB)]
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016272690814, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12256231, "oldest_snapshot_seqno": -1}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 10256 keys, 10774323 bytes, temperature: kUnknown
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016272839307, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10774323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10713688, "index_size": 33899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 276217, "raw_average_key_size": 26, "raw_value_size": 10533843, "raw_average_value_size": 1027, "num_data_blocks": 1298, "num_entries": 10256, "num_filter_entries": 10256, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.839708) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10774323 bytes
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.916863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.5 rd, 72.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.1 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 10770, records dropped: 514 output_compression: NoCompression
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.916903) EVENT_LOG_v1 {"time_micros": 1764016272916887, "job": 46, "event": "compaction_finished", "compaction_time_micros": 148580, "compaction_time_cpu_micros": 36428, "output_level": 6, "num_output_files": 1, "total_output_size": 10774323, "num_input_records": 10770, "num_output_records": 10256, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016272917661, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016272919347, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.690657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.919425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.919431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.919433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.919434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:12.919436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:13.211+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:13.220+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:13 compute-0 ceph-mon[75677]: pgmap v1442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:14.204+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:14.251+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:15.207+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:15.224+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:15 compute-0 ceph-mon[75677]: pgmap v1443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:16.218+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:16.226+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:31:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877183811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:31:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:31:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877183811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:31:16 compute-0 ceph-mon[75677]: pgmap v1444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/877183811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:31:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/877183811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:31:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:17.229+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:17.240+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:17 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:18.216+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:18.225+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:18 compute-0 podman[282730]: 2025-11-24 20:31:18.837687031 +0000 UTC m=+0.069169108 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:31:18 compute-0 ceph-mon[75677]: pgmap v1445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:19.178+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:19.215+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:20.145+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:20.202+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:21.190+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:21.208+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:21 compute-0 ceph-mon[75677]: pgmap v1446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:22.187+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:22.240+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:22 compute-0 ceph-mon[75677]: pgmap v1447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:23.204+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:23.256+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:24.244+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:24.258+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:31:24
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta']
Nov 24 20:31:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:31:24 compute-0 ceph-mon[75677]: pgmap v1448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:25.233+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:25.303+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:26.261+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:26.341+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:27 compute-0 ceph-mon[75677]: pgmap v1449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:27.290+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:27.334+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:28.260+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:28.381+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:29.258+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:29.383+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:29 compute-0 ceph-mon[75677]: pgmap v1450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:30.270+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:30.376+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:30 compute-0 ceph-mon[75677]: pgmap v1451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:30 compute-0 podman[282748]: 2025-11-24 20:31:30.86155804 +0000 UTC m=+0.086480988 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 20:31:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:31.239+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:31.332+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:32.198+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2407 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:32.294+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:32 compute-0 ceph-mon[75677]: pgmap v1452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:32 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2407 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:33.148+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:33.264+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:34.111+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:34.243+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:34 compute-0 ceph-mon[75677]: pgmap v1453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:31:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:31:34 compute-0 podman[282768]: 2025-11-24 20:31:34.997268851 +0000 UTC m=+0.220873408 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 20:31:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:35.115+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:35.280+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:36.149+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:36.295+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:36 compute-0 ceph-mon[75677]: pgmap v1454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:37.102+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:37.254+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.271541) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016297271622, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 537, "num_deletes": 250, "total_data_size": 410184, "memory_usage": 420584, "flush_reason": "Manual Compaction"}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016297276003, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 323644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41012, "largest_seqno": 41548, "table_properties": {"data_size": 320847, "index_size": 705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8219, "raw_average_key_size": 21, "raw_value_size": 314906, "raw_average_value_size": 815, "num_data_blocks": 31, "num_entries": 386, "num_filter_entries": 386, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016273, "oldest_key_time": 1764016273, "file_creation_time": 1764016297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 4505 microseconds, and 2379 cpu microseconds.
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.276050) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 323644 bytes OK
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.276071) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.277853) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.277876) EVENT_LOG_v1 {"time_micros": 1764016297277868, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.277899) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 407015, prev total WAL file size 407015, number of live WAL files 2.
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.279098) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(316KB)], [83(10MB)]
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016297279151, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11097967, "oldest_snapshot_seqno": -1}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 10141 keys, 7841871 bytes, temperature: kUnknown
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016297320473, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 7841871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7786623, "index_size": 28768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 274371, "raw_average_key_size": 27, "raw_value_size": 7613430, "raw_average_value_size": 750, "num_data_blocks": 1084, "num_entries": 10141, "num_filter_entries": 10141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.320886) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 7841871 bytes
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.322485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.4 rd, 189.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.3 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(58.5) write-amplify(24.2) OK, records in: 10642, records dropped: 501 output_compression: NoCompression
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.322515) EVENT_LOG_v1 {"time_micros": 1764016297322502, "job": 48, "event": "compaction_finished", "compaction_time_micros": 41499, "compaction_time_cpu_micros": 24488, "output_level": 6, "num_output_files": 1, "total_output_size": 7841871, "num_input_records": 10642, "num_output_records": 10141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016297323074, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016297326861, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.278976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.327039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.327048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.327049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.327051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:31:37.327053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:31:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:37 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:38.129+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:38.292+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:38 compute-0 ceph-mon[75677]: pgmap v1455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:39.083+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:39.331+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:40.038+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:40.296+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:31:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:31:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:31:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:31:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:31:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:41.032+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:41 compute-0 ceph-mon[75677]: pgmap v1456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:41.254+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:41.994+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:42.232+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:42 compute-0 ceph-mon[75677]: pgmap v1457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:42.972+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:43.256+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:43.947+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:44.275+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:44 compute-0 ceph-mon[75677]: pgmap v1458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:44.989+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:45.255+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:45 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2427 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:46.033+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:46.273+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:47.032+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:47 compute-0 ceph-mon[75677]: pgmap v1459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:47 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2427 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:47.289+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:48.025+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:48.271+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:49 compute-0 ceph-mon[75677]: pgmap v1460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:49.070+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:49.288+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:49 compute-0 podman[282794]: 2025-11-24 20:31:49.871021654 +0000 UTC m=+0.094544293 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 20:31:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:50.063+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:50.292+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:51.076+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:51 compute-0 ceph-mon[75677]: pgmap v1461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:51.266+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:52.079+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:52.296+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:31:52.436 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:31:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:31:52.439 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:31:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:53.058+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:53 compute-0 ceph-mon[75677]: pgmap v1462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:53.289+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:54.054+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:54.302+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:54 compute-0 ceph-mon[75677]: pgmap v1463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:31:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:31:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:31:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:31:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:31:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:31:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:55.038+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:55.292+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:55 compute-0 sudo[282813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:31:55 compute-0 sudo[282813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:55 compute-0 sudo[282813]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:55 compute-0 sudo[282838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:31:55 compute-0 sudo[282838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:55 compute-0 sudo[282838]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:55 compute-0 sudo[282863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:31:55 compute-0 sudo[282863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:55 compute-0 sudo[282863]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:56 compute-0 sudo[282888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:31:56 compute-0 sudo[282888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:56.083+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:56.279+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:56 compute-0 sudo[282888]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:31:56 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:31:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:31:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:31:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:31:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:56 compute-0 ceph-mon[75677]: pgmap v1464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:31:56 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2b5815ec-a972-480a-a31c-6eabb4dc4e3f does not exist
Nov 24 20:31:56 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 12ea4b99-c189-4a7c-89c9-2b948136bc05 does not exist
Nov 24 20:31:56 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e9ce8606-46ad-4ca3-a3f1-5dc214cbf740 does not exist
Nov 24 20:31:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:31:56 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:31:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:31:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:31:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:31:56 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:31:56 compute-0 sudo[282945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:31:56 compute-0 sudo[282945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:56 compute-0 sudo[282945]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:56 compute-0 sudo[282970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:31:56 compute-0 sudo[282970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:56 compute-0 sudo[282970]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:56 compute-0 sudo[282995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:31:56 compute-0 sudo[282995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:56 compute-0 sudo[282995]: pam_unix(sudo:session): session closed for user root
Nov 24 20:31:57 compute-0 sudo[283020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:31:57 compute-0 sudo[283020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:31:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:57.081+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2437 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:31:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:57.273+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:57 compute-0 podman[283085]: 2025-11-24 20:31:57.379155748 +0000 UTC m=+0.029726260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:31:57 compute-0 podman[283085]: 2025-11-24 20:31:57.530297783 +0000 UTC m=+0.180868275 container create e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:31:57 compute-0 systemd[1]: Started libpod-conmon-e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa.scope.
Nov 24 20:31:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:31:57 compute-0 podman[283085]: 2025-11-24 20:31:57.876942901 +0000 UTC m=+0.527513423 container init e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:31:57 compute-0 podman[283085]: 2025-11-24 20:31:57.885692063 +0000 UTC m=+0.536262575 container start e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:31:57 compute-0 funny_rubin[283101]: 167 167
Nov 24 20:31:57 compute-0 systemd[1]: libpod-e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa.scope: Deactivated successfully.
Nov 24 20:31:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:31:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:31:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:31:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:31:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:31:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:31:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:57 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2437 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:31:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:58 compute-0 podman[283085]: 2025-11-24 20:31:58.026639537 +0000 UTC m=+0.677210079 container attach e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:31:58 compute-0 podman[283085]: 2025-11-24 20:31:58.028124277 +0000 UTC m=+0.678694809 container died e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:31:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:58.037+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-386aaf982854dd65da8d4edc028e640882b60ca7b18be5a69c63a0413eac2190-merged.mount: Deactivated successfully.
Nov 24 20:31:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:58.259+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:58 compute-0 podman[283085]: 2025-11-24 20:31:58.494345611 +0000 UTC m=+1.144916113 container remove e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 20:31:58 compute-0 systemd[1]: libpod-conmon-e7cb7433a0c34fc06ac43116abc2bffa1e8abb62dd2b597d28487a6d547933aa.scope: Deactivated successfully.
Nov 24 20:31:58 compute-0 podman[283127]: 2025-11-24 20:31:58.706900827 +0000 UTC m=+0.095804125 container create d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:31:58 compute-0 podman[283127]: 2025-11-24 20:31:58.635311536 +0000 UTC m=+0.024214814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:31:58 compute-0 systemd[1]: Started libpod-conmon-d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2.scope.
Nov 24 20:31:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6537cf5d585ddab42e709b8436f8c6daaa11927bae1ee00aaf274667a74cc55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6537cf5d585ddab42e709b8436f8c6daaa11927bae1ee00aaf274667a74cc55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6537cf5d585ddab42e709b8436f8c6daaa11927bae1ee00aaf274667a74cc55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6537cf5d585ddab42e709b8436f8c6daaa11927bae1ee00aaf274667a74cc55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6537cf5d585ddab42e709b8436f8c6daaa11927bae1ee00aaf274667a74cc55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:31:58 compute-0 podman[283127]: 2025-11-24 20:31:58.897465879 +0000 UTC m=+0.286369177 container init d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:31:58 compute-0 podman[283127]: 2025-11-24 20:31:58.906452358 +0000 UTC m=+0.295355666 container start d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:31:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:31:59.002+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:31:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:59 compute-0 podman[283127]: 2025-11-24 20:31:59.023185219 +0000 UTC m=+0.412088517 container attach d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:31:59 compute-0 ceph-mon[75677]: pgmap v1465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:31:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:31:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:31:59.296+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:31:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:31:59 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:31:59.442 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:31:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:00.013+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:00 compute-0 fervent_agnesi[283144]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:32:00 compute-0 fervent_agnesi[283144]: --> relative data size: 1.0
Nov 24 20:32:00 compute-0 fervent_agnesi[283144]: --> All data devices are unavailable
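ceph-volume reporting all three LVM data devices as unavailable is expected here: each LV already carries a deployed OSD, as the lvm list output further down confirms. A hedged sketch for inspecting per-device rejection reasons (assumes cephadm is on PATH and can run the same containerized ceph-volume shown in this log; the "available"/"rejected_reasons" field names follow upstream ceph-volume inventory JSON, not this capture):

    import json, subprocess

    # Run ceph-volume's inventory through cephadm (hypothetical invocation,
    # modeled on the commands logged around this point) and report any
    # device that ceph-volume considers unavailable, with its reasons.
    out = subprocess.check_output(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"]
    )
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev["path"], "rejected:", ", ".join(dev.get("rejected_reasons", [])))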
Nov 24 20:32:00 compute-0 systemd[1]: libpod-d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2.scope: Deactivated successfully.
Nov 24 20:32:00 compute-0 systemd[1]: libpod-d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2.scope: Consumed 1.120s CPU time.
Nov 24 20:32:00 compute-0 podman[283127]: 2025-11-24 20:32:00.060077683 +0000 UTC m=+1.448980951 container died d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:32:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:00.311+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6537cf5d585ddab42e709b8436f8c6daaa11927bae1ee00aaf274667a74cc55-merged.mount: Deactivated successfully.
Nov 24 20:32:00 compute-0 podman[283127]: 2025-11-24 20:32:00.396987152 +0000 UTC m=+1.785890420 container remove d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_agnesi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:32:00 compute-0 systemd[1]: libpod-conmon-d5eac7ac46268df7385d1cdd399c1401ff6b46f4516bd89bf7ab8130eacb26b2.scope: Deactivated successfully.
Nov 24 20:32:00 compute-0 sudo[283020]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:00 compute-0 sudo[283186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:32:00 compute-0 sudo[283186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:00 compute-0 sudo[283186]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:00 compute-0 sudo[283211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:32:00 compute-0 sudo[283211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:00 compute-0 sudo[283211]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:00 compute-0 sudo[283236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:32:00 compute-0 sudo[283236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:00 compute-0 sudo[283236]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:00 compute-0 sudo[283261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:32:00 compute-0 sudo[283261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:01.051+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.141637453 +0000 UTC m=+0.058059662 container create 61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:32:01 compute-0 systemd[1]: Started libpod-conmon-61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94.scope.
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.120145853 +0000 UTC m=+0.036568072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:32:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.236884523 +0000 UTC m=+0.153306722 container init 61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.244525446 +0000 UTC m=+0.160947625 container start 61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.248494602 +0000 UTC m=+0.164916771 container attach 61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:32:01 compute-0 thirsty_spence[283343]: 167 167
Nov 24 20:32:01 compute-0 systemd[1]: libpod-61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94.scope: Deactivated successfully.
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.251184573 +0000 UTC m=+0.167606742 container died 61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:32:01 compute-0 podman[283340]: 2025-11-24 20:32:01.268317049 +0000 UTC m=+0.081881637 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-70e65d821f0605903950b0dd9743c9e2e8cfc3cce9987cec313f9688d476b719-merged.mount: Deactivated successfully.
Nov 24 20:32:01 compute-0 podman[283326]: 2025-11-24 20:32:01.296005984 +0000 UTC m=+0.212428163 container remove 61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:32:01 compute-0 systemd[1]: libpod-conmon-61330f4d674d06ef040c2197c75e336dbe13af717e3a4e9faba3b7b07377eb94.scope: Deactivated successfully.
Nov 24 20:32:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:01.330+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:01 compute-0 ceph-mon[75677]: pgmap v1466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:01 compute-0 podman[283387]: 2025-11-24 20:32:01.496787907 +0000 UTC m=+0.049765643 container create c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:32:01 compute-0 systemd[1]: Started libpod-conmon-c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1.scope.
Nov 24 20:32:01 compute-0 podman[283387]: 2025-11-24 20:32:01.47243027 +0000 UTC m=+0.025408016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:32:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4f1ce5ba3d9e052ab7fdf577716c8a001542a5b50fd149afe4cf881b66f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4f1ce5ba3d9e052ab7fdf577716c8a001542a5b50fd149afe4cf881b66f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4f1ce5ba3d9e052ab7fdf577716c8a001542a5b50fd149afe4cf881b66f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4f1ce5ba3d9e052ab7fdf577716c8a001542a5b50fd149afe4cf881b66f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:01 compute-0 podman[283387]: 2025-11-24 20:32:01.622286101 +0000 UTC m=+0.175263877 container init c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:32:01 compute-0 podman[283387]: 2025-11-24 20:32:01.636789837 +0000 UTC m=+0.189767573 container start c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:32:01 compute-0 podman[283387]: 2025-11-24 20:32:01.641261676 +0000 UTC m=+0.194239472 container attach c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:32:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:02.054+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
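The monitor's 32 slow ops is the sum of the 13 reported by osd.0 and the 19 reported by osd.1 in the lines above. An illustrative way to confirm the same SLOW_OPS check interactively (assumes the ceph CLI and an admin keyring are reachable on this node; not derived from this log):

    import json, subprocess

    # Query cluster health as JSON and pull out the SLOW_OPS check, if any.
    health = json.loads(
        subprocess.check_output(["ceph", "health", "detail", "--format", "json"])
    )
    check = health.get("checks", {}).get("SLOW_OPS")
    if check:
        # e.g. "32 slow ops, oldest one blocked for 2442 sec, ..."
        print(check["summary"]["message"])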
Nov 24 20:32:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:02.363+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:02 compute-0 ceph-mon[75677]: pgmap v1467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]: {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:     "0": [
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:         {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "devices": [
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "/dev/loop3"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             ],
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_name": "ceph_lv0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_size": "21470642176",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "name": "ceph_lv0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "tags": {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cluster_name": "ceph",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.crush_device_class": "",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.encrypted": "0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osd_id": "0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.type": "block",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.vdo": "0"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             },
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "type": "block",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "vg_name": "ceph_vg0"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:         }
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:     ],
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:     "1": [
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:         {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "devices": [
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "/dev/loop4"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             ],
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_name": "ceph_lv1",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_size": "21470642176",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "name": "ceph_lv1",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "tags": {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cluster_name": "ceph",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.crush_device_class": "",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.encrypted": "0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osd_id": "1",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.type": "block",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.vdo": "0"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             },
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "type": "block",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "vg_name": "ceph_vg1"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:         }
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:     ],
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:     "2": [
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:         {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "devices": [
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "/dev/loop5"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             ],
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_name": "ceph_lv2",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_size": "21470642176",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "name": "ceph_lv2",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "tags": {
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.cluster_name": "ceph",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.crush_device_class": "",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.encrypted": "0",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osd_id": "2",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.type": "block",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:                 "ceph.vdo": "0"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             },
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "type": "block",
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:             "vg_name": "ceph_vg2"
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:         }
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]:     ]
Nov 24 20:32:02 compute-0 hopeful_murdock[283403]: }
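For reference, the lvm list --format json payload printed above is a map from OSD id to a list of LV records. A minimal parsing sketch (lvm-list.json is a hypothetical file holding a capture of exactly this output):

    import json

    # Map each OSD id to its logical volume, OSD fsid and backing device,
    # using the structure printed by `ceph-volume lvm list --format json`.
    with open("lvm-list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, backing={lv['devices']})")

Run against the output above, this would list osd.0 on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5).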
Nov 24 20:32:02 compute-0 systemd[1]: libpod-c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1.scope: Deactivated successfully.
Nov 24 20:32:02 compute-0 podman[283387]: 2025-11-24 20:32:02.459167222 +0000 UTC m=+1.012144928 container died c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e02b4f1ce5ba3d9e052ab7fdf577716c8a001542a5b50fd149afe4cf881b66f5-merged.mount: Deactivated successfully.
Nov 24 20:32:02 compute-0 podman[283387]: 2025-11-24 20:32:02.54415586 +0000 UTC m=+1.097133606 container remove c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:32:02 compute-0 systemd[1]: libpod-conmon-c0c78d0426241fd8c19095383abb5b36974e34d7a3af38d8c0fd09c2870d20e1.scope: Deactivated successfully.
Nov 24 20:32:02 compute-0 sudo[283261]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:02 compute-0 sudo[283426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:32:02 compute-0 sudo[283426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:02 compute-0 sudo[283426]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:02 compute-0 sudo[283451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:32:02 compute-0 sudo[283451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:02 compute-0 sudo[283451]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:02 compute-0 sudo[283476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:32:02 compute-0 sudo[283476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:02 compute-0 sudo[283476]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:03 compute-0 sudo[283501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:32:03 compute-0 sudo[283501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:03.099+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:03.415+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.465494824 +0000 UTC m=+0.039827699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.62456589 +0000 UTC m=+0.198898725 container create 3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:32:03 compute-0 systemd[1]: Started libpod-conmon-3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467.scope.
Nov 24 20:32:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.740713785 +0000 UTC m=+0.315046630 container init 3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hoover, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.752355714 +0000 UTC m=+0.326688509 container start 3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.756985227 +0000 UTC m=+0.331318062 container attach 3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hoover, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:32:03 compute-0 gifted_hoover[283582]: 167 167
Nov 24 20:32:03 compute-0 systemd[1]: libpod-3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467.scope: Deactivated successfully.
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.761818546 +0000 UTC m=+0.336151401 container died 3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2de5929a7b56d55b7e454773570b2f2f0236cddb59c0cb18b2631d186ba2874e-merged.mount: Deactivated successfully.
Nov 24 20:32:03 compute-0 podman[283566]: 2025-11-24 20:32:03.807951321 +0000 UTC m=+0.382284156 container remove 3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hoover, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:32:03 compute-0 systemd[1]: libpod-conmon-3a424d2969b411a997b4d927c0e5106dfe7d04256da026329bc0c397529cd467.scope: Deactivated successfully.
Nov 24 20:32:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:04.065+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:04 compute-0 podman[283606]: 2025-11-24 20:32:04.105279999 +0000 UTC m=+0.077120359 container create 0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:32:04 compute-0 podman[283606]: 2025-11-24 20:32:04.062259297 +0000 UTC m=+0.034099737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:32:04 compute-0 systemd[1]: Started libpod-conmon-0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6.scope.
Nov 24 20:32:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6018566188bf57dcad0e730b1e7d3aa04a1e9e88e398e682e0856e08a0ae326e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6018566188bf57dcad0e730b1e7d3aa04a1e9e88e398e682e0856e08a0ae326e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6018566188bf57dcad0e730b1e7d3aa04a1e9e88e398e682e0856e08a0ae326e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6018566188bf57dcad0e730b1e7d3aa04a1e9e88e398e682e0856e08a0ae326e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:32:04 compute-0 podman[283606]: 2025-11-24 20:32:04.23331323 +0000 UTC m=+0.205153680 container init 0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:32:04 compute-0 podman[283606]: 2025-11-24 20:32:04.246914832 +0000 UTC m=+0.218755192 container start 0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cray, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:32:04 compute-0 podman[283606]: 2025-11-24 20:32:04.255701855 +0000 UTC m=+0.227542235 container attach 0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cray, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:32:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:04.402+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:04 compute-0 ceph-mon[75677]: pgmap v1468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:05.079+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:05 compute-0 upbeat_cray[283623]: {
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "osd_id": 2,
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "type": "bluestore"
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:     },
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "osd_id": 1,
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "type": "bluestore"
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:     },
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "osd_id": 0,
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:         "type": "bluestore"
Nov 24 20:32:05 compute-0 upbeat_cray[283623]:     }
Nov 24 20:32:05 compute-0 upbeat_cray[283623]: }
Nov 24 20:32:05 compute-0 systemd[1]: libpod-0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6.scope: Deactivated successfully.
Nov 24 20:32:05 compute-0 systemd[1]: libpod-0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6.scope: Consumed 1.145s CPU time.
Nov 24 20:32:05 compute-0 podman[283606]: 2025-11-24 20:32:05.387167931 +0000 UTC m=+1.359008331 container died 0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:32:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:05.428+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6018566188bf57dcad0e730b1e7d3aa04a1e9e88e398e682e0856e08a0ae326e-merged.mount: Deactivated successfully.
Nov 24 20:32:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:05 compute-0 podman[283606]: 2025-11-24 20:32:05.54786977 +0000 UTC m=+1.519710170 container remove 0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:32:05 compute-0 systemd[1]: libpod-conmon-0e78c8e38e4619394fab5782d1d2ff19cafb2e89609bd9702b40f8d5ea091ad6.scope: Deactivated successfully.
Nov 24 20:32:05 compute-0 podman[283657]: 2025-11-24 20:32:05.583846826 +0000 UTC m=+0.155033769 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 24 20:32:05 compute-0 sudo[283501]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:32:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:32:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:32:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:32:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 44db06bb-27fc-43f6-840b-1fe169df4114 does not exist
Nov 24 20:32:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ecb23b1d-154d-4089-8e51-6dc4f7c4fcf7 does not exist
Nov 24 20:32:05 compute-0 sudo[283698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:32:05 compute-0 sudo[283698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:05 compute-0 sudo[283698]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:05 compute-0 sudo[283723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:32:05 compute-0 sudo[283723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:32:05 compute-0 sudo[283723]: pam_unix(sudo:session): session closed for user root
Nov 24 20:32:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:06.108+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:06.470+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:32:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:32:06 compute-0 ceph-mon[75677]: pgmap v1469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:07.061+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:07.510+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:08.074+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:08.480+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:08 compute-0 ceph-mon[75677]: pgmap v1470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:09.126+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:32:09.388 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:32:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:32:09.389 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:32:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:32:09.389 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:32:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:09.443+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:10.151+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:10.406+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:10 compute-0 ceph-mon[75677]: pgmap v1471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:11.113+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:11.372+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:12.148+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2447 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:12.333+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:12 compute-0 ceph-mon[75677]: pgmap v1472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:12 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2447 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:13.167+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:13.319+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:14.195+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:14.319+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:14 compute-0 ceph-mon[75677]: pgmap v1473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:15.195+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:15.343+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:16.167+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:16.389+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:32:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/763737225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:32:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:32:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/763737225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:32:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:17.119+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:17 compute-0 ceph-mon[75677]: pgmap v1474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/763737225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:32:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/763737225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:32:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:17.379+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:18.115+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:18 compute-0 ceph-mon[75677]: pgmap v1475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:18.330+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:19.338+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:19.125+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:20.137+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:20 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:20.326+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:20 compute-0 ceph-mon[75677]: pgmap v1476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:20 compute-0 podman[283748]: 2025-11-24 20:32:20.825184945 +0000 UTC m=+0.056246237 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 20:32:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:21.145+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:21.325+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:22.122+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:22.302+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:22 compute-0 ceph-mon[75677]: pgmap v1477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:23.127+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:23.300+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:24.116+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:24 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:24.304+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:32:24
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'backups', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 24 20:32:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:32:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:24 compute-0 ceph-mon[75677]: pgmap v1478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:25.132+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:25.254+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:25 compute-0 ceph-mgr[75975]: client.0 ms_handle_reset on v2:192.168.122.100:6800/103018990
Nov 24 20:32:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:26.083+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:26.256+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:26 compute-0 ceph-mon[75677]: pgmap v1479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:27.096+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:27.221+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:28.090+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:28.191+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:28 compute-0 ceph-mon[75677]: pgmap v1480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:29.140+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:29.203+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:30.143+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:30.192+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:30 compute-0 ceph-mon[75677]: pgmap v1481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:31.110+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:31.156+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:31 compute-0 podman[283768]: 2025-11-24 20:32:31.885696874 +0000 UTC m=+0.112416641 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:32:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:32.126+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:32.148+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2467 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:32 compute-0 ceph-mon[75677]: pgmap v1482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:32 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2467 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:33.107+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:33.146+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:34.133+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:34.193+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:32:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:32:34 compute-0 ceph-mon[75677]: pgmap v1483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:35.130+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:35.238+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:35 compute-0 podman[283788]: 2025-11-24 20:32:35.882913734 +0000 UTC m=+0.117350124 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:32:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:36.093+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:36.222+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:36 compute-0 ceph-mon[75677]: pgmap v1484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:37.127+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:37.240+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2477 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:37 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2477 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:38.172+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:38.214+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:39 compute-0 ceph-mon[75677]: pgmap v1485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:39.212+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:39.251+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:39 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:40.237+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:40.252+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:32:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:32:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:32:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:32:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:32:41 compute-0 ceph-mon[75677]: pgmap v1486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:41.206+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:41.248+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:42.172+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:42.277+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:43 compute-0 ceph-mon[75677]: pgmap v1487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:43.186+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:43.316+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:44.167+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:44.316+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:45.134+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:45 compute-0 ceph-mon[75677]: pgmap v1488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:45.282+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:46.100+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:46.307+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:47.101+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:47 compute-0 ceph-mon[75677]: pgmap v1489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:47.263+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2487 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:47 compute-0 sshd-session[283816]: Received disconnect from 182.93.7.194 port 42678:11: Bye Bye [preauth]
Nov 24 20:32:47 compute-0 sshd-session[283816]: Disconnected from authenticating user root 182.93.7.194 port 42678 [preauth]
Nov 24 20:32:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:48.126+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:48 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2487 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:48.264+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:49.083+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:49 compute-0 ceph-mon[75677]: pgmap v1490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:49.297+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:50.069+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:50.254+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:51.043+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:51.301+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:51 compute-0 ceph-mon[75677]: pgmap v1491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:51 compute-0 podman[283818]: 2025-11-24 20:32:51.864720046 +0000 UTC m=+0.081809312 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 24 20:32:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:52.056+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:52.288+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2492 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:52 compute-0 ceph-mon[75677]: pgmap v1492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:53.033+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:53.272+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:53 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:32:53.340 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:32:53 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:32:53.342 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:32:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2492 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:32:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:54.081+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:54.313+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:32:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:32:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:32:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:32:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:32:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:32:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:54 compute-0 ceph-mon[75677]: pgmap v1493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:55.062+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:55.321+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:56.089+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:56.343+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:56 compute-0 ceph-mon[75677]: pgmap v1494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:57.091+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:32:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:57.325+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:58.121+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:58.307+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:58 compute-0 ceph-mon[75677]: pgmap v1495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:32:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:32:59.139+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:32:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:32:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:32:59.275+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:32:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:32:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:00.126+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:00.276+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:01 compute-0 ceph-mon[75677]: pgmap v1496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:01.141+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:01.254+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2502 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:02.098+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:02.270+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.286837) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016382286860, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 255, "total_data_size": 1481335, "memory_usage": 1516400, "flush_reason": "Manual Compaction"}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016382298560, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1458551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41549, "largest_seqno": 42845, "table_properties": {"data_size": 1452523, "index_size": 3039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16119, "raw_average_key_size": 21, "raw_value_size": 1439039, "raw_average_value_size": 1903, "num_data_blocks": 133, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016297, "oldest_key_time": 1764016297, "file_creation_time": 1764016382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 11838 microseconds, and 4517 cpu microseconds.
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.298664) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1458551 bytes OK
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.298693) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.300768) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.300793) EVENT_LOG_v1 {"time_micros": 1764016382300785, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.300819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1475025, prev total WAL file size 1475025, number of live WAL files 2.
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.301794) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373632' seq:72057594037927935, type:22 .. '6C6F676D0032303133' seq:0, type:0; will stop at (end)
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1424KB)], [86(7658KB)]
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016382301890, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 9300422, "oldest_snapshot_seqno": -1}
Nov 24 20:33:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:33:02.343 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 10375 keys, 9101315 bytes, temperature: kUnknown
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016382486021, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9101315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9043371, "index_size": 30888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 281380, "raw_average_key_size": 27, "raw_value_size": 8864663, "raw_average_value_size": 854, "num_data_blocks": 1169, "num_entries": 10375, "num_filter_entries": 10375, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.486397) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9101315 bytes
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.492479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.5 rd, 49.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.5 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(12.6) write-amplify(6.2) OK, records in: 10897, records dropped: 522 output_compression: NoCompression
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.492547) EVENT_LOG_v1 {"time_micros": 1764016382492510, "job": 50, "event": "compaction_finished", "compaction_time_micros": 184249, "compaction_time_cpu_micros": 25481, "output_level": 6, "num_output_files": 1, "total_output_size": 9101315, "num_input_records": 10897, "num_output_records": 10375, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016382493275, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016382495283, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.301620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.495390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.495401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.495405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.495410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:33:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:33:02.495414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:33:02 compute-0 podman[283837]: 2025-11-24 20:33:02.860847077 +0000 UTC m=+0.089233670 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 20:33:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:03.111+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:03.263+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:03 compute-0 ceph-mon[75677]: pgmap v1497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2502 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:04.135+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:04.307+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:04 compute-0 ceph-mon[75677]: pgmap v1498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:05.101+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:05.330+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:05 compute-0 sudo[283859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:05 compute-0 sudo[283859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:05 compute-0 sudo[283859]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:06 compute-0 sudo[283885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:33:06 compute-0 sudo[283885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:06 compute-0 sudo[283885]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:06.101+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:06 compute-0 sudo[283923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:06 compute-0 sudo[283923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:06 compute-0 sudo[283923]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:06 compute-0 podman[283883]: 2025-11-24 20:33:06.152546327 +0000 UTC m=+0.173199218 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 20:33:06 compute-0 sudo[283957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 20:33:06 compute-0 sudo[283957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:06.313+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:06 compute-0 ceph-mon[75677]: pgmap v1499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:06 compute-0 podman[284052]: 2025-11-24 20:33:06.806692914 +0000 UTC m=+0.087404313 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:33:06 compute-0 podman[284052]: 2025-11-24 20:33:06.896033635 +0000 UTC m=+0.176745044 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 24 20:33:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:07.134+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:07.323+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:07 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:07 compute-0 sudo[283957]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:33:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:33:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:07 compute-0 sudo[284203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:07 compute-0 sudo[284203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:07 compute-0 sudo[284203]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:07 compute-0 sudo[284228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:33:07 compute-0 sudo[284228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:07 compute-0 sudo[284228]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:07 compute-0 sudo[284253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:07 compute-0 sudo[284253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:07 compute-0 sudo[284253]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:07 compute-0 sudo[284278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:33:07 compute-0 sudo[284278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:08.115+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:08 compute-0 sudo[284278]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:08.356+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:33:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:33:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:08 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:08 compute-0 ceph-mon[75677]: pgmap v1500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:33:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:33:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:33:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8119b0c6-c191-4d9d-906a-2a177be3e359 does not exist
Nov 24 20:33:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 892f5fcd-7a70-43c4-bc63-516be1f67a41 does not exist
Nov 24 20:33:08 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3a8e7d64-2b8e-4bb6-a19d-c45448c38b9a does not exist
Nov 24 20:33:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:33:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:33:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:33:08 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:33:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:33:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:33:08 compute-0 sudo[284334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:08 compute-0 sudo[284334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:08 compute-0 sudo[284334]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:08 compute-0 sudo[284359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:33:08 compute-0 sudo[284359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:08 compute-0 sudo[284359]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:09 compute-0 sudo[284384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:09 compute-0 sudo[284384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:09 compute-0 sudo[284384]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:09.113+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:09 compute-0 sudo[284409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:33:09 compute-0 sudo[284409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:33:09.389 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:33:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:33:09.390 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:33:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:33:09.390 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:33:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:09.394+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:33:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:33:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:33:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:33:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:33:09 compute-0 podman[284472]: 2025-11-24 20:33:09.583879096 +0000 UTC m=+0.023421828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:33:09 compute-0 podman[284472]: 2025-11-24 20:33:09.690545722 +0000 UTC m=+0.130088474 container create 76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:33:09 compute-0 systemd[1]: Started libpod-conmon-76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26.scope.
Nov 24 20:33:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:33:09 compute-0 podman[284472]: 2025-11-24 20:33:09.920781957 +0000 UTC m=+0.360324709 container init 76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:33:09 compute-0 podman[284472]: 2025-11-24 20:33:09.933324613 +0000 UTC m=+0.372867355 container start 76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:33:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:09 compute-0 hardcore_mendel[284489]: 167 167
Nov 24 20:33:09 compute-0 systemd[1]: libpod-76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26.scope: Deactivated successfully.
Nov 24 20:33:09 compute-0 podman[284472]: 2025-11-24 20:33:09.961698222 +0000 UTC m=+0.401241034 container attach 76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:33:09 compute-0 podman[284472]: 2025-11-24 20:33:09.962297408 +0000 UTC m=+0.401840160 container died 76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:33:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:10.090+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d048a97c692cae7b65406367491dd5d9830331a3e4d67a60e76be292b16553d4-merged.mount: Deactivated successfully.
Nov 24 20:33:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:10.396+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:10 compute-0 ceph-mon[75677]: pgmap v1501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:10 compute-0 podman[284472]: 2025-11-24 20:33:10.872385397 +0000 UTC m=+1.311928149 container remove 76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:33:10 compute-0 systemd[1]: libpod-conmon-76d9e153afe9521f283e6ce4a72dccb0161782346a766c88a3bed2f1a7975d26.scope: Deactivated successfully.
Nov 24 20:33:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:11.070+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:11 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:11 compute-0 podman[284515]: 2025-11-24 20:33:11.139259922 +0000 UTC m=+0.071419073 container create 86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:33:11 compute-0 systemd[1]: Started libpod-conmon-86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e.scope.
Nov 24 20:33:11 compute-0 podman[284515]: 2025-11-24 20:33:11.110983195 +0000 UTC m=+0.043142366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d30c8b9fb729dde63b340087227ebc2471044a323e1a242338e11965ecb7657/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d30c8b9fb729dde63b340087227ebc2471044a323e1a242338e11965ecb7657/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d30c8b9fb729dde63b340087227ebc2471044a323e1a242338e11965ecb7657/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d30c8b9fb729dde63b340087227ebc2471044a323e1a242338e11965ecb7657/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d30c8b9fb729dde63b340087227ebc2471044a323e1a242338e11965ecb7657/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:11 compute-0 podman[284515]: 2025-11-24 20:33:11.276830866 +0000 UTC m=+0.208990047 container init 86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:33:11 compute-0 podman[284515]: 2025-11-24 20:33:11.291033246 +0000 UTC m=+0.223192407 container start 86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:33:11 compute-0 podman[284515]: 2025-11-24 20:33:11.299994575 +0000 UTC m=+0.232153766 container attach 86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:33:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:11.356+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:11 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:12.041+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:12 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:12 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:12.338+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:12 compute-0 clever_bell[284532]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:33:12 compute-0 clever_bell[284532]: --> relative data size: 1.0
Nov 24 20:33:12 compute-0 clever_bell[284532]: --> All data devices are unavailable
Nov 24 20:33:12 compute-0 systemd[1]: libpod-86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e.scope: Deactivated successfully.
Nov 24 20:33:12 compute-0 podman[284515]: 2025-11-24 20:33:12.483564398 +0000 UTC m=+1.415723589 container died 86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 24 20:33:12 compute-0 systemd[1]: libpod-86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e.scope: Consumed 1.124s CPU time.
Nov 24 20:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d30c8b9fb729dde63b340087227ebc2471044a323e1a242338e11965ecb7657-merged.mount: Deactivated successfully.
Nov 24 20:33:12 compute-0 podman[284515]: 2025-11-24 20:33:12.580257216 +0000 UTC m=+1.512416377 container remove 86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:33:12 compute-0 systemd[1]: libpod-conmon-86cdeca016f63c997cfa2ed04f57e501c06b21b350e361b66654c716f489fb7e.scope: Deactivated successfully.
Nov 24 20:33:12 compute-0 sudo[284409]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:12 compute-0 sudo[284575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:12 compute-0 sudo[284575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:12 compute-0 sudo[284575]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:12 compute-0 sudo[284600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:33:12 compute-0 sudo[284600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:12 compute-0 sudo[284600]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:12 compute-0 sudo[284625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:12 compute-0 sudo[284625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:12 compute-0 sudo[284625]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:12 compute-0 ceph-mon[75677]: pgmap v1502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:12 compute-0 sudo[284650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:33:12 compute-0 sudo[284650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:13.077+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:13 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:13.385+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:13 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.386948086 +0000 UTC m=+0.057899240 container create 0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:33:13 compute-0 systemd[1]: Started libpod-conmon-0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8.scope.
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.358270279 +0000 UTC m=+0.029221513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.493302684 +0000 UTC m=+0.164253918 container init 0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.50765961 +0000 UTC m=+0.178610804 container start 0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.512425616 +0000 UTC m=+0.183376850 container attach 0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 20:33:13 compute-0 nostalgic_turing[284731]: 167 167
Nov 24 20:33:13 compute-0 systemd[1]: libpod-0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8.scope: Deactivated successfully.
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.518727215 +0000 UTC m=+0.189678429 container died 0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_turing, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c443c023750f02af010790f96a6cf9f6d15c1442bb684711c92b17dbedeb5ebd-merged.mount: Deactivated successfully.
Nov 24 20:33:13 compute-0 podman[284715]: 2025-11-24 20:33:13.576930264 +0000 UTC m=+0.247881438 container remove 0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 24 20:33:13 compute-0 systemd[1]: libpod-conmon-0eac99305f3d699c36c4a8f23a3df9db5af35bc6ca37427b9cfb218b020499d8.scope: Deactivated successfully.
Nov 24 20:33:13 compute-0 podman[284755]: 2025-11-24 20:33:13.786563627 +0000 UTC m=+0.056249097 container create 532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shannon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:33:13 compute-0 systemd[1]: Started libpod-conmon-532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef.scope.
Nov 24 20:33:13 compute-0 podman[284755]: 2025-11-24 20:33:13.769018737 +0000 UTC m=+0.038704187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960822f32e5ae71bb203712f325a06d16192afbb0febc0014cdde398a2317cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960822f32e5ae71bb203712f325a06d16192afbb0febc0014cdde398a2317cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960822f32e5ae71bb203712f325a06d16192afbb0febc0014cdde398a2317cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960822f32e5ae71bb203712f325a06d16192afbb0febc0014cdde398a2317cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:13 compute-0 podman[284755]: 2025-11-24 20:33:13.889527144 +0000 UTC m=+0.159212674 container init 532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:33:13 compute-0 podman[284755]: 2025-11-24 20:33:13.904608108 +0000 UTC m=+0.174293578 container start 532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:33:13 compute-0 podman[284755]: 2025-11-24 20:33:13.909321964 +0000 UTC m=+0.179007434 container attach 532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:33:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:14.071+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:14 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:14 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:14.364+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:14 compute-0 happy_shannon[284771]: {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:     "0": [
Nov 24 20:33:14 compute-0 happy_shannon[284771]:         {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "devices": [
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "/dev/loop3"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             ],
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_name": "ceph_lv0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_size": "21470642176",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "name": "ceph_lv0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "tags": {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cluster_name": "ceph",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.crush_device_class": "",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.encrypted": "0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osd_id": "0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.type": "block",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.vdo": "0"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             },
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "type": "block",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "vg_name": "ceph_vg0"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:         }
Nov 24 20:33:14 compute-0 happy_shannon[284771]:     ],
Nov 24 20:33:14 compute-0 happy_shannon[284771]:     "1": [
Nov 24 20:33:14 compute-0 happy_shannon[284771]:         {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "devices": [
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "/dev/loop4"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             ],
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_name": "ceph_lv1",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_size": "21470642176",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "name": "ceph_lv1",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "tags": {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cluster_name": "ceph",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.crush_device_class": "",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.encrypted": "0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osd_id": "1",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.type": "block",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.vdo": "0"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             },
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "type": "block",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "vg_name": "ceph_vg1"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:         }
Nov 24 20:33:14 compute-0 happy_shannon[284771]:     ],
Nov 24 20:33:14 compute-0 happy_shannon[284771]:     "2": [
Nov 24 20:33:14 compute-0 happy_shannon[284771]:         {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "devices": [
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "/dev/loop5"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             ],
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_name": "ceph_lv2",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_size": "21470642176",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "name": "ceph_lv2",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "tags": {
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.cluster_name": "ceph",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.crush_device_class": "",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.encrypted": "0",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osd_id": "2",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.type": "block",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:                 "ceph.vdo": "0"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             },
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "type": "block",
Nov 24 20:33:14 compute-0 happy_shannon[284771]:             "vg_name": "ceph_vg2"
Nov 24 20:33:14 compute-0 happy_shannon[284771]:         }
Nov 24 20:33:14 compute-0 happy_shannon[284771]:     ]
Nov 24 20:33:14 compute-0 happy_shannon[284771]: }
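
The JSON block above, emitted by the short-lived happy_shannon container, has the shape of `ceph-volume lvm list --format json` output: one entry per OSD id, each describing the backing LVM logical volume, its physical device, and its ceph.* tags. A minimal parsing sketch, assuming the payload was saved to a local file lvm_list.json (hypothetical filename, not from the log):

    #!/usr/bin/env python3
    # Sketch: summarize the ceph-volume LVM inventory shown in the log above.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)  # top-level keys are OSD ids: "0", "1", "2"

    for osd_id in sorted(lvm, key=int):
        for lv in lvm[osd_id]:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")

For the three OSDs above this would print one line each, mapping osd.0/1/2 to /dev/ceph_vg0..2 on the loop devices /dev/loop3..5.
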
Nov 24 20:33:14 compute-0 systemd[1]: libpod-532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef.scope: Deactivated successfully.
Nov 24 20:33:14 compute-0 podman[284755]: 2025-11-24 20:33:14.686835822 +0000 UTC m=+0.956521272 container died 532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-960822f32e5ae71bb203712f325a06d16192afbb0febc0014cdde398a2317cc9-merged.mount: Deactivated successfully.
Nov 24 20:33:14 compute-0 podman[284755]: 2025-11-24 20:33:14.771270394 +0000 UTC m=+1.040955864 container remove 532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:33:14 compute-0 systemd[1]: libpod-conmon-532422d9c6e82fb980d045851521318c49c4bc65beaf897d83b1d1cb968a49ef.scope: Deactivated successfully.
Nov 24 20:33:14 compute-0 sudo[284650]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:14 compute-0 sudo[284792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:14 compute-0 sudo[284792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:14 compute-0 sudo[284792]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:14 compute-0 ceph-mon[75677]: pgmap v1503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:14 compute-0 sudo[284817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:33:14 compute-0 sudo[284817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:15 compute-0 sudo[284817]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:15 compute-0 sudo[284842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:15 compute-0 sudo[284842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:15 compute-0 sudo[284842]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:15.077+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:15 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:15 compute-0 sudo[284867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:33:15 compute-0 sudo[284867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:15.375+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:15 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.58452993 +0000 UTC m=+0.067601381 container create bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:33:15 compute-0 systemd[1]: Started libpod-conmon-bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34.scope.
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.556890569 +0000 UTC m=+0.039962070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:33:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.68574279 +0000 UTC m=+0.168814221 container init bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.695222813 +0000 UTC m=+0.178294224 container start bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.698471241 +0000 UTC m=+0.181542652 container attach bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:33:15 compute-0 happy_goldberg[284948]: 167 167
Nov 24 20:33:15 compute-0 systemd[1]: libpod-bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34.scope: Deactivated successfully.
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.705269583 +0000 UTC m=+0.188340994 container died bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ab8bac37ffa8c0408a8a9f38dae02367cb34af39d746938e044f8e331d6996-merged.mount: Deactivated successfully.
Nov 24 20:33:15 compute-0 podman[284932]: 2025-11-24 20:33:15.750679998 +0000 UTC m=+0.233751439 container remove bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:33:15 compute-0 systemd[1]: libpod-conmon-bdbbe3e5d323e5039c140d423962d7d452a60fd4ec9e3c09c222727d1660bb34.scope: Deactivated successfully.
Nov 24 20:33:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:15 compute-0 podman[284972]: 2025-11-24 20:33:15.979781113 +0000 UTC m=+0.051430538 container create bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 24 20:33:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:16 compute-0 systemd[1]: Started libpod-conmon-bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675.scope.
Nov 24 20:33:16 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:16.037+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:16 compute-0 podman[284972]: 2025-11-24 20:33:15.960185118 +0000 UTC m=+0.031834563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:33:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ba12bba7f68f59f444d89ee1fe162b1822a03218c0b4dccf9d57e1ed43dd26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ba12bba7f68f59f444d89ee1fe162b1822a03218c0b4dccf9d57e1ed43dd26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ba12bba7f68f59f444d89ee1fe162b1822a03218c0b4dccf9d57e1ed43dd26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ba12bba7f68f59f444d89ee1fe162b1822a03218c0b4dccf9d57e1ed43dd26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:33:16 compute-0 podman[284972]: 2025-11-24 20:33:16.09093598 +0000 UTC m=+0.162585415 container init bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hoover, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:33:16 compute-0 podman[284972]: 2025-11-24 20:33:16.101972945 +0000 UTC m=+0.173622400 container start bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hoover, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:33:16 compute-0 podman[284972]: 2025-11-24 20:33:16.110577106 +0000 UTC m=+0.182226511 container attach bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:33:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:16.340+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:16 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:33:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326890323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:33:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:33:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326890323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
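
The two audit entries above show client.openstack dispatching the monitor commands {"prefix":"df"} and {"prefix":"osd pool get-quota"} against the volumes pool. The equivalent queries can be issued from the standard CLI; a minimal sketch, assuming a working client keyring on the node (the printed keys match current Ceph JSON output but are worth verifying against your release):

    #!/usr/bin/env python3
    # Sketch: reproduce the audited monitor commands from the CLI.
    import json
    import subprocess

    def mon_cmd(*args):
        # `ceph <args> --format json` returns the same JSON the mon dispatched above.
        return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"]))

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print("avail bytes:", df["stats"]["total_avail_bytes"])
    print("volumes quota_max_bytes:", quota.get("quota_max_bytes"))
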
Nov 24 20:33:17 compute-0 ceph-mon[75677]: pgmap v1504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3326890323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:33:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3326890323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:33:17 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:17.074+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:17 compute-0 stoic_hoover[284989]: {
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "osd_id": 2,
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "type": "bluestore"
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:     },
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "osd_id": 1,
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "type": "bluestore"
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:     },
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "osd_id": 0,
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:         "type": "bluestore"
Nov 24 20:33:17 compute-0 stoic_hoover[284989]:     }
Nov 24 20:33:17 compute-0 stoic_hoover[284989]: }
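
The stoic_hoover output above is the result of the cephadm invocation logged at 20:33:15 (`ceph-volume ... raw list --format json`). Unlike the LVM listing, it is keyed by OSD uuid rather than OSD id. A minimal sketch for inverting it into an osd_id -> device map, again assuming the payload was saved to a hypothetical local file raw_list.json:

    #!/usr/bin/env python3
    # Sketch: invert the ceph-volume raw list payload shown above.
    import json

    with open("raw_list.json") as f:
        raw = json.load(f)  # keyed by osd_uuid; osd_id is an integer field

    by_id = {entry["osd_id"]: entry["device"] for entry in raw.values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id} -> {by_id[osd_id]}")
        # e.g. osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0
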
Nov 24 20:33:17 compute-0 systemd[1]: libpod-bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675.scope: Deactivated successfully.
Nov 24 20:33:17 compute-0 systemd[1]: libpod-bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675.scope: Consumed 1.017s CPU time.
Nov 24 20:33:17 compute-0 podman[284972]: 2025-11-24 20:33:17.106562204 +0000 UTC m=+1.178211679 container died bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3ba12bba7f68f59f444d89ee1fe162b1822a03218c0b4dccf9d57e1ed43dd26-merged.mount: Deactivated successfully.
Nov 24 20:33:17 compute-0 podman[284972]: 2025-11-24 20:33:17.221189963 +0000 UTC m=+1.292839388 container remove bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hoover, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:33:17 compute-0 systemd[1]: libpod-conmon-bea43ae51f12b2cd493243248d657ba11ad66459702446f5d2e39bd4fe6b2675.scope: Deactivated successfully.
Nov 24 20:33:17 compute-0 sudo[284867]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:33:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:33:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:17.389+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:17 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7fe327c2-7488-4514-ae21-f19eee443eac does not exist
Nov 24 20:33:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 480857af-e394-47b8-ba32-aef1a41270f1 does not exist
Nov 24 20:33:17 compute-0 sudo[285037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:33:17 compute-0 sudo[285037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:17 compute-0 sudo[285037]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:17 compute-0 sudo[285062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:33:17 compute-0 sudo[285062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:33:17 compute-0 sudo[285062]: pam_unix(sudo:session): session closed for user root
Nov 24 20:33:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:18.029+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:33:18 compute-0 ceph-mon[75677]: pgmap v1505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:18.417+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:18 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:18 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:18.987+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:19.375+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:19 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:19.980+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:19 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:20 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:20.419+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:20 compute-0 ceph-mon[75677]: pgmap v1506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:21.021+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:21.371+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:21 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:21.994+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:21 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2517 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:22.356+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:22 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:22 compute-0 ceph-mon[75677]: pgmap v1507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:22 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2517 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
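
The SLOW_OPS health check above has now been re-announced with the same 32 blocked ops on osd.0 and osd.1 for roughly 2500 seconds, so this is a persistent stall rather than a transient spike. One way to watch for it programmatically is to poll the structured health report; a minimal sketch, assuming a working client keyring (the checks/summary key layout matches current `ceph health detail --format json` output):

    #!/usr/bin/env python3
    # Sketch: poll cluster health and surface any SLOW_OPS check like the ones logged above.
    import json
    import subprocess
    import time

    while True:
        health = json.loads(subprocess.check_output(
            ["ceph", "health", "detail", "--format", "json"]))
        slow = health.get("checks", {}).get("SLOW_OPS")
        if slow:
            # e.g. "32 slow ops, oldest one blocked for 2517 sec, ..."
            print(slow["summary"]["message"])
        time.sleep(30)
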
Nov 24 20:33:22 compute-0 podman[285087]: 2025-11-24 20:33:22.86488946 +0000 UTC m=+0.084408001 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
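
The ovn_metadata_agent line above is a periodic podman health check reporting health_status=healthy with a failing streak of 0; the probe it runs is the mounted /openstack/healthcheck script from the container's config_data. The same probe can be triggered by hand; a minimal sketch (wrapped in Python only to keep one language in these examples; `podman healthcheck run` exits 0 when the check passes):

    #!/usr/bin/env python3
    # Sketch: run the ovn_metadata_agent health probe on demand.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
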
Nov 24 20:33:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:22.973+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:22 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:23.397+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:23 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:23.993+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:23 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:33:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:24.417+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:24 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:33:24
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.mgr', 'backups']
Nov 24 20:33:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:33:24 compute-0 ceph-mon[75677]: pgmap v1508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:25.020+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:25 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:25.447+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:25 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:26.050+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:26 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:26.431+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:26 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:27.062+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:27 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2527 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:27 compute-0 ceph-mon[75677]: pgmap v1509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:27.455+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:27 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:28.105+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:28 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:28.493+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:28 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:28 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2527 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:28 compute-0 ceph-mon[75677]: pgmap v1510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:29.106+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:29 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:29 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:29.502+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:30.114+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:30 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:30 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:30.505+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:30 compute-0 ceph-mon[75677]: pgmap v1511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:31.080+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:31 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:31 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:31.500+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:32.047+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:32 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:32 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:32.492+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2532 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:32 compute-0 ceph-mon[75677]: pgmap v1512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:33.064+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:33 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:33 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:33.522+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:33 compute-0 podman[285107]: 2025-11-24 20:33:33.861985819 +0000 UTC m=+0.095562030 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 24 20:33:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2532 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:34.018+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:34 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:34 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:34.475+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:33:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:33:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:34 compute-0 ceph-mon[75677]: pgmap v1513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:35.043+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:35 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:35 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:35.452+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:36.016+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:36 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:36 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:36.417+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:36 compute-0 podman[285129]: 2025-11-24 20:33:36.913083595 +0000 UTC m=+0.145963469 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 20:33:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:37.052+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:37 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:37 compute-0 ceph-mon[75677]: pgmap v1514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:37 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:37.435+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:38.029+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:38 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:38.431+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:38.982+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:38 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:39 compute-0 ceph-mon[75677]: pgmap v1515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:39.468+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:39 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:40.027+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:40 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:40 compute-0 ceph-mon[75677]: pgmap v1516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:40.439+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:40 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:33:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:33:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:33:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:33:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:33:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:41.058+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:41 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:41.399+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:41 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:42.102+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:42 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2537 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:42 compute-0 ceph-mon[75677]: pgmap v1517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:42 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2537 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:42.410+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:42 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:43.096+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:43 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:43.447+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:43 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:44.135+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:44 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:44 compute-0 ceph-mon[75677]: pgmap v1518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:44.490+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:44 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:45.162+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:45 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:45.515+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:45 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:46.146+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:46 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:46 compute-0 ceph-mon[75677]: pgmap v1519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:46.520+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:46 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:47.181+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:47 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2546 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:47 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2546 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:47.498+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:47 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:48 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:48.213+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:48 compute-0 ceph-mon[75677]: pgmap v1520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:48.478+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:48 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:49.204+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:49 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:49.508+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:49 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:50.190+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:50 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:50 compute-0 ceph-mon[75677]: pgmap v1521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:50.523+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:50 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:51.166+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:51 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:51.476+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:51 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:52.132+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:52 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:52.430+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:52 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2551 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:52 compute-0 ceph-mon[75677]: pgmap v1522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:53.088+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:53 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:53.387+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:53 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2551 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:33:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:53 compute-0 podman[285156]: 2025-11-24 20:33:53.862208968 +0000 UTC m=+0.095313573 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:33:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:54.060+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:54 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:54.381+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:54 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:33:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:33:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:33:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:33:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:33:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:33:54 compute-0 ceph-mon[75677]: pgmap v1523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:55.108+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:55 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:55.349+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:55 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:56.156+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:56 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:56.309+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:56 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:56 compute-0 ceph-mon[75677]: pgmap v1524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:57.185+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:57 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:33:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:57.316+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:57 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:58.217+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:58 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:58.342+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:58 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:58 compute-0 ceph-mon[75677]: pgmap v1525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:33:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:59 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:33:59.058 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:33:59 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:33:59.060 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:33:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:33:59.233+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:59 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:33:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:33:59.355+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:59 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:33:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:33:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:33:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:00.236+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:00 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:00.354+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:00 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:00 compute-0 ceph-mon[75677]: pgmap v1526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.944821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016440944916, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 995, "num_deletes": 251, "total_data_size": 1072504, "memory_usage": 1095400, "flush_reason": "Manual Compaction"}
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016440956208, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1045514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42846, "largest_seqno": 43840, "table_properties": {"data_size": 1040726, "index_size": 2184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13013, "raw_average_key_size": 21, "raw_value_size": 1030104, "raw_average_value_size": 1683, "num_data_blocks": 96, "num_entries": 612, "num_filter_entries": 612, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016383, "oldest_key_time": 1764016383, "file_creation_time": 1764016440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 11482 microseconds, and 4634 cpu microseconds.
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.956291) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1045514 bytes OK
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.956329) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.957910) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.957933) EVENT_LOG_v1 {"time_micros": 1764016440957925, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.957962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1067448, prev total WAL file size 1067448, number of live WAL files 2.
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.958826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1021KB)], [89(8888KB)]
Nov 24 20:34:00 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016440958907, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10146829, "oldest_snapshot_seqno": -1}
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 10473 keys, 8685463 bytes, temperature: kUnknown
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016441085077, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 8685463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8627484, "index_size": 30667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26245, "raw_key_size": 284966, "raw_average_key_size": 27, "raw_value_size": 8447483, "raw_average_value_size": 806, "num_data_blocks": 1151, "num_entries": 10473, "num_filter_entries": 10473, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.085460) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8685463 bytes
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.097618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.4 rd, 68.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(18.0) write-amplify(8.3) OK, records in: 10987, records dropped: 514 output_compression: NoCompression
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.097654) EVENT_LOG_v1 {"time_micros": 1764016441097639, "job": 52, "event": "compaction_finished", "compaction_time_micros": 126271, "compaction_time_cpu_micros": 39015, "output_level": 6, "num_output_files": 1, "total_output_size": 8685463, "num_input_records": 10987, "num_output_records": 10473, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016441097973, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016441099715, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:00.958707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.099835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.099843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.099845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.099846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:34:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:34:01.099849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:34:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:01.227+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:01 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:01.316+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:01 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:02.187+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:02 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:02.353+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:02 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:03 compute-0 ceph-mon[75677]: pgmap v1527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:03.176+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:03 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:03.370+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:03 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:04.130+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:04 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:04.353+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:04 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:04 compute-0 podman[285173]: 2025-11-24 20:34:04.856520733 +0000 UTC m=+0.086475616 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 24 20:34:05 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:34:05.063 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:34:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:05.143+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:05 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:05.361+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:05 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:05 compute-0 ceph-mon[75677]: pgmap v1528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:06.111+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:06 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:06.386+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:06 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:06 compute-0 ceph-mon[75677]: pgmap v1529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:07.101+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:07 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2567 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:07.400+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:07 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:07 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2567 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:07 compute-0 podman[285194]: 2025-11-24 20:34:07.930193456 +0000 UTC m=+0.145750404 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:34:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:08.102+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:08 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:08.421+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:08 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:08 compute-0 ceph-mon[75677]: pgmap v1530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:09.150+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:09 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:09.375+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:09 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:34:09.391 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:34:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:34:09.391 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:34:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:34:09.392 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:34:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:10.108+0000 7f2ca3ee7640 -1 osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:10 compute-0 ceph-osd[88624]: osd.0 138 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:10.420+0000 7f1a67169640 -1 osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:10 compute-0 ceph-osd[89640]: osd.1 138 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 24 20:34:10 compute-0 ceph-mon[75677]: pgmap v1531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 24 20:34:10 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 24 20:34:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:11.067+0000 7f2ca3ee7640 -1 osd.0 139 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:11 compute-0 ceph-osd[88624]: osd.0 139 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:11.392+0000 7f1a67169640 -1 osd.1 139 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:11 compute-0 ceph-osd[89640]: osd.1 139 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:11 compute-0 ceph-mon[75677]: osdmap e139: 3 total, 3 up, 3 in
Nov 24 20:34:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 716 B/s wr, 3 op/s
Nov 24 20:34:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:12.029+0000 7f2ca3ee7640 -1 osd.0 139 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:12 compute-0 ceph-osd[88624]: osd.0 139 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:12.389+0000 7f1a67169640 -1 osd.1 139 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:12 compute-0 ceph-osd[89640]: osd.1 139 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:12 compute-0 ceph-mon[75677]: pgmap v1533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 716 B/s wr, 3 op/s
Nov 24 20:34:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:13.042+0000 7f2ca3ee7640 -1 osd.0 139 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:13 compute-0 ceph-osd[88624]: osd.0 139 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:13.397+0000 7f1a67169640 -1 osd.1 139 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:13 compute-0 ceph-osd[89640]: osd.1 139 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 24 20:34:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 24 20:34:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 716 B/s wr, 3 op/s
Nov 24 20:34:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 24 20:34:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:14.395+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:14 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:14 compute-0 ceph-mon[75677]: pgmap v1534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 716 B/s wr, 3 op/s
Nov 24 20:34:14 compute-0 ceph-mon[75677]: osdmap e140: 3 total, 3 up, 3 in
Nov 24 20:34:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:15.066+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:15 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:15.405+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:15 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 35 op/s
Nov 24 20:34:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:16.050+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:16 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:16.419+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:16 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:34:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3414670976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:34:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:34:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3414670976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:34:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:17.050+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:17 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:17 compute-0 ceph-mon[75677]: pgmap v1536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 35 op/s
Nov 24 20:34:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3414670976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:34:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3414670976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:34:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:17.459+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:17 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:17 compute-0 sudo[285222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:17 compute-0 sudo[285222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:17 compute-0 sudo[285222]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:17 compute-0 sudo[285247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:34:17 compute-0 sudo[285247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:17 compute-0 sudo[285247]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:17 compute-0 sudo[285272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:17 compute-0 sudo[285272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:17 compute-0 sudo[285272]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:17 compute-0 sudo[285297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:34:17 compute-0 sudo[285297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 35 op/s
Nov 24 20:34:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:18.076+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:18 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:18 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:18.505+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:18 compute-0 sudo[285297]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:34:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:34:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:34:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:34:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:34:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:34:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d8d10ea9-03be-4628-a2ff-c0996f4648d9 does not exist
Nov 24 20:34:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c7fb2a83-bd26-45b9-9e1b-11ae4e21a60a does not exist
Nov 24 20:34:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 49106797-4a18-48f0-889b-888477cce551 does not exist
Nov 24 20:34:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:34:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:34:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:34:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:34:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:34:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:34:18 compute-0 sudo[285354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:18 compute-0 sudo[285354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:18 compute-0 sudo[285354]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:18 compute-0 sudo[285379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:34:18 compute-0 sudo[285379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:18 compute-0 sudo[285379]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:18 compute-0 sudo[285404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:18 compute-0 sudo[285404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:18 compute-0 sudo[285404]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:18 compute-0 sudo[285429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:34:18 compute-0 sudo[285429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:19.055+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:19 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:19 compute-0 ceph-mon[75677]: pgmap v1537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 35 op/s
Nov 24 20:34:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:34:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:34:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:34:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:34:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:34:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:34:19 compute-0 podman[285494]: 2025-11-24 20:34:19.330711347 +0000 UTC m=+0.040852874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:34:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:19.498+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:19 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:19 compute-0 podman[285494]: 2025-11-24 20:34:19.502846797 +0000 UTC m=+0.212988314 container create 02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:34:19 compute-0 systemd[1]: Started libpod-conmon-02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c.scope.
Nov 24 20:34:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:34:19 compute-0 podman[285494]: 2025-11-24 20:34:19.931434083 +0000 UTC m=+0.641575580 container init 02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:34:19 compute-0 podman[285494]: 2025-11-24 20:34:19.940053263 +0000 UTC m=+0.650194740 container start 02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:34:19 compute-0 nervous_lehmann[285511]: 167 167
Nov 24 20:34:19 compute-0 systemd[1]: libpod-02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c.scope: Deactivated successfully.
Nov 24 20:34:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 KiB/s wr, 40 op/s
Nov 24 20:34:20 compute-0 podman[285494]: 2025-11-24 20:34:20.044041538 +0000 UTC m=+0.754183025 container attach 02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:34:20 compute-0 podman[285494]: 2025-11-24 20:34:20.044695685 +0000 UTC m=+0.754837162 container died 02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:34:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:20.047+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:20 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:20.451+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:20 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:20 compute-0 ceph-mon[75677]: pgmap v1538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 KiB/s wr, 40 op/s
Nov 24 20:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc65f5cf8dac0f57a3a8332bf015eba61cb3f153cf2ec36bdba45f18bb55749c-merged.mount: Deactivated successfully.
Nov 24 20:34:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:21.072+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:21 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:21 compute-0 podman[285494]: 2025-11-24 20:34:21.307419805 +0000 UTC m=+2.017561322 container remove 02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:34:21 compute-0 systemd[1]: libpod-conmon-02031db875f147b6846857e9f25581409f9f7e6ee9056ee4d8ca88b98ebd5e8c.scope: Deactivated successfully.
Nov 24 20:34:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:21.483+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:21 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:21 compute-0 podman[285534]: 2025-11-24 20:34:21.518799965 +0000 UTC m=+0.085965502 container create 3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:34:21 compute-0 podman[285534]: 2025-11-24 20:34:21.461265645 +0000 UTC m=+0.028431152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:34:21 compute-0 systemd[1]: Started libpod-conmon-3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80.scope.
Nov 24 20:34:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c75a55cf0f4f9a5e5ad5972a22ffcb9edd5236757a5d389418a0728de6d4ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c75a55cf0f4f9a5e5ad5972a22ffcb9edd5236757a5d389418a0728de6d4ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c75a55cf0f4f9a5e5ad5972a22ffcb9edd5236757a5d389418a0728de6d4ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c75a55cf0f4f9a5e5ad5972a22ffcb9edd5236757a5d389418a0728de6d4ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c75a55cf0f4f9a5e5ad5972a22ffcb9edd5236757a5d389418a0728de6d4ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:21 compute-0 podman[285534]: 2025-11-24 20:34:21.755511184 +0000 UTC m=+0.322676721 container init 3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:34:21 compute-0 podman[285534]: 2025-11-24 20:34:21.76323044 +0000 UTC m=+0.330395937 container start 3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:34:21 compute-0 podman[285534]: 2025-11-24 20:34:21.904219226 +0000 UTC m=+0.471384713 container attach 3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:34:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 KiB/s wr, 36 op/s
Nov 24 20:34:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:22.069+0000 7f2ca3ee7640 -1 osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:22 compute-0 ceph-osd[88624]: osd.0 140 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2577 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 24 20:34:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:22.469+0000 7f1a67169640 -1 osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:22 compute-0 ceph-osd[89640]: osd.1 140 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 24 20:34:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 24 20:34:22 compute-0 ceph-mon[75677]: pgmap v1539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 KiB/s wr, 36 op/s
Nov 24 20:34:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:22 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2577 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:22 compute-0 ceph-mon[75677]: osdmap e141: 3 total, 3 up, 3 in
Nov 24 20:34:22 compute-0 recursing_pasteur[285549]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:34:22 compute-0 recursing_pasteur[285549]: --> relative data size: 1.0
Nov 24 20:34:22 compute-0 recursing_pasteur[285549]: --> All data devices are unavailable
Nov 24 20:34:22 compute-0 systemd[1]: libpod-3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80.scope: Deactivated successfully.
Nov 24 20:34:22 compute-0 systemd[1]: libpod-3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80.scope: Consumed 1.074s CPU time.
Nov 24 20:34:22 compute-0 podman[285534]: 2025-11-24 20:34:22.90342458 +0000 UTC m=+1.470590107 container died 3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c75a55cf0f4f9a5e5ad5972a22ffcb9edd5236757a5d389418a0728de6d4ef-merged.mount: Deactivated successfully.
Nov 24 20:34:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:23.076+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:23 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:23 compute-0 podman[285534]: 2025-11-24 20:34:23.258875138 +0000 UTC m=+1.826040655 container remove 3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:34:23 compute-0 systemd[1]: libpod-conmon-3d3a67149bb1edc8e00c5ab1535a38d97e80b4bf9c60313c034a20daab7e0d80.scope: Deactivated successfully.
Nov 24 20:34:23 compute-0 sudo[285429]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:23 compute-0 sudo[285592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:23 compute-0 sudo[285592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:23 compute-0 sudo[285592]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:23 compute-0 sudo[285617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:34:23 compute-0 sudo[285617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:23 compute-0 sudo[285617]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:23.488+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:23 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:23 compute-0 sudo[285642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:23 compute-0 sudo[285642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:23 compute-0 sudo[285642]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:23 compute-0 sudo[285667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:34:23 compute-0 sudo[285667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:24.003058444 +0000 UTC m=+0.092230140 container create 6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:23.955819269 +0000 UTC m=+0.044991055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:34:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:24.046+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:24 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:24 compute-0 systemd[1]: Started libpod-conmon-6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499.scope.
Nov 24 20:34:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:34:24 compute-0 podman[285747]: 2025-11-24 20:34:24.180416513 +0000 UTC m=+0.124939796 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:24.193818572 +0000 UTC m=+0.282990298 container init 6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:24.206817521 +0000 UTC m=+0.295989217 container start 6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_taussig, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:34:24 compute-0 nervous_taussig[285767]: 167 167
Nov 24 20:34:24 compute-0 systemd[1]: libpod-6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499.scope: Deactivated successfully.
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:24.235853708 +0000 UTC m=+0.325025614 container attach 6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:24.236507545 +0000 UTC m=+0.325679241 container died 6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e9579d85eb198b30e4b2514820be5e9d64c18bd12e9467aa1c10e6ba41e5e2e-merged.mount: Deactivated successfully.
Nov 24 20:34:24 compute-0 podman[285733]: 2025-11-24 20:34:24.34644567 +0000 UTC m=+0.435617356 container remove 6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:34:24 compute-0 systemd[1]: libpod-conmon-6c0d460a6d2a17c5ac393ae8717bf17c935856541126484db0282161cb0bd499.scope: Deactivated successfully.
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:34:24
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms']
Nov 24 20:34:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:34:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:24.512+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:24 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:24 compute-0 podman[285792]: 2025-11-24 20:34:24.547813771 +0000 UTC m=+0.066251975 container create 0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:34:24 compute-0 podman[285792]: 2025-11-24 20:34:24.506023713 +0000 UTC m=+0.024461947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:34:24 compute-0 systemd[1]: Started libpod-conmon-0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2.scope.
Nov 24 20:34:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64074020d65ef9038f8704aa2c086d017e58109d091b16a6a2310dcbf61e36e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64074020d65ef9038f8704aa2c086d017e58109d091b16a6a2310dcbf61e36e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64074020d65ef9038f8704aa2c086d017e58109d091b16a6a2310dcbf61e36e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64074020d65ef9038f8704aa2c086d017e58109d091b16a6a2310dcbf61e36e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:24 compute-0 podman[285792]: 2025-11-24 20:34:24.795671617 +0000 UTC m=+0.314109861 container init 0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:34:24 compute-0 podman[285792]: 2025-11-24 20:34:24.804022771 +0000 UTC m=+0.322460985 container start 0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:34:24 compute-0 podman[285792]: 2025-11-24 20:34:24.814021199 +0000 UTC m=+0.332459443 container attach 0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:34:25 compute-0 ceph-mon[75677]: pgmap v1541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Nov 24 20:34:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:25.018+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:25 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]: {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:     "0": [
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:         {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "devices": [
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "/dev/loop3"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             ],
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_name": "ceph_lv0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_size": "21470642176",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "name": "ceph_lv0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "tags": {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cluster_name": "ceph",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.crush_device_class": "",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.encrypted": "0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osd_id": "0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.type": "block",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.vdo": "0"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             },
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "type": "block",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "vg_name": "ceph_vg0"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:         }
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:     ],
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:     "1": [
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:         {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "devices": [
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "/dev/loop4"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             ],
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_name": "ceph_lv1",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_size": "21470642176",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "name": "ceph_lv1",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "tags": {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cluster_name": "ceph",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.crush_device_class": "",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.encrypted": "0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osd_id": "1",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.type": "block",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.vdo": "0"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             },
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "type": "block",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "vg_name": "ceph_vg1"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:         }
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:     ],
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:     "2": [
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:         {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "devices": [
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "/dev/loop5"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             ],
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_name": "ceph_lv2",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_size": "21470642176",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "name": "ceph_lv2",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "tags": {
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.cluster_name": "ceph",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.crush_device_class": "",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.encrypted": "0",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osd_id": "2",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.type": "block",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:                 "ceph.vdo": "0"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             },
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "type": "block",
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:             "vg_name": "ceph_vg2"
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:         }
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]:     ]
Nov 24 20:34:25 compute-0 fervent_mestorf[285808]: }
Nov 24 20:34:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:25.550+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:25 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:25 compute-0 systemd[1]: libpod-0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2.scope: Deactivated successfully.
Nov 24 20:34:25 compute-0 podman[285792]: 2025-11-24 20:34:25.568346737 +0000 UTC m=+1.086784981 container died 0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mestorf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b64074020d65ef9038f8704aa2c086d017e58109d091b16a6a2310dcbf61e36e-merged.mount: Deactivated successfully.
Nov 24 20:34:25 compute-0 podman[285792]: 2025-11-24 20:34:25.627527312 +0000 UTC m=+1.145965516 container remove 0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:34:25 compute-0 systemd[1]: libpod-conmon-0c9adbfb3fa7d113a00c49d73450f80f6c39d3d9694cf7310a91e199d13179d2.scope: Deactivated successfully.
Nov 24 20:34:25 compute-0 sudo[285667]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:25 compute-0 sudo[285828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:25 compute-0 sudo[285828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:25 compute-0 sudo[285828]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:25 compute-0 sudo[285853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:34:25 compute-0 sudo[285853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:25 compute-0 sudo[285853]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:25 compute-0 sudo[285878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:25 compute-0 sudo[285878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:25 compute-0 sudo[285878]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:25 compute-0 sudo[285903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:34:25 compute-0 sudo[285903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 20:34:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:26.002+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:26 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.334299166 +0000 UTC m=+0.053131034 container create ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:34:26 compute-0 systemd[1]: Started libpod-conmon-ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87.scope.
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.305637639 +0000 UTC m=+0.024469537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:34:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.424166782 +0000 UTC m=+0.142998680 container init ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.433092831 +0000 UTC m=+0.151924699 container start ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.436944775 +0000 UTC m=+0.155776673 container attach ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:34:26 compute-0 nostalgic_wing[285984]: 167 167
Nov 24 20:34:26 compute-0 systemd[1]: libpod-ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87.scope: Deactivated successfully.
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.439456682 +0000 UTC m=+0.158288570 container died ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f90ee278e48d15e605f45197ab39611a94f8aed8b2523ab7eb0a784e5650818b-merged.mount: Deactivated successfully.
Nov 24 20:34:26 compute-0 podman[285968]: 2025-11-24 20:34:26.480745847 +0000 UTC m=+0.199577715 container remove ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:34:26 compute-0 systemd[1]: libpod-conmon-ce9543928347a8e53823a0b583997ab746e0d59090dc262f05e1d8109c16de87.scope: Deactivated successfully.
Nov 24 20:34:26 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:26.525+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:26 compute-0 podman[286009]: 2025-11-24 20:34:26.650326708 +0000 UTC m=+0.053809861 container create e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:34:26 compute-0 systemd[1]: Started libpod-conmon-e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700.scope.
Nov 24 20:34:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:34:26 compute-0 podman[286009]: 2025-11-24 20:34:26.62237765 +0000 UTC m=+0.025860823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26467c942445f06a7c9c8b944bc01961950775e0e0a8fa01aaf2cc95c91bb994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26467c942445f06a7c9c8b944bc01961950775e0e0a8fa01aaf2cc95c91bb994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26467c942445f06a7c9c8b944bc01961950775e0e0a8fa01aaf2cc95c91bb994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26467c942445f06a7c9c8b944bc01961950775e0e0a8fa01aaf2cc95c91bb994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:34:26 compute-0 podman[286009]: 2025-11-24 20:34:26.738530389 +0000 UTC m=+0.142013542 container init e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:34:26 compute-0 podman[286009]: 2025-11-24 20:34:26.755897225 +0000 UTC m=+0.159380378 container start e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:34:26 compute-0 podman[286009]: 2025-11-24 20:34:26.760037866 +0000 UTC m=+0.163521079 container attach e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:34:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:27.015+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:27 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:27 compute-0 ceph-mon[75677]: pgmap v1542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 20:34:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2586 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:27 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:27.493+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:27 compute-0 loving_wilson[286026]: {
Nov 24 20:34:27 compute-0 loving_wilson[286026]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "osd_id": 2,
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "type": "bluestore"
Nov 24 20:34:27 compute-0 loving_wilson[286026]:     },
Nov 24 20:34:27 compute-0 loving_wilson[286026]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "osd_id": 1,
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "type": "bluestore"
Nov 24 20:34:27 compute-0 loving_wilson[286026]:     },
Nov 24 20:34:27 compute-0 loving_wilson[286026]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "osd_id": 0,
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:34:27 compute-0 loving_wilson[286026]:         "type": "bluestore"
Nov 24 20:34:27 compute-0 loving_wilson[286026]:     }
Nov 24 20:34:27 compute-0 loving_wilson[286026]: }
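[editor's annotation] The JSON the loving_wilson container emits above, split across journal lines, is a per-OSD inventory keyed by osd_uuid, tying each of osd.0/1/2 to its BlueStore logical volume. A minimal parsing sketch, assuming the journald prefix format shown in this log; parse_inventory is a hypothetical helper, not Ceph tooling:

    # Hypothetical helper: strip the "Mon DD HH:MM:SS host unit[pid]: "
    # journal prefix from each line, reassemble the JSON, and parse it.
    import json
    import re

    PREFIX = re.compile(r"^\w{3} \d+ [\d:]+ \S+ \S+\[\d+\]: ")

    def parse_inventory(journal_lines):
        body = "\n".join(PREFIX.sub("", line) for line in journal_lines)
        return json.loads(body)

    # inventory = parse_inventory(lines)
    # for osd_uuid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
    #     print(osd["osd_id"], osd["device"], osd["type"])
    # -> 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore, etc.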
Nov 24 20:34:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 20:34:28 compute-0 systemd[1]: libpod-e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700.scope: Deactivated successfully.
Nov 24 20:34:28 compute-0 podman[286009]: 2025-11-24 20:34:28.015352878 +0000 UTC m=+1.418836001 container died e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:34:28 compute-0 systemd[1]: libpod-e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700.scope: Consumed 1.199s CPU time.
Nov 24 20:34:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:28 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2586 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:28.052+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:28 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-26467c942445f06a7c9c8b944bc01961950775e0e0a8fa01aaf2cc95c91bb994-merged.mount: Deactivated successfully.
Nov 24 20:34:28 compute-0 podman[286009]: 2025-11-24 20:34:28.098484774 +0000 UTC m=+1.501967907 container remove e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:34:28 compute-0 systemd[1]: libpod-conmon-e6f510fedca5f20a6a09305412bdc471d9b5f4fd5c8c9037c7f62e6f2a062700.scope: Deactivated successfully.
Nov 24 20:34:28 compute-0 sudo[285903]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:34:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:34:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:34:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:34:28 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev de3cd47a-6d98-457d-b807-7f5a140a0b00 does not exist
Nov 24 20:34:28 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 69ee4d7f-fffd-40c4-82ca-203118c03dca does not exist
Nov 24 20:34:28 compute-0 sudo[286071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:34:28 compute-0 sudo[286071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:28 compute-0 sudo[286071]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:28 compute-0 sudo[286096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:34:28 compute-0 sudo[286096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:34:28 compute-0 sudo[286096]: pam_unix(sudo:session): session closed for user root
Nov 24 20:34:28 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:28.499+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:29.055+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:29 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:29 compute-0 ceph-mon[75677]: pgmap v1543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 20:34:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:34:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:34:29 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:29.517+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:30.066+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:30 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:30 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:30.491+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:31.017+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:31 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:31 compute-0 ceph-mon[75677]: pgmap v1544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:31 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:31.541+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:32.060+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:32 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:32 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:32.530+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:33.073+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:33 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:33 compute-0 ceph-mon[75677]: pgmap v1545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:33.531+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:33 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:34.101+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:34 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:34.553+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:34 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:34:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
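[editor's annotation] The pg_autoscaler numbers above are internally consistent: each logged pg target is the pool's space-usage fraction times its bias times the cluster PG budget. A sketch that reproduces three of the logged values, assuming Ceph's default mon_target_pg_per_osd of 100 and the 3 OSDs the osdmap lines report ("3 total, 3 up, 3 in"):

    # Reproduce the autoscaler's "pg target" from the logged inputs.
    # budget = mon_target_pg_per_osd (100, assumed default) * 3 OSDs = 300.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0008637525843263658, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    budget = 100 * 3
    for name, (usage, bias) in pools.items():
        print(f"{name}: {usage * bias * budget}")
    # Up to float rounding these match the logged targets, e.g.
    # vms -> 0.2591257752979..., .mgr -> 0.002155724995...

Since every target is far below the pool's current pg_num, "quantized" equals "current" for all pools and the autoscaler changes nothing on this pass.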
Nov 24 20:34:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:35.076+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:35 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:35 compute-0 ceph-mon[75677]: pgmap v1546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:35.556+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:35 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:35 compute-0 podman[286121]: 2025-11-24 20:34:35.892022544 +0000 UTC m=+0.112855212 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
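[editor's annotation] The podman health_status events in this log (multipathd above, ovn_controller a few seconds later) carry the status inline in the label list. A rough extraction sketch, assuming this journal's formatting; the regex is illustrative, not a stable podman interface:

    # Illustrative only: pull (container name, health status) out of a
    # podman "container health_status" journal line like the one above.
    import re

    EVENT = re.compile(
        r"container health_status \w+ "
        r"\(image=[^,]+, name=([^,]+), health_status=([^,]+)"
    )

    def health_of(line):
        m = EVENT.search(line)
        return m.groups() if m else None

    # health_of(line) -> ("multipathd", "healthy") for the entry above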
Nov 24 20:34:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:36.120+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:36 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:36.586+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:36 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:37.105+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:37 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:37 compute-0 ceph-mon[75677]: pgmap v1547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2596 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:37.568+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:37 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:38.057+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:38 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:38 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2596 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:38.559+0000 7f1a67169640 -1 osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:38 compute-0 ceph-osd[89640]: osd.1 141 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:38 compute-0 podman[286141]: 2025-11-24 20:34:38.921206954 +0000 UTC m=+0.150559132 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 20:34:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:39.024+0000 7f2ca3ee7640 -1 osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:39 compute-0 ceph-osd[88624]: osd.0 141 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 24 20:34:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:39 compute-0 ceph-mon[75677]: pgmap v1548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 24 20:34:39 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 24 20:34:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:39.559+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:39 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 24 20:34:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:40.066+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:40 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:40 compute-0 ceph-mon[75677]: osdmap e142: 3 total, 3 up, 3 in
Nov 24 20:34:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:34:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:34:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:34:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:34:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:34:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:40.571+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:40 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:41.039+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:41 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:41 compute-0 ceph-mon[75677]: pgmap v1550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 24 20:34:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:41.530+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:41 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:42.073+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:42 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:42 compute-0 ceph-mon[75677]: pgmap v1551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2601 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:42.508+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:42 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:43.050+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:43 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2601 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:43.491+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:43 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:44.031+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:44 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:44 compute-0 ceph-mon[75677]: pgmap v1552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:44.525+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:44 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:45.022+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:45 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:45.481+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:45 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:46.010+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:46 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:46 compute-0 ceph-mon[75677]: pgmap v1553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:46.460+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:46 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:46.994+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:46 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2607 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:47.510+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:47 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:48.012+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:48 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:48 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2607 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:48 compute-0 ceph-mon[75677]: pgmap v1554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:34:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:48.495+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:48 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:49 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:49.008+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:49.474+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:49 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Nov 24 20:34:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:50.033+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:50 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:50.485+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:50 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:50 compute-0 ceph-mon[75677]: pgmap v1555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Nov 24 20:34:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:51.059+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:51 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:51.463+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:51 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.2 KiB/s wr, 12 op/s
Nov 24 20:34:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:52.015+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:52 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:52.449+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:52 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:52 compute-0 ceph-mon[75677]: pgmap v1556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.2 KiB/s wr, 12 op/s
Nov 24 20:34:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:53.000+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:53 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:53.414+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:53 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:34:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:53.989+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:53 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:54.374+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:54 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:34:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:34:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:34:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:34:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:34:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:34:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:54 compute-0 ceph-mon[75677]: pgmap v1557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:54 compute-0 podman[286169]: 2025-11-24 20:34:54.864182535 +0000 UTC m=+0.084859023 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 20:34:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:55.034+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:55 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:55.409+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:55 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:56.079+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:56 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:56.436+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:56 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:56 compute-0 sshd-session[286188]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Nov 24 20:34:56 compute-0 ceph-mon[75677]: pgmap v1558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:57.129+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:57 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:34:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:57.481+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:57 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:58.175+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:58 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:58.523+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:58 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:58 compute-0 ceph-mon[75677]: pgmap v1559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:34:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:34:59.206+0000 7f2ca3ee7640 -1 osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:59 compute-0 ceph-osd[88624]: osd.0 142 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:34:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:34:59.557+0000 7f1a67169640 -1 osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:59 compute-0 ceph-osd[89640]: osd.1 142 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:34:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 24 20:34:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:34:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:34:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 24 20:34:59 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 24 20:34:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 24 20:35:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:00.194+0000 7f2ca3ee7640 -1 osd.0 143 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:00 compute-0 ceph-osd[88624]: osd.0 143 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:00.530+0000 7f1a67169640 -1 osd.1 143 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:00 compute-0 ceph-osd[89640]: osd.1 143 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 24 20:35:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:00 compute-0 ceph-mon[75677]: osdmap e143: 3 total, 3 up, 3 in
Nov 24 20:35:00 compute-0 ceph-mon[75677]: pgmap v1561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 24 20:35:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 24 20:35:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 24 20:35:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:01.146+0000 7f2ca3ee7640 -1 osd.0 144 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:01 compute-0 ceph-osd[88624]: osd.0 144 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:01.570+0000 7f1a67169640 -1 osd.1 144 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:01 compute-0 ceph-osd[89640]: osd.1 144 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:01 compute-0 ceph-mon[75677]: osdmap e144: 3 total, 3 up, 3 in
Nov 24 20:35:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 29 op/s
Nov 24 20:35:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:02.147+0000 7f2ca3ee7640 -1 osd.0 144 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:02 compute-0 ceph-osd[88624]: osd.0 144 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2617 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:35:02.598 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:35:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:35:02.600 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:35:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:35:02.600 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:35:02 compute-0 ceph-osd[89640]: osd.1 144 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:02.609+0000 7f1a67169640 -1 osd.1 144 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 24 20:35:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 24 20:35:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 24 20:35:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:02 compute-0 ceph-mon[75677]: pgmap v1563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 29 op/s
Nov 24 20:35:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:02 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2617 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:03.157+0000 7f2ca3ee7640 -1 osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:03 compute-0 ceph-osd[88624]: osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:03.611+0000 7f1a67169640 -1 osd.1 145 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:03 compute-0 ceph-osd[89640]: osd.1 145 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:03 compute-0 ceph-mon[75677]: osdmap e145: 3 total, 3 up, 3 in
Nov 24 20:35:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.3 KiB/s wr, 41 op/s
Nov 24 20:35:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:04.200+0000 7f2ca3ee7640 -1 osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:04 compute-0 ceph-osd[88624]: osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:04 compute-0 ceph-osd[89640]: osd.1 145 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:04.577+0000 7f1a67169640 -1 osd.1 145 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:05 compute-0 ceph-mon[75677]: pgmap v1565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.3 KiB/s wr, 41 op/s
Nov 24 20:35:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:05.250+0000 7f2ca3ee7640 -1 osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:05 compute-0 ceph-osd[88624]: osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:05 compute-0 ceph-osd[89640]: osd.1 145 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:05.613+0000 7f1a67169640 -1 osd.1 145 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 9.0 KiB/s wr, 105 op/s
Nov 24 20:35:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 24 20:35:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 24 20:35:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 24 20:35:06 compute-0 sshd-session[286188]: Connection closed by authenticating user root 139.19.117.130 port 44892 [preauth]
Nov 24 20:35:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:06.300+0000 7f2ca3ee7640 -1 osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:06 compute-0 ceph-osd[88624]: osd.0 145 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:06 compute-0 ceph-osd[89640]: osd.1 146 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:06.569+0000 7f1a67169640 -1 osd.1 146 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:06 compute-0 podman[286190]: 2025-11-24 20:35:06.86793324 +0000 UTC m=+0.093418612 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:35:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:07 compute-0 ceph-mon[75677]: pgmap v1566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 9.0 KiB/s wr, 105 op/s
Nov 24 20:35:07 compute-0 ceph-mon[75677]: osdmap e146: 3 total, 3 up, 3 in
Nov 24 20:35:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:07.349+0000 7f2ca3ee7640 -1 osd.0 146 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:07 compute-0 ceph-osd[88624]: osd.0 146 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2626 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 24 20:35:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 24 20:35:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 24 20:35:07 compute-0 ceph-osd[89640]: osd.1 146 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:07.579+0000 7f1a67169640 -1 osd.1 146 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 5.7 KiB/s wr, 68 op/s
Nov 24 20:35:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:08 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2626 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:08 compute-0 ceph-mon[75677]: osdmap e147: 3 total, 3 up, 3 in
Nov 24 20:35:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:08.347+0000 7f2ca3ee7640 -1 osd.0 146 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:08 compute-0 ceph-osd[88624]: osd.0 146 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:08 compute-0 ceph-osd[89640]: osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:08.551+0000 7f1a67169640 -1 osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:09 compute-0 ceph-mon[75677]: pgmap v1569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 5.7 KiB/s wr, 68 op/s
Nov 24 20:35:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:09.374+0000 7f2ca3ee7640 -1 osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:09 compute-0 ceph-osd[88624]: osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:35:09.392 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:35:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:35:09.392 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:35:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:35:09.392 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:35:09 compute-0 ceph-osd[89640]: osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:09.598+0000 7f1a67169640 -1 osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:09 compute-0 podman[286210]: 2025-11-24 20:35:09.888613901 +0000 UTC m=+0.120042225 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
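The podman record above is a health_status event for the ovn_controller container: the configured healthcheck mounts /var/lib/openstack/healthchecks/ovn_controller and runs /openstack/healthcheck inside the container, and this run reports health_status=healthy with health_failing_streak=0. A rough parser for the status fields in such an event line, a sketch keyed to the field layout shown above:

    import re

    def parse_health_event(line):
        """Pull container name and health fields out of a podman
        'container health_status' journal line."""
        fields = {}
        for key in ("name", "health_status", "health_failing_streak"):
            m = re.search(rf"[ (,]{key}=([^,)]+)", line)
            if m:
                fields[key] = m.group(1)
        return fields

    # parse_health_event(event_line)
    # -> {'name': 'ovn_controller', 'health_status': 'healthy',
    #     'health_failing_streak': '0'}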
Nov 24 20:35:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.9 KiB/s wr, 58 op/s
Nov 24 20:35:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:10.329+0000 7f2ca3ee7640 -1 osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:10 compute-0 ceph-osd[88624]: osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:10 compute-0 ceph-osd[89640]: osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:10.550+0000 7f1a67169640 -1 osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:11 compute-0 ceph-mon[75677]: pgmap v1570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.9 KiB/s wr, 58 op/s
Nov 24 20:35:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:11.355+0000 7f2ca3ee7640 -1 osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:11 compute-0 ceph-osd[88624]: osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:11 compute-0 ceph-osd[89640]: osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:11.540+0000 7f1a67169640 -1 osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.5 KiB/s wr, 56 op/s
Nov 24 20:35:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:12 compute-0 ceph-mon[75677]: pgmap v1571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.5 KiB/s wr, 56 op/s
Nov 24 20:35:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:12.370+0000 7f2ca3ee7640 -1 osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:12 compute-0 ceph-osd[88624]: osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2631 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:12 compute-0 ceph-osd[89640]: osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:12.532+0000 7f1a67169640 -1 osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2631 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
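Each OSD event in this stretch is logged twice, once by the ceph-osd daemon unit and once by the container unit ceph-<fsid>-osd-N capturing the same stderr, so the mon's health math counts each op only once: 13 slow ops on osd.0 (pool 'vms', an omap read of rbd_trash_purge_schedule) plus 19 on osd.1 (pool 'default.rgw.log', a watch ping) give the 32 in the SLOW_OPS update above, with the oldest blocked for 2631 s, roughly 44 minutes. The check can be inspected from the cluster side; a sketch assuming the ceph CLI and a usable keyring are available on the host:

    import json
    import subprocess

    def slow_ops_check():
        """Return the SLOW_OPS entry from `ceph health detail`, or None."""
        out = subprocess.check_output(
            ["ceph", "health", "detail", "--format", "json"])
        return json.loads(out).get("checks", {}).get("SLOW_OPS")

    # Per-daemon drill-down (run on the OSD's host, via the admin socket):
    #   ceph daemon osd.0 dump_ops_in_flight
    #   ceph daemon osd.1 dump_historic_slow_ops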
Nov 24 20:35:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:13.378+0000 7f2ca3ee7640 -1 osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:13 compute-0 ceph-osd[88624]: osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:13 compute-0 ceph-osd[89640]: osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:13.545+0000 7f1a67169640 -1 osd.1 147 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 767 B/s wr, 6 op/s
Nov 24 20:35:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 24 20:35:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:14.367+0000 7f2ca3ee7640 -1 osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:14 compute-0 ceph-osd[88624]: osd.0 147 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 24 20:35:14 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 24 20:35:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:14 compute-0 ceph-mon[75677]: pgmap v1572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 767 B/s wr, 6 op/s
Nov 24 20:35:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:14.527+0000 7f1a67169640 -1 osd.1 148 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:14 compute-0 ceph-osd[89640]: osd.1 148 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:15.378+0000 7f2ca3ee7640 -1 osd.0 148 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:15 compute-0 ceph-osd[88624]: osd.0 148 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 24 20:35:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:15 compute-0 ceph-mon[75677]: osdmap e148: 3 total, 3 up, 3 in
Nov 24 20:35:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 24 20:35:15 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
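The osdmap epoch keeps advancing (e147, e148, e149) even though membership stays at 3 total, 3 up, 3 in; given the snaptrim activity below, the likely driver is pool snapshot metadata being committed into new maps, and 'do_prune ... full prune enabled' notes that the mon may trim old full maps it no longer needs. The current epoch is easy to read back; a sketch using the ceph CLI:

    import json
    import subprocess

    osdmap = json.loads(subprocess.check_output(
        ["ceph", "osd", "dump", "--format", "json"]))
    print(osdmap["epoch"], len(osdmap["osds"]))   # e.g. 149 3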
Nov 24 20:35:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:15.531+0000 7f1a67169640 -1 osd.1 149 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:15 compute-0 ceph-osd[89640]: osd.1 149 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 2 active+clean+laggy, 292 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 57 op/s
Nov 24 20:35:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:16 compute-0 ceph-mon[75677]: osdmap e149: 3 total, 3 up, 3 in
Nov 24 20:35:16 compute-0 ceph-mon[75677]: pgmap v1575: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 2 active+clean+laggy, 292 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 57 op/s
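The pgmap summaries read as {count} {state} pairs: here 292 PGs are plain active+clean, 2 are active+clean+laggy (their acting set has slow requests outstanding), and 11 are trimming or queued to trim deleted snapshots (snaptrim / snaptrim_wait). A small parser for these summary lines, a sketch keyed to the exact format above:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): \d+ pgs: (?P<states>[^;]+);")

    def parse_pgmap(line):
        """Split a 'pgmap vN: ... pgs: ...' summary into (version, {state: count})."""
        m = PGMAP_RE.search(line)
        if not m:
            return None
        states = {}
        for part in m.group("states").split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
        return int(m.group("version")), states

    # parse_pgmap(pgmap_line)
    # -> (1575, {'active+clean+snaptrim': 2, 'active+clean+snaptrim_wait': 9,
    #            'active+clean+laggy': 2, 'active+clean': 292})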
Nov 24 20:35:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:16.421+0000 7f2ca3ee7640 -1 osd.0 149 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:16 compute-0 ceph-osd[88624]: osd.0 149 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:35:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2370323447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:35:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:35:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2370323447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
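The two audited mon_commands above come in from entity 'client.openstack' at 192.168.122.10, the OpenStack control plane polling storage capacity; they are the JSON forms of `ceph df` and `ceph osd pool get-quota volumes`. The same calls can be issued through the librados Python binding; a sketch, assuming /etc/ceph/ceph.conf and a client.openstack keyring are readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->",
                  json.loads(outbuf) if ret == 0 else errs)
    finally:
        cluster.shutdown()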
Nov 24 20:35:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:16.545+0000 7f1a67169640 -1 osd.1 149 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:16 compute-0 ceph-osd[89640]: osd.1 149 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:17.375+0000 7f2ca3ee7640 -1 osd.0 149 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:17 compute-0 ceph-osd[88624]: osd.0 149 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 24 20:35:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2370323447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:35:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2370323447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:35:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 24 20:35:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 24 20:35:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2636 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:17.510+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:17 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 2 active+clean+laggy, 292 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 67 op/s
Nov 24 20:35:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:18.401+0000 7f2ca3ee7640 -1 osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:18 compute-0 ceph-osd[88624]: osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:18 compute-0 ceph-mon[75677]: osdmap e150: 3 total, 3 up, 3 in
Nov 24 20:35:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2636 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:18 compute-0 ceph-mon[75677]: pgmap v1577: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 2 active+clean+laggy, 292 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.0 KiB/s wr, 67 op/s
Nov 24 20:35:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:18.464+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:18 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:19.415+0000 7f2ca3ee7640 -1 osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:19 compute-0 ceph-osd[88624]: osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:19.456+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:19 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 2 active+clean+laggy, 292 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.5 KiB/s wr, 74 op/s
Nov 24 20:35:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:20.399+0000 7f2ca3ee7640 -1 osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:20 compute-0 ceph-osd[88624]: osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:20.441+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:20 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:20 compute-0 ceph-mon[75677]: pgmap v1578: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 2 active+clean+laggy, 292 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.5 KiB/s wr, 74 op/s
Nov 24 20:35:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:21.366+0000 7f2ca3ee7640 -1 osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:21 compute-0 ceph-osd[88624]: osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:21.436+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:21 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 5.5 KiB/s wr, 98 op/s
Nov 24 20:35:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:22.322+0000 7f2ca3ee7640 -1 osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:22 compute-0 ceph-osd[88624]: osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:22.458+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:22 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 24 20:35:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 24 20:35:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 24 20:35:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:22 compute-0 ceph-mon[75677]: pgmap v1579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 5.5 KiB/s wr, 98 op/s
Nov 24 20:35:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:22 compute-0 ceph-mon[75677]: osdmap e151: 3 total, 3 up, 3 in
Nov 24 20:35:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:23.302+0000 7f2ca3ee7640 -1 osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:23 compute-0 ceph-osd[88624]: osd.0 150 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:23.455+0000 7f1a67169640 -1 osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:23 compute-0 ceph-osd[89640]: osd.1 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.0 KiB/s wr, 42 op/s
Nov 24 20:35:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:24.348+0000 7f2ca3ee7640 -1 osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:24 compute-0 ceph-osd[88624]: osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:35:24
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'backups', 'images']
Nov 24 20:35:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
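The balancer lines above are the mgr's automatic optimization pass: mode upmap with a 5% max-misplaced budget, scanning the listed pools and preparing 0 of an allowed 10 changes, i.e. PG placement is already as even as upmap can make it. The same pass can be driven by hand; a sketch using the ceph CLI (the plan name is illustrative):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status)   # active flag, mode, last optimization, plans

    # Manual equivalent of the automatic pass logged above:
    #   ceph balancer optimize myplan
    #   ceph balancer show myplan
    #   ceph balancer execute myplan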
Nov 24 20:35:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:24.500+0000 7f1a67169640 -1 osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:24 compute-0 ceph-osd[89640]: osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:24 compute-0 ceph-mon[75677]: pgmap v1581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.0 KiB/s wr, 42 op/s
Nov 24 20:35:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:25.336+0000 7f2ca3ee7640 -1 osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:25 compute-0 ceph-osd[88624]: osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:25.476+0000 7f1a67169640 -1 osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:25 compute-0 ceph-osd[89640]: osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:25 compute-0 podman[286236]: 2025-11-24 20:35:25.848141859 +0000 UTC m=+0.073394117 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 24 20:35:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 KiB/s wr, 39 op/s
Nov 24 20:35:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:26.329+0000 7f2ca3ee7640 -1 osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:26 compute-0 ceph-osd[88624]: osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:26.492+0000 7f1a67169640 -1 osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:26 compute-0 ceph-osd[89640]: osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:26 compute-0 ceph-mon[75677]: pgmap v1582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 KiB/s wr, 39 op/s
Nov 24 20:35:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:27.357+0000 7f2ca3ee7640 -1 osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:27 compute-0 ceph-osd[88624]: osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 24 20:35:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 24 20:35:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 24 20:35:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:27.529+0000 7f1a67169640 -1 osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:27 compute-0 ceph-osd[89640]: osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Nov 24 20:35:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:28.338+0000 7f2ca3ee7640 -1 osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:28 compute-0 ceph-osd[88624]: osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:28 compute-0 sudo[286255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:28 compute-0 sudo[286255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:28 compute-0 sudo[286255]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:28 compute-0 ceph-mon[75677]: osdmap e152: 3 total, 3 up, 3 in
Nov 24 20:35:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:28 compute-0 ceph-mon[75677]: pgmap v1584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Nov 24 20:35:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:28.531+0000 7f1a67169640 -1 osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:28 compute-0 ceph-osd[89640]: osd.1 151 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:28 compute-0 sudo[286280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:35:28 compute-0 sudo[286280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:28 compute-0 sudo[286280]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:28 compute-0 sudo[286305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:28 compute-0 sudo[286305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:28 compute-0 sudo[286305]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:28 compute-0 sudo[286330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:35:28 compute-0 sudo[286330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:29.290+0000 7f2ca3ee7640 -1 osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:29 compute-0 ceph-osd[88624]: osd.0 151 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:29 compute-0 sudo[286330]: pam_unix(sudo:session): session closed for user root
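The ceph-admin sudo bursts above are cephadm's SSH orchestration probing the host (/bin/true as a connectivity check, `which python3` to pick an interpreter) and then running the deployed cephadm script with gather-facts, which prints a JSON fact dump (hostname, CPUs, memory, disks, NICs) for the orchestrator's inventory. Run by hand it looks like the sketch below; the script path is taken from the log, and the exact fact keys vary by release:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    facts = json.loads(subprocess.check_output(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"]))
    print(facts.get("hostname"), facts.get("cpu_count"))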
Nov 24 20:35:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:35:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:35:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:35:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:35:29 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0c42e442-8ee7-44c2-8da6-756f4240508b does not exist
Nov 24 20:35:29 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 75d4ee0c-456a-48e8-9c9e-745504a84b45 does not exist
Nov 24 20:35:29 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e8f3aed5-6014-4ce3-97bd-cfb1a8efc2f1 does not exist
Nov 24 20:35:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:35:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:35:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:35:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:35:29 compute-0 sudo[286385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:29 compute-0 sudo[286385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:29 compute-0 sudo[286385]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:29 compute-0 sudo[286410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:35:29 compute-0 sudo[286410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:29 compute-0 sudo[286410]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:29.519+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:29 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:35:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:35:29 compute-0 sudo[286435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:29 compute-0 sudo[286435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:29 compute-0 sudo[286435]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:29 compute-0 sudo[286460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:35:29 compute-0 sudo[286460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
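The command above is the mgr re-running its copied cephadm binary as root to create OSDs: CEPH_VOLUME_OSDSPEC_AFFINITY tags the OSDs with the 'default_drive_group' spec, '--config-json -' feeds a config/keyring JSON on stdin (so it never appears in the journal), and everything after '--' is handed to ceph-volume lvm batch against three pre-created LVs. A sketch of the same invocation from Python, run as root; the payload below is a hypothetical stand-in for the JSON the mgr streams:

    import json
    import subprocess

    # Hypothetical stand-in for the config payload; the real JSON (ceph.conf plus
    # the bootstrap-osd keyring) is streamed by the mgr and never hits the journal.
    payload = json.dumps({'config': '# minimal ceph.conf ...', 'keyring': '...'})

    # "--config-json -" tells the cephadm wrapper to read that JSON from stdin;
    # everything after "--" goes to ceph-volume inside the container.
    subprocess.run(
        ['/bin/python3',
         '/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d',
         '--env', 'CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group',
         '--image', 'quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0',
         '--timeout', '895',
         'ceph-volume', '--fsid', '05e060a3-406b-57f0-89d2-ec35f5b09305',
         '--config-json', '-',
         '--', 'lvm', 'batch', '--no-auto',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1', '/dev/ceph_vg2/ceph_lv2',
         '--yes', '--no-systemd'],
        input=payload.encode(), check=True)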
Nov 24 20:35:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.088761615 +0000 UTC m=+0.052127736 container create 34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:35:30 compute-0 systemd[1]: Started libpod-conmon-34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099.scope.
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.070170068 +0000 UTC m=+0.033536179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.19309932 +0000 UTC m=+0.156465481 container init 34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.20323446 +0000 UTC m=+0.166600581 container start 34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.207202687 +0000 UTC m=+0.170568798 container attach 34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:35:30 compute-0 cool_lalande[286543]: 167 167
Nov 24 20:35:30 compute-0 systemd[1]: libpod-34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099.scope: Deactivated successfully.
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.212046626 +0000 UTC m=+0.175412737 container died 34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2bb618176d39d80d5422d6426ca90396d647b5f31a6f51bc3cbd972622e8069-merged.mount: Deactivated successfully.
Nov 24 20:35:30 compute-0 podman[286526]: 2025-11-24 20:35:30.260558696 +0000 UTC m=+0.223924787 container remove 34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:35:30 compute-0 systemd[1]: libpod-conmon-34e3db4d347d2d78c5d24aefebf46c3c633ec9dd42224a8cb25a86832b2bc099.scope: Deactivated successfully.
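The cool_lalande sequence above (create, init, start, attach, died, remove within about 0.2 s) is one of the disposable containers cephadm launches to probe the image. Its only output, '167 167', is the uid and gid of the ceph user inside the image, which cephadm uses to chown host paths under /var/lib/ceph. One plausible reconstruction of that probe (assumes podman and the pulled image; the exact path cephadm stats is an assumption):

    import subprocess

    # Run stat inside the same image and read back the uid/gid of the ceph user.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat',
         'quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0',
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True).stdout.strip()
    uid, gid = out.split()  # expected "167 167", matching the cool_lalande line above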
Nov 24 20:35:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:30.279+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:30 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:30 compute-0 podman[286567]: 2025-11-24 20:35:30.513972071 +0000 UTC m=+0.069227815 container create fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:35:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:30.514+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:30 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:30 compute-0 ceph-mon[75677]: pgmap v1585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:30 compute-0 systemd[1]: Started libpod-conmon-fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444.scope.
Nov 24 20:35:30 compute-0 podman[286567]: 2025-11-24 20:35:30.487010049 +0000 UTC m=+0.042265803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4849d6186ee0f98738ae5382f80b792b69eef2670a06c94d275f0ee43ca2c47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4849d6186ee0f98738ae5382f80b792b69eef2670a06c94d275f0ee43ca2c47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4849d6186ee0f98738ae5382f80b792b69eef2670a06c94d275f0ee43ca2c47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4849d6186ee0f98738ae5382f80b792b69eef2670a06c94d275f0ee43ca2c47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4849d6186ee0f98738ae5382f80b792b69eef2670a06c94d275f0ee43ca2c47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:30 compute-0 podman[286567]: 2025-11-24 20:35:30.674706555 +0000 UTC m=+0.229962279 container init fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 20:35:30 compute-0 podman[286567]: 2025-11-24 20:35:30.682958205 +0000 UTC m=+0.238213899 container start fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_neumann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:35:30 compute-0 podman[286567]: 2025-11-24 20:35:30.721333543 +0000 UTC m=+0.276589257 container attach fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_neumann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:35:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:31.275+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:31 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:31.554+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:31 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:31 compute-0 inspiring_neumann[286584]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:35:31 compute-0 inspiring_neumann[286584]: --> relative data size: 1.0
Nov 24 20:35:31 compute-0 inspiring_neumann[286584]: --> All data devices are unavailable
Nov 24 20:35:31 compute-0 systemd[1]: libpod-fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444.scope: Deactivated successfully.
Nov 24 20:35:31 compute-0 systemd[1]: libpod-fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444.scope: Consumed 1.034s CPU time.
Nov 24 20:35:31 compute-0 podman[286567]: 2025-11-24 20:35:31.758550616 +0000 UTC m=+1.313806340 container died fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4849d6186ee0f98738ae5382f80b792b69eef2670a06c94d275f0ee43ca2c47-merged.mount: Deactivated successfully.
Nov 24 20:35:31 compute-0 podman[286567]: 2025-11-24 20:35:31.826058744 +0000 UTC m=+1.381314448 container remove fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:35:31 compute-0 systemd[1]: libpod-conmon-fad70257f8149265ec6755bbf53848e00c3e1522106c9ba4e7c293a48d4cd444.scope: Deactivated successfully.
Nov 24 20:35:31 compute-0 sudo[286460]: pam_unix(sudo:session): session closed for user root
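ceph-volume's '--> All data devices are unavailable' is the expected outcome here: all three LVs already carry ceph.* LVM tags from the existing OSDs (visible in the lvm list output further below), so lvm batch filters them out and exits without creating anything, which is why the fad70257 container dies after roughly one second of CPU time. A short check of the same tags via the stock lvs JSON reporter (assumes lvm2 on the host):

    import json
    import subprocess

    # Show the LVM tags that make ceph-volume treat an LV as unavailable
    # (already prepared for an OSD).
    report = json.loads(subprocess.run(
        ['lvs', '--reportformat', 'json', '-o', 'lv_name,vg_name,lv_tags'],
        capture_output=True, text=True, check=True).stdout)
    for lv in report['report'][0]['lv']:
        if 'ceph.osd_id=' in lv['lv_tags']:
            print(lv['vg_name'] + '/' + lv['lv_name'], 'already belongs to an OSD')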
Nov 24 20:35:31 compute-0 sudo[286625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:31 compute-0 sudo[286625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:31 compute-0 sudo[286625]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:32 compute-0 sudo[286650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:35:32 compute-0 sudo[286650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:32 compute-0 sudo[286650]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:32 compute-0 sudo[286675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:32 compute-0 sudo[286675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:32 compute-0 sudo[286675]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:32 compute-0 sudo[286700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:35:32 compute-0 sudo[286700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:32.300+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:32 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2647 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:32 compute-0 ceph-mon[75677]: pgmap v1586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:32 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2647 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
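The health update is the two per-OSD counts above added together: 13 delayed requests on osd.0 (pool 'vms') plus 19 on osd.1 (pool 'default.rgw.log') gives 32 slow ops, and 2647 s means the oldest has been blocked for roughly 44 minutes. A small way to watch the same check from a script (assumes an admin keyring on the host; the JSON layout is the standard ceph health output):

    import json
    import subprocess

    # Pull the SLOW_OPS health check, mirroring the mon's "Health check update" line.
    health = json.loads(subprocess.run(
        ['ceph', 'health', 'detail', '-f', 'json'],
        capture_output=True, text=True, check=True).stdout)
    slow = health.get('checks', {}).get('SLOW_OPS')
    if slow:
        print(slow['summary']['message'])  # e.g. "32 slow ops, oldest one blocked for 2647 sec, ..."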
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.572011698 +0000 UTC m=+0.040048503 container create 67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:35:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:32.580+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:32 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:32 compute-0 systemd[1]: Started libpod-conmon-67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a.scope.
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.554779046 +0000 UTC m=+0.022815881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:35:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.667277669 +0000 UTC m=+0.135314524 container init 67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.680878063 +0000 UTC m=+0.148914878 container start 67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_greider, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:35:32 compute-0 sleepy_greider[286782]: 167 167
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.684659334 +0000 UTC m=+0.152696199 container attach 67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:35:32 compute-0 systemd[1]: libpod-67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a.scope: Deactivated successfully.
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.685713522 +0000 UTC m=+0.153750327 container died 67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_greider, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae4697e418effa9d3c21bb5745cef4743151b4ce956b18db8501dd0470510ee2-merged.mount: Deactivated successfully.
Nov 24 20:35:32 compute-0 podman[286765]: 2025-11-24 20:35:32.756526858 +0000 UTC m=+0.224563673 container remove 67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_greider, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:35:32 compute-0 systemd[1]: libpod-conmon-67bad0a4a5583ad6c0df7807f548a5751c7e3519c5a26d3efb67339c79c4641a.scope: Deactivated successfully.
Nov 24 20:35:33 compute-0 podman[286805]: 2025-11-24 20:35:33.013804157 +0000 UTC m=+0.117420715 container create ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:35:33 compute-0 podman[286805]: 2025-11-24 20:35:32.93921254 +0000 UTC m=+0.042829178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:35:33 compute-0 systemd[1]: Started libpod-conmon-ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3.scope.
Nov 24 20:35:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c81eb70712834162d20304784cd269a58873cce441c371c8f375b06d4db8d4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c81eb70712834162d20304784cd269a58873cce441c371c8f375b06d4db8d4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c81eb70712834162d20304784cd269a58873cce441c371c8f375b06d4db8d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c81eb70712834162d20304784cd269a58873cce441c371c8f375b06d4db8d4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:33 compute-0 podman[286805]: 2025-11-24 20:35:33.201176814 +0000 UTC m=+0.304793412 container init ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:35:33 compute-0 podman[286805]: 2025-11-24 20:35:33.214450449 +0000 UTC m=+0.318067027 container start ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:35:33 compute-0 podman[286805]: 2025-11-24 20:35:33.257108201 +0000 UTC m=+0.360724799 container attach ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:35:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:33.347+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:33 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:33.537+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:33 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:34 compute-0 laughing_franklin[286821]: {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:     "0": [
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:         {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "devices": [
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "/dev/loop3"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             ],
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_name": "ceph_lv0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_size": "21470642176",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "name": "ceph_lv0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "tags": {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cluster_name": "ceph",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.crush_device_class": "",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.encrypted": "0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osd_id": "0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.type": "block",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.vdo": "0"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             },
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "type": "block",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "vg_name": "ceph_vg0"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:         }
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:     ],
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:     "1": [
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:         {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "devices": [
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "/dev/loop4"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             ],
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_name": "ceph_lv1",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_size": "21470642176",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "name": "ceph_lv1",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "tags": {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cluster_name": "ceph",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.crush_device_class": "",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.encrypted": "0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osd_id": "1",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.type": "block",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.vdo": "0"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             },
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "type": "block",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "vg_name": "ceph_vg1"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:         }
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:     ],
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:     "2": [
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:         {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "devices": [
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "/dev/loop5"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             ],
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_name": "ceph_lv2",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_size": "21470642176",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "name": "ceph_lv2",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "tags": {
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.cluster_name": "ceph",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.crush_device_class": "",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.encrypted": "0",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osd_id": "2",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.type": "block",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:                 "ceph.vdo": "0"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             },
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "type": "block",
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:             "vg_name": "ceph_vg2"
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:         }
Nov 24 20:35:34 compute-0 laughing_franklin[286821]:     ]
Nov 24 20:35:34 compute-0 laughing_franklin[286821]: }
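The block above is the full `ceph-volume lvm list --format json` answer: a map from OSD id to the LV backing it, with ceph.* tags carrying the cluster fsid, OSD fsid and drive-group affinity. A short sketch that condenses such a capture (saved to a hypothetical lvm-list.json) into one line per OSD:

    import json

    # Condense "ceph-volume lvm list --format json" output into one line per OSD:
    # id, logical volume, backing device(s), OSD fsid.
    with open('lvm-list.json') as f:  # hypothetical capture of the JSON above
        osds = json.load(f)
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(osd_id, lv['lv_path'], ','.join(lv['devices']),
                  lv['tags']['ceph.osd_fsid'])

Against the capture above this prints three rows: OSDs 0, 1 and 2 on /dev/loop3, /dev/loop4 and /dev/loop5, all in cluster 05e060a3-406b-57f0-89d2-ec35f5b09305.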
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:34 compute-0 systemd[1]: libpod-ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3.scope: Deactivated successfully.
Nov 24 20:35:34 compute-0 podman[286805]: 2025-11-24 20:35:34.059048995 +0000 UTC m=+1.162665553 container died ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c81eb70712834162d20304784cd269a58873cce441c371c8f375b06d4db8d4c-merged.mount: Deactivated successfully.
Nov 24 20:35:34 compute-0 podman[286805]: 2025-11-24 20:35:34.178901134 +0000 UTC m=+1.282517722 container remove ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:35:34 compute-0 systemd[1]: libpod-conmon-ffcc866e0ce015a6948d1284d5faf6606c70135d8d0a2ef35ab60b8df95b94f3.scope: Deactivated successfully.
Nov 24 20:35:34 compute-0 sudo[286700]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:34 compute-0 sudo[286844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:34 compute-0 sudo[286844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:34 compute-0 sudo[286844]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:34.381+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:34 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:34 compute-0 sudo[286869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:35:34 compute-0 sudo[286869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:34 compute-0 sudo[286869]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:34 compute-0 sudo[286894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:34 compute-0 sudo[286894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:34 compute-0 sudo[286894]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:34.514+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:34 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:34 compute-0 sudo[286919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:35:34 compute-0 sudo[286919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:34 compute-0 ceph-mon[75677]: pgmap v1587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:34 compute-0 podman[286985]: 2025-11-24 20:35:34.916263818 +0000 UTC m=+0.061561409 container create 572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:35:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:35:34 compute-0 podman[286985]: 2025-11-24 20:35:34.895501792 +0000 UTC m=+0.040799413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:35:35 compute-0 systemd[1]: Started libpod-conmon-572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7.scope.
Nov 24 20:35:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:35:35 compute-0 podman[286985]: 2025-11-24 20:35:35.120216358 +0000 UTC m=+0.265514049 container init 572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:35:35 compute-0 podman[286985]: 2025-11-24 20:35:35.130557876 +0000 UTC m=+0.275855517 container start 572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:35:35 compute-0 podman[286985]: 2025-11-24 20:35:35.136876375 +0000 UTC m=+0.282174026 container attach 572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:35:35 compute-0 determined_wilbur[287001]: 167 167
Nov 24 20:35:35 compute-0 systemd[1]: libpod-572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7.scope: Deactivated successfully.
Nov 24 20:35:35 compute-0 podman[286985]: 2025-11-24 20:35:35.139099994 +0000 UTC m=+0.284397655 container died 572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fbdd79354cef57ddec4dc681fffea420b434bc459bb8082c6da993cd509edc2-merged.mount: Deactivated successfully.
Nov 24 20:35:35 compute-0 podman[286985]: 2025-11-24 20:35:35.195053183 +0000 UTC m=+0.340350784 container remove 572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:35:35 compute-0 systemd[1]: libpod-conmon-572b55fd3b22808356d0c29071feddce2e23c2e8eeaf181413e1719f54ae97d7.scope: Deactivated successfully.
Nov 24 20:35:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:35.418+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:35 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:35 compute-0 podman[287023]: 2025-11-24 20:35:35.442692454 +0000 UTC m=+0.097372669 container create 72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:35:35 compute-0 podman[287023]: 2025-11-24 20:35:35.384892476 +0000 UTC m=+0.039572721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:35:35 compute-0 systemd[1]: Started libpod-conmon-72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e.scope.
Nov 24 20:35:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6863d8e39d1b8f869204be0382818a6f4de1b3a4c702a1fec8dc3755e60d1d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6863d8e39d1b8f869204be0382818a6f4de1b3a4c702a1fec8dc3755e60d1d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6863d8e39d1b8f869204be0382818a6f4de1b3a4c702a1fec8dc3755e60d1d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6863d8e39d1b8f869204be0382818a6f4de1b3a4c702a1fec8dc3755e60d1d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:35:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:35.549+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:35 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:35 compute-0 podman[287023]: 2025-11-24 20:35:35.575928211 +0000 UTC m=+0.230608486 container init 72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:35:35 compute-0 podman[287023]: 2025-11-24 20:35:35.583904735 +0000 UTC m=+0.238584950 container start 72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:35:35 compute-0 podman[287023]: 2025-11-24 20:35:35.591683663 +0000 UTC m=+0.246363888 container attach 72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:35:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:36.450+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:36 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:36.598+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:36 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:36 compute-0 practical_pare[287040]: {
Nov 24 20:35:36 compute-0 practical_pare[287040]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "osd_id": 2,
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "type": "bluestore"
Nov 24 20:35:36 compute-0 practical_pare[287040]:     },
Nov 24 20:35:36 compute-0 practical_pare[287040]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "osd_id": 1,
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "type": "bluestore"
Nov 24 20:35:36 compute-0 practical_pare[287040]:     },
Nov 24 20:35:36 compute-0 practical_pare[287040]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "osd_id": 0,
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:35:36 compute-0 practical_pare[287040]:         "type": "bluestore"
Nov 24 20:35:36 compute-0 practical_pare[287040]:     }
Nov 24 20:35:36 compute-0 practical_pare[287040]: }
Nov 24 20:35:36 compute-0 systemd[1]: libpod-72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e.scope: Deactivated successfully.
Nov 24 20:35:36 compute-0 podman[287023]: 2025-11-24 20:35:36.737043362 +0000 UTC m=+1.391723587 container died 72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:35:36 compute-0 systemd[1]: libpod-72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e.scope: Consumed 1.156s CPU time.
Nov 24 20:35:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:36 compute-0 ceph-mon[75677]: pgmap v1588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6863d8e39d1b8f869204be0382818a6f4de1b3a4c702a1fec8dc3755e60d1d76-merged.mount: Deactivated successfully.
Nov 24 20:35:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:37.422+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:37 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2656 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:37 compute-0 podman[287023]: 2025-11-24 20:35:37.608648539 +0000 UTC m=+2.263328734 container remove 72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:35:37 compute-0 systemd[1]: libpod-conmon-72ef93f38c830c38d61b779046fbbbb055bd8e130d88e16db909dc235b651b1e.scope: Deactivated successfully.
Nov 24 20:35:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:37.637+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:37 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:37 compute-0 sudo[286919]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:35:37 compute-0 podman[287085]: 2025-11-24 20:35:37.723990358 +0000 UTC m=+0.648722512 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 24 20:35:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:35:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:35:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:35:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7e80f39e-4407-4723-9793-c88de42389f3 does not exist
Nov 24 20:35:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6554ac55-cb2d-45b2-bdfd-e15459cad2b2 does not exist
Nov 24 20:35:37 compute-0 sudo[287107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:35:37 compute-0 sudo[287107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:37 compute-0 sudo[287107]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:38 compute-0 sudo[287132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:35:38 compute-0 sudo[287132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:35:38 compute-0 sudo[287132]: pam_unix(sudo:session): session closed for user root
Nov 24 20:35:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:38 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2656 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:35:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:35:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:38.409+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:38 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:38.623+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:38 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:39 compute-0 ceph-mon[75677]: pgmap v1589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:39.371+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:39 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:39.590+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:39 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:40.403+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:40 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:35:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:35:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:35:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:35:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:35:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:40.576+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:40 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:40 compute-0 podman[287157]: 2025-11-24 20:35:40.949494923 +0000 UTC m=+0.171006200 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:35:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:41 compute-0 ceph-mon[75677]: pgmap v1590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:41.422+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:41 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:41.579+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:41 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:42.395+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:42 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2661 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:42.592+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:42 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:43 compute-0 ceph-mon[75677]: pgmap v1591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2661 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:43 compute-0 sshd-session[287183]: Received disconnect from 182.93.7.194 port 43406:11: Bye Bye [preauth]
Nov 24 20:35:43 compute-0 sshd-session[287183]: Disconnected from authenticating user root 182.93.7.194 port 43406 [preauth]
Nov 24 20:35:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:43.418+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:43 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:43.617+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:43 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:44.463+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:44 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:44.662+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:44 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:45 compute-0 ceph-mon[75677]: pgmap v1592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:45.443+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:45 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:45.701+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:45 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:46 compute-0 ceph-mon[75677]: pgmap v1593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:46.462+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:46 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:46.725+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:46 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:47.461+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:47 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2666 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:47.718+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:47 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:48.510+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:48 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:48 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2666 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:48 compute-0 ceph-mon[75677]: pgmap v1594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:48.681+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:48 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:49.511+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:49 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:49.687+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:49 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:50.507+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:50 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:50 compute-0 ceph-mon[75677]: pgmap v1595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:50.667+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:50 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:51.464+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:51 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:51.647+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:51 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:52.495+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:52 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2671 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:52 compute-0 ceph-mon[75677]: pgmap v1596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:52.682+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:52 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:53.460+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:53 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2671 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:35:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:53.691+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:53 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:35:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:35:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:54.510+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:54 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:54 compute-0 ceph-mon[75677]: pgmap v1597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:54.656+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:54 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:55.498+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:55 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:55.645+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:55 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:56.547+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:56 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:56 compute-0 ceph-mon[75677]: pgmap v1598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:56.616+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:56 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:56 compute-0 podman[287185]: 2025-11-24 20:35:56.871712529 +0000 UTC m=+0.088987334 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:35:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:35:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:57.552+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:57 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:57.597+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:57 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:58.575+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:58 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:58.582+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:58 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:58 compute-0 ceph-mon[75677]: pgmap v1599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:35:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:35:59.589+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:59 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:35:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:35:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:35:59.622+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:59 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:35:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:35:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:00.591+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:00 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:00.671+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:00 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:00 compute-0 ceph-mon[75677]: pgmap v1600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:01.639+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:01 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:01.717+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:01 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2676 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:02.597+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:02 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:02.671+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:02 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:02 compute-0 ceph-mon[75677]: pgmap v1601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:02 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2676 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:36:02.991 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:36:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:36:02.996 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:36:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:03.581+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:03 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:03.639+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:03 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:04.603+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:04 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:04.675+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:04 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:04 compute-0 ceph-mon[75677]: pgmap v1602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:05.645+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:05 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:05.707+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:05 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:06.653+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:06 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:06.664+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:06 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:06 compute-0 ceph-mon[75677]: pgmap v1603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2686 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:07.654+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:07 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:07.656+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:07 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:07 compute-0 podman[287204]: 2025-11-24 20:36:07.872069007 +0000 UTC m=+0.091493581 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 24 20:36:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:07 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2686 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:08.654+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:08 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:08.686+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:08 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:08 compute-0 ceph-mon[75677]: pgmap v1604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:36:09.394 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:36:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:36:09.394 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:36:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:36:09.394 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:36:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:09.635+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:09 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:09.732+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:09 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:10.676+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:10 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:10.779+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:10 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:11 compute-0 ceph-mon[75677]: pgmap v1605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:11.669+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:11 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:11.791+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:11 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:11 compute-0 podman[287224]: 2025-11-24 20:36:11.927665461 +0000 UTC m=+0.154132338 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 24 20:36:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:12.699+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:12 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:12.803+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:12 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:36:13.000 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:36:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2691 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:13 compute-0 ceph-mon[75677]: pgmap v1606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:13.673+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:13 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:13.774+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:13 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:14 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2691 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:14.691+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:14 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:14.747+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:14 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:15 compute-0 ceph-mon[75677]: pgmap v1607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:15.662+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:15 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:15.719+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:15 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:36:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197141311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:36:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:36:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197141311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:36:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:16.708+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:16.708+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:16 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:16 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:17 compute-0 ceph-mon[75677]: pgmap v1608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/197141311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:36:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/197141311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:36:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:17.725+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:17 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:17.749+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:17 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:18.709+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:18 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:18.769+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:18 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:19 compute-0 ceph-mon[75677]: pgmap v1609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:19.748+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:19 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:19.751+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:19 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:20.721+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:20 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:20.771+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:20 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:21 compute-0 ceph-mon[75677]: pgmap v1610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:21.693+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:21 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:21.812+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:21 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2701 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:22.688+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:22 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:22.831+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:22 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:23 compute-0 ceph-mon[75677]: pgmap v1611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2701 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:23.727+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:23 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:23.879+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:23 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:36:24
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log']
Nov 24 20:36:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:36:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:24.690+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:24 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:24.928+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:24 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:25 compute-0 ceph-mon[75677]: pgmap v1612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:25.689+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:25 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:25.895+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:25 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:26 compute-0 ceph-mon[75677]: pgmap v1613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:26.697+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:26 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:26.884+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:26 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2707 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:27.736+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:27 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:27 compute-0 podman[287251]: 2025-11-24 20:36:27.847057232 +0000 UTC m=+0.075961545 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 24 20:36:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:27.910+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:27 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:28 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2707 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:28 compute-0 ceph-mon[75677]: pgmap v1614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:28.758+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:28 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:28.866+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:28 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:29.719+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:29 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:29.823+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:29 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:30 compute-0 ceph-mon[75677]: pgmap v1615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:30.728+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:30 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:30.803+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:30 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:31.706+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:31 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:31.817+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:31 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2712 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:32 compute-0 ceph-mon[75677]: pgmap v1616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:32.662+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:32 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:32.825+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:32 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2712 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:33.666+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:33 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:33.861+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:33 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:34.622+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:34 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:34 compute-0 ceph-mon[75677]: pgmap v1617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:34.895+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:34 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:36:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:36:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:35.641+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:35 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:35.871+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:35 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
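These slow-op reports repeat roughly once per second per OSD until the ops clear. Each line carries the essentials of the oldest blocked request: osd.0 at osdmap epoch 152 has 13 slow ops, the oldest an omap-get-vals read from client.14138 against object rbd_trash_purge_schedule in PG 2.11 (pool 2 is 'vms', per the matching WRN line); osd.1's oldest is a watch-ping write against data_loggenerations_metadata in pool 9 ('default.rgw.log'). A minimal Python sketch for pulling those fields out of such lines (the regex and group names are illustrative, not a Ceph-defined schema):

    import re

    # Parse "get_health_metrics reporting N slow ops" lines like the ones above.
    SLOW_RE = re.compile(
        r"osd\.(?P<osd>\d+) (?P<epoch>\d+) get_health_metrics reporting "
        r"(?P<n>\d+) slow ops, oldest is osd_op\(client\.(?P<client>[\d.:]+) "
        r"(?P<pgid>\S+) \S+:::(?P<obj>[^:]+):head \[(?P<op>[^\]]+)\]"
    )

    line = ("osd.0 152 get_health_metrics reporting 13 slow ops, oldest is "
            "osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head "
            "[omap-get-vals in=16b] snapc 0=[] ondisk+read e78)")
    m = SLOW_RE.search(line)
    print(m.group("osd"), m.group("n"), m.group("pgid"), m.group("obj"), m.group("op"))
    # -> 0 13 2.11 rbd_trash_purge_schedule omap-get-vals in=16b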
Nov 24 20:36:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:36.599+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:36 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:36 compute-0 ceph-mon[75677]: pgmap v1618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:36.910+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:36 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:37.555+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:37 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:37.887+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:37 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:38 compute-0 sudo[287270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:38 compute-0 sudo[287270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:38 compute-0 sudo[287270]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:38 compute-0 podman[287294]: 2025-11-24 20:36:38.229113886 +0000 UTC m=+0.058147686 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 24 20:36:38 compute-0 sudo[287301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:36:38 compute-0 sudo[287301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:38 compute-0 sudo[287301]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:38 compute-0 sudo[287340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:38 compute-0 sudo[287340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:38 compute-0 sudo[287340]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:38 compute-0 sudo[287365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:36:38 compute-0 sudo[287365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:38.525+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:38 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:38.901+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:38 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:38 compute-0 sudo[287365]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:36:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:36:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:36:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:36:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:36:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:36:39 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 17dcd25f-5c8c-4128-a2bc-dba6cb0d87e3 does not exist
Nov 24 20:36:39 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3f3528e4-8f04-4e00-9d7b-0745916ea88e does not exist
Nov 24 20:36:39 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2ea98bb8-9bf3-453d-88d1-92b37b4dd800 does not exist
Nov 24 20:36:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:36:39 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:36:39 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:36:39 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:36:39 compute-0 sudo[287420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:39 compute-0 sudo[287420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:39 compute-0 sudo[287420]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:39 compute-0 sudo[287445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:36:39 compute-0 sudo[287445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:39 compute-0 sudo[287445]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:39 compute-0 sudo[287470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:39 compute-0 sudo[287470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:39 compute-0 sudo[287470]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:39 compute-0 sudo[287495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:36:39 compute-0 sudo[287495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:39 compute-0 ceph-mon[75677]: pgmap v1619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:36:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:36:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:39.523+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:39 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.764805517 +0000 UTC m=+0.061390573 container create e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:36:39 compute-0 systemd[1]: Started libpod-conmon-e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010.scope.
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.73689084 +0000 UTC m=+0.033475966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:36:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.860279211 +0000 UTC m=+0.156864297 container init e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.872031016 +0000 UTC m=+0.168616102 container start e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.87742401 +0000 UTC m=+0.174009096 container attach e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:36:39 compute-0 admiring_darwin[287576]: 167 167
Nov 24 20:36:39 compute-0 systemd[1]: libpod-e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010.scope: Deactivated successfully.
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.880877853 +0000 UTC m=+0.177462929 container died e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:36:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:39.895+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:39 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e6e80ee165117a518038e758b60327ab58e2273d236d78d0d7cba443ead52e2-merged.mount: Deactivated successfully.
Nov 24 20:36:39 compute-0 podman[287560]: 2025-11-24 20:36:39.942033659 +0000 UTC m=+0.238618705 container remove e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_darwin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:36:39 compute-0 systemd[1]: libpod-conmon-e2949ee59ba7bdefcd3d01e523ab41644742cb69bf37e00441726c997c68c010.scope: Deactivated successfully.
Nov 24 20:36:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:40 compute-0 podman[287602]: 2025-11-24 20:36:40.147575577 +0000 UTC m=+0.053481342 container create d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:36:40 compute-0 systemd[1]: Started libpod-conmon-d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae.scope.
Nov 24 20:36:40 compute-0 podman[287602]: 2025-11-24 20:36:40.125938428 +0000 UTC m=+0.031844203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:36:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15acd799da12d40a24f565946a278c915516b487f74c5ee849027f32e83ec687/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15acd799da12d40a24f565946a278c915516b487f74c5ee849027f32e83ec687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15acd799da12d40a24f565946a278c915516b487f74c5ee849027f32e83ec687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15acd799da12d40a24f565946a278c915516b487f74c5ee849027f32e83ec687/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15acd799da12d40a24f565946a278c915516b487f74c5ee849027f32e83ec687/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
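The four xfs notices are informational: the container's bind mounts re-enter an XFS filesystem whose inodes appear to use 32-bit timestamps (bigtime disabled), so the kernel notes the classic Y2038 ceiling on each remount. 0x7fffffff is simply 2^31 - 1 seconds past the epoch:

    from datetime import datetime, timezone

    # The 32-bit time_t ceiling referenced by the xfs remount notices above.
    print(hex(2**31 - 1))                                    # 0x7fffffff
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00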
Nov 24 20:36:40 compute-0 podman[287602]: 2025-11-24 20:36:40.259981934 +0000 UTC m=+0.165887739 container init d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:36:40 compute-0 podman[287602]: 2025-11-24 20:36:40.274635436 +0000 UTC m=+0.180541201 container start d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:36:40 compute-0 podman[287602]: 2025-11-24 20:36:40.279889856 +0000 UTC m=+0.185795641 container attach d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:36:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:40 compute-0 ceph-mon[75677]: pgmap v1620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:40.512+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:40 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:36:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:36:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:36:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:36:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:36:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:40.909+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:40 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:41 compute-0 beautiful_pare[287619]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:36:41 compute-0 beautiful_pare[287619]: --> relative data size: 1.0
Nov 24 20:36:41 compute-0 beautiful_pare[287619]: --> All data devices are unavailable
Nov 24 20:36:41 compute-0 systemd[1]: libpod-d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae.scope: Deactivated successfully.
Nov 24 20:36:41 compute-0 systemd[1]: libpod-d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae.scope: Consumed 1.037s CPU time.
Nov 24 20:36:41 compute-0 podman[287648]: 2025-11-24 20:36:41.40139795 +0000 UTC m=+0.028154775 container died d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-15acd799da12d40a24f565946a278c915516b487f74c5ee849027f32e83ec687-merged.mount: Deactivated successfully.
Nov 24 20:36:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:41 compute-0 podman[287648]: 2025-11-24 20:36:41.49040174 +0000 UTC m=+0.117158555 container remove d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:36:41 compute-0 systemd[1]: libpod-conmon-d16254686fd2a3c2469b2b9b7db8b948265aa8742c6e6a0a74ea8aa7f37c8bae.scope: Deactivated successfully.
Nov 24 20:36:41 compute-0 sudo[287495]: pam_unix(sudo:session): session closed for user root
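That closes the cephadm ceph-volume run: the batch was handed 0 physical and 3 LVM data devices (/dev/ceph_vg0..2), and "All data devices are unavailable" means every LV is already consumed by an existing OSD, so the prepare is a no-op and the sudo session exits cleanly. cephadm's follow-up, the lvm list --format json call a few lines below, reconciles which OSD owns which LV. A sketch of that reconciliation; the JSON shape assumed here (top-level osd-id keys, per-LV lv_path and tags) matches ceph-volume's output as I understand it, so treat it as illustrative:

    import json, subprocess

    # Map each Ceph LV to its OSD id from "ceph-volume lvm list --format json".
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv.get("lv_path"), lv.get("tags", {}).get("ceph.osd_fsid"))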
Nov 24 20:36:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:41.539+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:41 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:41 compute-0 sudo[287663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:41 compute-0 sudo[287663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:41 compute-0 sudo[287663]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:41 compute-0 sudo[287688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:36:41 compute-0 sudo[287688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:36:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 9169 writes, 45K keys, 9169 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 9169 writes, 9169 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1838 writes, 9135 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 10.22 MB, 0.02 MB/s
                                           Interval WAL: 1838 writes, 1838 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.0      0.68              0.19        26    0.026       0      0       0.0       0.0
                                             L6      1/0    8.28 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.6     96.4     82.0      2.44              0.85        25    0.097    199K    14K       0.0       0.0
                                            Sum      1/0    8.28 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.6     75.4     78.0      3.12              1.04        51    0.061    199K    14K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     50.9     51.3      1.19              0.26        12    0.099     63K   3085       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     96.4     82.0      2.44              0.85        25    0.097    199K    14K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.2      0.68              0.19        25    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.043, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.24 GB write, 0.08 MB/s write, 0.23 GB read, 0.08 MB/s read, 3.1 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 304.00 MB usage: 23.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000198 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1609,22.54 MB,7.41354%) FilterBlock(52,556.73 KB,0.178844%) IndexBlock(52,787.66 KB,0.253025%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
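Two consistency checks on the dump: the W-Amp column is total bytes written (flush plus compaction) over bytes flushed into L0, and both the cumulative and interval figures line up with the counters printed lower in the dump:

    # Cross-check the W-Amp column against the dump's own counters.
    cum_write, cum_flush = 0.24, 0.043   # "Cumulative compaction ... GB write" / "Flush(GB): cumulative"
    int_write, int_flush = 0.06, 0.008   # "Interval compaction ... GB write" / "Flush(GB): ... interval"

    print(round(cum_write / cum_flush, 1))  # 5.6 -> "Sum ... W-Amp 5.6"
    print(round(int_write / int_flush, 1))  # 7.5 -> "Int ... W-Amp 7.5"

The 3000 s uptime over a 600 s interval also marks this as the fifth 10-minute stats window since the mon's RocksDB instance opened.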
Nov 24 20:36:41 compute-0 sudo[287688]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:41 compute-0 sudo[287713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:41 compute-0 sudo[287713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:41 compute-0 sudo[287713]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:41 compute-0 sudo[287738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:36:41 compute-0 sudo[287738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:41.862+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:41 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.203687552 +0000 UTC m=+0.072996834 container create c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:36:42 compute-0 systemd[1]: Started libpod-conmon-c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26.scope.
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.177041678 +0000 UTC m=+0.046350980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:36:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.285760517 +0000 UTC m=+0.155069879 container init c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.301181 +0000 UTC m=+0.170490292 container start c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:36:42 compute-0 systemd[1]: libpod-c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26.scope: Deactivated successfully.
Nov 24 20:36:42 compute-0 quizzical_solomon[287821]: 167 167
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.305715551 +0000 UTC m=+0.175024803 container attach c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:36:42 compute-0 conmon[287821]: conmon c85f5ba808b7e0c85e88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26.scope/container/memory.events
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.306743949 +0000 UTC m=+0.176053201 container died c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a628cbcd0b987bf4facae9cac8d115102e17c68743683112c5e68cfb5647a43-merged.mount: Deactivated successfully.
Nov 24 20:36:42 compute-0 podman[287804]: 2025-11-24 20:36:42.344158689 +0000 UTC m=+0.213467941 container remove c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:36:42 compute-0 systemd[1]: libpod-conmon-c85f5ba808b7e0c85e8857335a13a5aa8a8b48036d273fbe9f48a617f2d3fe26.scope: Deactivated successfully.
Nov 24 20:36:42 compute-0 podman[287818]: 2025-11-24 20:36:42.392729978 +0000 UTC m=+0.144719261 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 20:36:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
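The health-check arithmetic: "blocked for 2722 sec" as of 20:36:42 UTC dates the oldest stalled op to roughly 19:51:20, so the slow ops on osd.0 and osd.1 predate this whole section by about 45 minutes:

    from datetime import datetime, timedelta, timezone

    # Onset of the oldest blocked op per the SLOW_OPS update above.
    now = datetime(2025, 11, 24, 20, 36, 42, tzinfo=timezone.utc)
    print(now - timedelta(seconds=2722))   # 2025-11-24 19:51:20+00:00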
Nov 24 20:36:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:42 compute-0 ceph-mon[75677]: pgmap v1621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.451375) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016602451406, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 256, "total_data_size": 3032948, "memory_usage": 3085648, "flush_reason": "Manual Compaction"}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016602471200, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 2973016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43841, "largest_seqno": 46246, "table_properties": {"data_size": 2962385, "index_size": 6357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 28490, "raw_average_key_size": 22, "raw_value_size": 2938467, "raw_average_value_size": 2313, "num_data_blocks": 274, "num_entries": 1270, "num_filter_entries": 1270, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016441, "oldest_key_time": 1764016441, "file_creation_time": 1764016602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 19885 microseconds, and 6094 cpu microseconds.
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.471252) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 2973016 bytes OK
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.471277) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.472621) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.472635) EVENT_LOG_v1 {"time_micros": 1764016602472630, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.472653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3022042, prev total WAL file size 3022042, number of live WAL files 2.
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.473460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(2903KB)], [92(8481KB)]
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016602473488, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11658479, "oldest_snapshot_seqno": -1}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 11220 keys, 10151193 bytes, temperature: kUnknown
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016602542133, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10151193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10087316, "index_size": 34709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 303686, "raw_average_key_size": 27, "raw_value_size": 9892999, "raw_average_value_size": 881, "num_data_blocks": 1313, "num_entries": 11220, "num_filter_entries": 11220, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.542475) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10151193 bytes
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.543744) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 11743, records dropped: 523 output_compression: NoCompression
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.543776) EVENT_LOG_v1 {"time_micros": 1764016602543760, "job": 54, "event": "compaction_finished", "compaction_time_micros": 68752, "compaction_time_cpu_micros": 27009, "output_level": 6, "num_output_files": 1, "total_output_size": 10151193, "num_input_records": 11743, "num_output_records": 11220, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016602544953, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016602548099, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.473378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.548216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.548224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.548227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.548230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:36:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:36:42.548233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:36:42 compute-0 podman[287869]: 2025-11-24 20:36:42.550280294 +0000 UTC m=+0.047716558 container create 99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:36:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:42.580+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:42 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:42 compute-0 systemd[1]: Started libpod-conmon-99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97.scope.
Nov 24 20:36:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f70a08dd74e5480dbf1f31d2b3473f88ea8e15f85b1761681a896968e8dd6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f70a08dd74e5480dbf1f31d2b3473f88ea8e15f85b1761681a896968e8dd6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f70a08dd74e5480dbf1f31d2b3473f88ea8e15f85b1761681a896968e8dd6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f70a08dd74e5480dbf1f31d2b3473f88ea8e15f85b1761681a896968e8dd6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:42 compute-0 podman[287869]: 2025-11-24 20:36:42.533075514 +0000 UTC m=+0.030511778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:36:42 compute-0 podman[287869]: 2025-11-24 20:36:42.670792228 +0000 UTC m=+0.168228502 container init 99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:36:42 compute-0 podman[287869]: 2025-11-24 20:36:42.678965156 +0000 UTC m=+0.176401420 container start 99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:36:42 compute-0 podman[287869]: 2025-11-24 20:36:42.746199095 +0000 UTC m=+0.243635349 container attach 99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:36:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:42.827+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:42 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:43 compute-0 nice_wing[287885]: {
Nov 24 20:36:43 compute-0 nice_wing[287885]:     "0": [
Nov 24 20:36:43 compute-0 nice_wing[287885]:         {
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "devices": [
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "/dev/loop3"
Nov 24 20:36:43 compute-0 nice_wing[287885]:             ],
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_name": "ceph_lv0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_size": "21470642176",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "name": "ceph_lv0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "tags": {
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cluster_name": "ceph",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.crush_device_class": "",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.encrypted": "0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osd_id": "0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.type": "block",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.vdo": "0"
Nov 24 20:36:43 compute-0 nice_wing[287885]:             },
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "type": "block",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "vg_name": "ceph_vg0"
Nov 24 20:36:43 compute-0 nice_wing[287885]:         }
Nov 24 20:36:43 compute-0 nice_wing[287885]:     ],
Nov 24 20:36:43 compute-0 nice_wing[287885]:     "1": [
Nov 24 20:36:43 compute-0 nice_wing[287885]:         {
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "devices": [
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "/dev/loop4"
Nov 24 20:36:43 compute-0 nice_wing[287885]:             ],
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_name": "ceph_lv1",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_size": "21470642176",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "name": "ceph_lv1",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "tags": {
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cluster_name": "ceph",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.crush_device_class": "",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.encrypted": "0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osd_id": "1",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.type": "block",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.vdo": "0"
Nov 24 20:36:43 compute-0 nice_wing[287885]:             },
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "type": "block",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "vg_name": "ceph_vg1"
Nov 24 20:36:43 compute-0 nice_wing[287885]:         }
Nov 24 20:36:43 compute-0 nice_wing[287885]:     ],
Nov 24 20:36:43 compute-0 nice_wing[287885]:     "2": [
Nov 24 20:36:43 compute-0 nice_wing[287885]:         {
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "devices": [
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "/dev/loop5"
Nov 24 20:36:43 compute-0 nice_wing[287885]:             ],
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_name": "ceph_lv2",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_size": "21470642176",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "name": "ceph_lv2",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "tags": {
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.cluster_name": "ceph",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.crush_device_class": "",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.encrypted": "0",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osd_id": "2",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.type": "block",
Nov 24 20:36:43 compute-0 nice_wing[287885]:                 "ceph.vdo": "0"
Nov 24 20:36:43 compute-0 nice_wing[287885]:             },
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "type": "block",
Nov 24 20:36:43 compute-0 nice_wing[287885]:             "vg_name": "ceph_vg2"
Nov 24 20:36:43 compute-0 nice_wing[287885]:         }
Nov 24 20:36:43 compute-0 nice_wing[287885]:     ]
Nov 24 20:36:43 compute-0 nice_wing[287885]: }
Nov 24 20:36:43 compute-0 systemd[1]: libpod-99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97.scope: Deactivated successfully.
Nov 24 20:36:43 compute-0 podman[287869]: 2025-11-24 20:36:43.529027506 +0000 UTC m=+1.026463750 container died 99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-16f70a08dd74e5480dbf1f31d2b3473f88ea8e15f85b1761681a896968e8dd6e-merged.mount: Deactivated successfully.
Nov 24 20:36:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:43.579+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:43 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:43 compute-0 podman[287869]: 2025-11-24 20:36:43.587796868 +0000 UTC m=+1.085233112 container remove 99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wing, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:36:43 compute-0 systemd[1]: libpod-conmon-99cde27385bf234cee4c172f11ec00f24b80eb530a4eb23ff1ae13cb0524be97.scope: Deactivated successfully.
Nov 24 20:36:43 compute-0 sudo[287738]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:43 compute-0 sudo[287905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:43 compute-0 sudo[287905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:43 compute-0 sudo[287905]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:43 compute-0 sudo[287930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:36:43 compute-0 sudo[287930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:43 compute-0 sudo[287930]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:43.823+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:43 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:43 compute-0 sudo[287955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:43 compute-0 sudo[287955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:43 compute-0 sudo[287955]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:43 compute-0 sudo[287980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:36:43 compute-0 sudo[287980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.300426411 +0000 UTC m=+0.053995895 container create b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:36:44 compute-0 systemd[1]: Started libpod-conmon-b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8.scope.
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.273236254 +0000 UTC m=+0.026805798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:36:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.394831747 +0000 UTC m=+0.148401211 container init b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noether, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.404682811 +0000 UTC m=+0.158252255 container start b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.407988849 +0000 UTC m=+0.161558383 container attach b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:36:44 compute-0 jovial_noether[288062]: 167 167
Nov 24 20:36:44 compute-0 systemd[1]: libpod-b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8.scope: Deactivated successfully.
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.413090636 +0000 UTC m=+0.166660110 container died b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noether, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3923f2cd5ed232955fec98507e8317f9e4487a8447426f95591d8be28f733be7-merged.mount: Deactivated successfully.
Nov 24 20:36:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:44 compute-0 ceph-mon[75677]: pgmap v1622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:44 compute-0 podman[288045]: 2025-11-24 20:36:44.559749039 +0000 UTC m=+0.313318523 container remove b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:36:44 compute-0 systemd[1]: libpod-conmon-b7fc964e18550c5fc78c4d85a8bbb88c7a3707ca8265c650acaeba8ce6512bf8.scope: Deactivated successfully.
Nov 24 20:36:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:44.600+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:44 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:44.807+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:44 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:44 compute-0 podman[288086]: 2025-11-24 20:36:44.833714558 +0000 UTC m=+0.093416211 container create ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:36:44 compute-0 podman[288086]: 2025-11-24 20:36:44.776330582 +0000 UTC m=+0.036032235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:36:44 compute-0 systemd[1]: Started libpod-conmon-ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764.scope.
Nov 24 20:36:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4df6cc2fff83e4baa7aafbae0352102e85e0b72d8d9cd98a33175dd1591ead/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4df6cc2fff83e4baa7aafbae0352102e85e0b72d8d9cd98a33175dd1591ead/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4df6cc2fff83e4baa7aafbae0352102e85e0b72d8d9cd98a33175dd1591ead/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4df6cc2fff83e4baa7aafbae0352102e85e0b72d8d9cd98a33175dd1591ead/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:36:44 compute-0 podman[288086]: 2025-11-24 20:36:44.93997206 +0000 UTC m=+0.199673753 container init ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:36:44 compute-0 podman[288086]: 2025-11-24 20:36:44.950946414 +0000 UTC m=+0.210648067 container start ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:36:44 compute-0 podman[288086]: 2025-11-24 20:36:44.954890169 +0000 UTC m=+0.214591882 container attach ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:36:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:45.626+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:45 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:45.783+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:45 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:45 compute-0 practical_liskov[288103]: {
Nov 24 20:36:45 compute-0 practical_liskov[288103]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "osd_id": 2,
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "type": "bluestore"
Nov 24 20:36:45 compute-0 practical_liskov[288103]:     },
Nov 24 20:36:45 compute-0 practical_liskov[288103]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "osd_id": 1,
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "type": "bluestore"
Nov 24 20:36:45 compute-0 practical_liskov[288103]:     },
Nov 24 20:36:45 compute-0 practical_liskov[288103]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "osd_id": 0,
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:36:45 compute-0 practical_liskov[288103]:         "type": "bluestore"
Nov 24 20:36:45 compute-0 practical_liskov[288103]:     }
Nov 24 20:36:45 compute-0 practical_liskov[288103]: }
Nov 24 20:36:45 compute-0 systemd[1]: libpod-ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764.scope: Deactivated successfully.
Nov 24 20:36:45 compute-0 podman[288086]: 2025-11-24 20:36:45.963280624 +0000 UTC m=+1.222982237 container died ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:36:45 compute-0 systemd[1]: libpod-ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764.scope: Consumed 1.020s CPU time.
Nov 24 20:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c4df6cc2fff83e4baa7aafbae0352102e85e0b72d8d9cd98a33175dd1591ead-merged.mount: Deactivated successfully.
Nov 24 20:36:46 compute-0 podman[288086]: 2025-11-24 20:36:46.014695319 +0000 UTC m=+1.274396932 container remove ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:36:46 compute-0 systemd[1]: libpod-conmon-ac6da10ad0a04588b13250246319f505f909a4cb9bb0ff33fcf3a351fed5c764.scope: Deactivated successfully.
Nov 24 20:36:46 compute-0 sudo[287980]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:36:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:36:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:36:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:36:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 34da1274-6b35-43a7-92ce-b0deb6eb1933 does not exist
Nov 24 20:36:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1afc393b-b731-490c-90da-b278cb2c6367 does not exist
Nov 24 20:36:46 compute-0 sudo[288149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:36:46 compute-0 sudo[288149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:46 compute-0 sudo[288149]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:46 compute-0 sudo[288174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:36:46 compute-0 sudo[288174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:36:46 compute-0 sudo[288174]: pam_unix(sudo:session): session closed for user root
Nov 24 20:36:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:36:46 compute-0 ceph-mon[75677]: pgmap v1623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:36:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:46.597+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:46 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:46.741+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:46 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:47.586+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:47 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:47.716+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:47 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:48 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:48.549+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:48 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:48.746+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:48 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:49 compute-0 ceph-mon[75677]: pgmap v1624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:49.567+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:49 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:49.736+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:49 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:50.537+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:50 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:50.735+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:50 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:51 compute-0 ceph-mon[75677]: pgmap v1625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:51.575+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:51 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:51.739+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:51 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2731 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:52.607+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:52 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:52.734+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:52 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:53 compute-0 ceph-mon[75677]: pgmap v1626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2731 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:53.606+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:53 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:53.724+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:53 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:36:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:36:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:54.570+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:54 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:54.724+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:54 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:55 compute-0 ceph-mon[75677]: pgmap v1627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:55.554+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:55 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:55 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:55.740+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:56.546+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:56 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:56 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:56.742+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:57 compute-0 ceph-mon[75677]: pgmap v1628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2736 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:36:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:57.566+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:57 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:57.724+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:57 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:58 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2736 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:36:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:58.563+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:58 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:58.725+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:58 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:58 compute-0 podman[288199]: 2025-11-24 20:36:58.885790533 +0000 UTC m=+0.106514810 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:36:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:36:59 compute-0 ceph-mon[75677]: pgmap v1629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:36:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:36:59.606+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:59 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:36:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:36:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:36:59.712+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:59 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:36:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:00.582+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:00 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:00.687+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:00 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:01 compute-0 ceph-mon[75677]: pgmap v1630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:01.538+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:01 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:01.677+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:01 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:02.504+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:02 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2741 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:02.708+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:02 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:03 compute-0 ceph-mon[75677]: pgmap v1631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2741 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:03.547+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:03 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:03.728+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:03 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:04 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:37:04.218 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:37:04 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:37:04.220 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:37:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:04.595+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:04 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:04.681+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:04 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:05 compute-0 ceph-mon[75677]: pgmap v1632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:05.623+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:05 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:05.721+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:05 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:06.621+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:06 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:06.721+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:06 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:07 compute-0 ceph-mon[75677]: pgmap v1633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2746 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:07.651+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:07 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:07.761+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:07 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:08 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2746 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:08.690+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:08 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:08.756+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:08 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:08 compute-0 podman[288217]: 2025-11-24 20:37:08.884668485 +0000 UTC m=+0.105166594 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true)
Nov 24 20:37:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:09 compute-0 ceph-mon[75677]: pgmap v1634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:37:09.394 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:37:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:37:09.395 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:37:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:37:09.395 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:37:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:09.739+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:09 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:09.758+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:09 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:10 compute-0 ceph-mon[75677]: pgmap v1635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:10.741+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:10 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:10.765+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:10 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:11.694+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:11 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:11.789+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:11 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:37:12.222 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:37:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:12 compute-0 ceph-mon[75677]: pgmap v1636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2751 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:12.688+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:12 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:12 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:12.769+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:12 compute-0 podman[288237]: 2025-11-24 20:37:12.918898743 +0000 UTC m=+0.148394270 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 20:37:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2751 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:13.726+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:13 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:13.732+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:13 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:14 compute-0 ceph-mon[75677]: pgmap v1637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:14.685+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:14 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:14.768+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:14 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:15.679+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:15 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:15.737+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:15 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:16 compute-0 ceph-mon[75677]: pgmap v1638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:37:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939111092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:37:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:37:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939111092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:37:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:16.719+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:16 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:16.778+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:16 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/939111092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:37:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/939111092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:37:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2756 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:17.733+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:17 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:17.744+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:17 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2756 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:18 compute-0 ceph-mon[75677]: pgmap v1639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:18.695+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:18 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:18.738+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:18 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:19.701+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:19 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:19.773+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:19 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:20 compute-0 ceph-mon[75677]: pgmap v1640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:20.688+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:20 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:20.794+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:20 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:21.676+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:21 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:21.832+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:21 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2761 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:22 compute-0 ceph-mon[75677]: pgmap v1641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:22 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:22.695+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:22.788+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:22 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2761 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:23.740+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:23 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:23.741+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:23 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:37:24
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr']
Nov 24 20:37:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:37:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:24 compute-0 ceph-mon[75677]: pgmap v1642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:24.714+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:24 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:24.716+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:24 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:25.714+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:25 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:25.722+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:25 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:26.709+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:26 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:26 compute-0 ceph-mon[75677]: pgmap v1643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:26.730+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:26 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.566057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016647566106, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 257, "total_data_size": 790011, "memory_usage": 804776, "flush_reason": "Manual Compaction"}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016647573832, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 778092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46247, "largest_seqno": 47060, "table_properties": {"data_size": 774109, "index_size": 1571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10766, "raw_average_key_size": 20, "raw_value_size": 765291, "raw_average_value_size": 1449, "num_data_blocks": 68, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016603, "oldest_key_time": 1764016603, "file_creation_time": 1764016647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 7838 microseconds, and 3066 cpu microseconds.
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.573887) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 778092 bytes OK
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.573910) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.575680) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.575708) EVENT_LOG_v1 {"time_micros": 1764016647575698, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.575732) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 785688, prev total WAL file size 785688, number of live WAL files 2.
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.576400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303132' seq:72057594037927935, type:22 .. '6C6F676D0032323635' seq:0, type:0; will stop at (end)
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(759KB)], [95(9913KB)]
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016647576551, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10929285, "oldest_snapshot_seqno": -1}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 11223 keys, 10728379 bytes, temperature: kUnknown
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016647649745, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10728379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10663874, "index_size": 35337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 305325, "raw_average_key_size": 27, "raw_value_size": 10468713, "raw_average_value_size": 932, "num_data_blocks": 1334, "num_entries": 11223, "num_filter_entries": 11223, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.650144) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10728379 bytes
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.651458) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 146.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(27.8) write-amplify(13.8) OK, records in: 11748, records dropped: 525 output_compression: NoCompression
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.651489) EVENT_LOG_v1 {"time_micros": 1764016647651476, "job": 56, "event": "compaction_finished", "compaction_time_micros": 73224, "compaction_time_cpu_micros": 31104, "output_level": 6, "num_output_files": 1, "total_output_size": 10728379, "num_input_records": 11748, "num_output_records": 11223, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016647651928, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016647655688, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.576291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.655811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.655819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.655825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.655830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:37:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:37:27.655833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:37:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:27.720+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:27 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:27.725+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:27 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:28 compute-0 ceph-mon[75677]: pgmap v1644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:28.671+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:28 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:28.714+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:28 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:29.690+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:29 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:29.713+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:29 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:29 compute-0 podman[288263]: 2025-11-24 20:37:29.841943122 +0000 UTC m=+0.067909437 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 20:37:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:30 compute-0 ceph-mon[75677]: pgmap v1645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:30.668+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:30 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:30.761+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:30 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:31.665+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:31 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:31.793+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:31 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2766 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:32 compute-0 ceph-mon[75677]: pgmap v1646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:32 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2766 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:32.649+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:32 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:32.771+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:32 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:33.669+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:33 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:33.808+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:33 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:34 compute-0 ceph-mon[75677]: pgmap v1647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:34.674+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:34 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:34.821+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:34 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:37:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:37:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:35.723+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:35 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:35.849+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:35 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:36 compute-0 ceph-mon[75677]: pgmap v1648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:36.690+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:36 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:36.840+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:36 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2777 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:37 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2777 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:37.671+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:37 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:37.798+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:37 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:38 compute-0 ceph-mon[75677]: pgmap v1649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:38.680+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:38 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:38.798+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:38 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:39.701+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:39 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:39.807+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:39 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:39 compute-0 podman[288282]: 2025-11-24 20:37:39.872056307 +0000 UTC m=+0.101943098 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 24 20:37:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:37:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:37:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:37:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:37:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:37:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:40 compute-0 ceph-mon[75677]: pgmap v1650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:40.714+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:40 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:40.801+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:40 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:41 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:41.675+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:41.801+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:41 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:42.632+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:42 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2782 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:42 compute-0 ceph-mon[75677]: pgmap v1651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:42.830+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:42 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:43.621+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:43 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2782 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:43.854+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:43 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:43 compute-0 podman[288302]: 2025-11-24 20:37:43.898354233 +0000 UTC m=+0.123895485 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:37:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:44.659+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:44 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:44 compute-0 ceph-mon[75677]: pgmap v1652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:44.837+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:44 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:45.687+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:45 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:45.791+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:45 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:46 compute-0 sudo[288328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:46 compute-0 sudo[288328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:46 compute-0 sudo[288328]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:46 compute-0 sudo[288353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:37:46 compute-0 sudo[288353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:46 compute-0 sudo[288353]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:46 compute-0 sudo[288378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:46 compute-0 sudo[288378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:46 compute-0 sudo[288378]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:46 compute-0 sudo[288403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:37:46 compute-0 sudo[288403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:46.674+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:46 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:46 compute-0 ceph-mon[75677]: pgmap v1653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:46.760+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:46 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:47 compute-0 sudo[288403]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:37:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:37:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:37:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:37:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ce62616f-be0c-4fed-bfe4-f997d6b25895 does not exist
Nov 24 20:37:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 22406ce7-cf29-46df-a279-e696c0bfb166 does not exist
Nov 24 20:37:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3e7d6c3a-b5ad-41a6-811f-61ed10569ae6 does not exist
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:37:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:37:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:37:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:37:47 compute-0 sudo[288459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:47 compute-0 sudo[288459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:47 compute-0 sudo[288459]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:47 compute-0 sudo[288484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:37:47 compute-0 sudo[288484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:47 compute-0 sudo[288484]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:47 compute-0 sudo[288509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:47 compute-0 sudo[288509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:47 compute-0 sudo[288509]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:47 compute-0 sudo[288534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:37:47 compute-0 sudo[288534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:47.667+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:47 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:47.808+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:47 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:37:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:37:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:47.909258538 +0000 UTC m=+0.040364082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:37:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:48.167421743 +0000 UTC m=+0.298527257 container create d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:37:48 compute-0 systemd[1]: Started libpod-conmon-d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e.scope.
Nov 24 20:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:48.352029222 +0000 UTC m=+0.483134776 container init d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:48.36131201 +0000 UTC m=+0.492417514 container start d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:48.366042747 +0000 UTC m=+0.497148271 container attach d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:37:48 compute-0 brave_keller[288615]: 167 167
Nov 24 20:37:48 compute-0 systemd[1]: libpod-d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e.scope: Deactivated successfully.
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:48.371130783 +0000 UTC m=+0.502236347 container died d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5d9c945184808751f5087bdfcdef47834e28d45f15c291c5c34516acd4aac62-merged.mount: Deactivated successfully.
Nov 24 20:37:48 compute-0 podman[288599]: 2025-11-24 20:37:48.415653693 +0000 UTC m=+0.546759207 container remove d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keller, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:37:48 compute-0 systemd[1]: libpod-conmon-d065ec54bbb919c48c15d6a4430242f300eb6aa6fdb5e42ae4445e263996ee6e.scope: Deactivated successfully.
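[Annotation] The six podman events above (create, init, start, attach, died, remove, all inside ~0.5 s) are cephadm's usual pattern of launching a throwaway container to probe the image rather than run a daemon. The "167 167" printed by brave_keller matches the uid/gid of the ceph user baked into Ceph images, which cephadm obtains by stat-ing /var/lib/ceph inside a one-shot container. A minimal Python sketch of such a probe, under the assumption that this is what the container was doing (this is not cephadm's own code):

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    def probe_uid_gid(image: str = IMAGE) -> tuple[int, int]:
        """Run a throwaway container to read the uid/gid owning
        /var/lib/ceph inside the image -- a one-shot --rm run, like the
        create/start/attach/died/remove sequence in the log."""
        out = subprocess.check_output(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            text=True,
        )
        uid, gid = out.split()
        return int(uid), int(gid)

    if __name__ == "__main__":
        print(probe_uid_gid())  # Ceph images are expected to yield (167, 167)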
Nov 24 20:37:48 compute-0 podman[288640]: 2025-11-24 20:37:48.681381372 +0000 UTC m=+0.074102153 container create c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:37:48 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:48.717+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:48 compute-0 systemd[1]: Started libpod-conmon-c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094.scope.
Nov 24 20:37:48 compute-0 podman[288640]: 2025-11-24 20:37:48.655718385 +0000 UTC m=+0.048439156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:37:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:48.761+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:48 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2045a05aa78dd0bfe463320a6f10ee4fea8b0dcbeec9af7772246751070a273c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2045a05aa78dd0bfe463320a6f10ee4fea8b0dcbeec9af7772246751070a273c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2045a05aa78dd0bfe463320a6f10ee4fea8b0dcbeec9af7772246751070a273c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2045a05aa78dd0bfe463320a6f10ee4fea8b0dcbeec9af7772246751070a273c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2045a05aa78dd0bfe463320a6f10ee4fea8b0dcbeec9af7772246751070a273c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
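[Annotation] The kernel's "supports timestamps until 2038" lines are informational: the XFS volume backing /var/lib/containers was formatted without the bigtime feature, so inode timestamps cap at 0x7fffffff, and each bind mount into the container logs one such message. A small sketch to check whether a given XFS mount has bigtime enabled, assuming xfsprogs is installed and its xfs_info output reports the flag (current releases do):

    import subprocess

    def has_bigtime(mountpoint: str) -> bool:
        """Return True if the XFS filesystem at `mountpoint` was formatted
        with bigtime=1 (timestamps beyond 2038). Requires xfsprogs."""
        info = subprocess.check_output(["xfs_info", mountpoint], text=True)
        return "bigtime=1" in info

    if __name__ == "__main__":
        # /var/lib/containers is where the overlay storage in the log lives
        print(has_bigtime("/var/lib/containers"))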
Nov 24 20:37:48 compute-0 podman[288640]: 2025-11-24 20:37:48.794382425 +0000 UTC m=+0.187103186 container init c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:37:48 compute-0 podman[288640]: 2025-11-24 20:37:48.809839128 +0000 UTC m=+0.202559869 container start c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:37:48 compute-0 podman[288640]: 2025-11-24 20:37:48.813563877 +0000 UTC m=+0.206284648 container attach c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:37:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:48 compute-0 ceph-mon[75677]: pgmap v1654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
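[Annotation] The mon's pgmap line says 2 of 305 PGs are active+clean+laggy: data is fully replicated and consistent, but the OSDs serving those PGs are responding slowly, which lines up with the slow-request warnings from osd.0 and osd.1. The same per-state breakdown is available programmatically; a sketch using `ceph status --format json`, whose pgmap section carries a pgs_by_state list:

    import json
    import subprocess

    def pg_state_counts() -> dict[str, int]:
        """Summarize PG states the way the mon's pgmap line does, from
        `ceph status --format json` (pgmap -> pgs_by_state)."""
        out = subprocess.check_output(
            ["ceph", "status", "--format", "json"], text=True)
        pgmap = json.loads(out)["pgmap"]
        return {s["state_name"]: s["count"] for s in pgmap["pgs_by_state"]}

    if __name__ == "__main__":
        for state, count in pg_state_counts().items():
            print(count, state)   # e.g. 303 active+clean, 2 active+clean+laggy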
Nov 24 20:37:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:49.689+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:49 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:49.722+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:49 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:49 compute-0 sweet_booth[288656]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:37:49 compute-0 sweet_booth[288656]: --> relative data size: 1.0
Nov 24 20:37:49 compute-0 sweet_booth[288656]: --> All data devices are unavailable
Nov 24 20:37:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:49 compute-0 systemd[1]: libpod-c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094.scope: Deactivated successfully.
Nov 24 20:37:49 compute-0 podman[288640]: 2025-11-24 20:37:49.854925265 +0000 UTC m=+1.247646046 container died c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2045a05aa78dd0bfe463320a6f10ee4fea8b0dcbeec9af7772246751070a273c-merged.mount: Deactivated successfully.
Nov 24 20:37:49 compute-0 podman[288640]: 2025-11-24 20:37:49.909725611 +0000 UTC m=+1.302446352 container remove c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:37:49 compute-0 systemd[1]: libpod-conmon-c3b46953fe1606d3414839a0ee08499f594f0a03291343c1f08a192eb1d19094.scope: Deactivated successfully.
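[Annotation] sweet_booth was a ceph-volume run scanning for deployable data devices; its output above ("passed data devices: 0 physical, 3 LVM" ... "All data devices are unavailable") means the three LVs are already consumed by existing OSDs, so the OSD spec has nothing new to create -- expected on a steady-state reconcile. A hedged sketch of the equivalent availability check using ceph-volume's inventory subcommand (run on the host or inside `cephadm shell`; field names follow current ceph-volume JSON output):

    import json
    import subprocess

    def available_devices() -> list[dict]:
        """List devices ceph-volume still considers deployable.
        `ceph-volume inventory --format json` returns one dict per
        device with `available` and `rejected_reasons` fields."""
        raw = subprocess.check_output(
            ["ceph-volume", "inventory", "--format", "json"], text=True)
        return [d for d in json.loads(raw) if d.get("available")]

    if __name__ == "__main__":
        for dev in available_devices():
            print(dev["path"])   # empty on this host: all LVs already hold OSDs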
Nov 24 20:37:49 compute-0 sudo[288534]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:50 compute-0 sudo[288698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:50 compute-0 sudo[288698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:50 compute-0 sudo[288698]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:50 compute-0 sudo[288723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:37:50 compute-0 sudo[288723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:50 compute-0 sudo[288723]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:50 compute-0 sudo[288748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:50 compute-0 sudo[288748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:50 compute-0 sudo[288748]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:50 compute-0 sudo[288773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:37:50 compute-0 sudo[288773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:50.645+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:50 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.730864388 +0000 UTC m=+0.051932391 container create 6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:37:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:50.750+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:50 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:50 compute-0 systemd[1]: Started libpod-conmon-6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b.scope.
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.70406078 +0000 UTC m=+0.025128873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:37:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.825752466 +0000 UTC m=+0.146820509 container init 6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.839991977 +0000 UTC m=+0.161059970 container start 6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:37:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:50 compute-0 ceph-mon[75677]: pgmap v1655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.8446116 +0000 UTC m=+0.165679603 container attach 6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:37:50 compute-0 systemd[1]: libpod-6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b.scope: Deactivated successfully.
Nov 24 20:37:50 compute-0 focused_roentgen[288854]: 167 167
Nov 24 20:37:50 compute-0 conmon[288854]: conmon 6c195fff4c10938d94bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b.scope/container/memory.events
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.847184199 +0000 UTC m=+0.168252212 container died 6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-64375a71a007f25af8b6e66ca70454c1581231f85ec7e8f4b44409b38f8ca4b3-merged.mount: Deactivated successfully.
Nov 24 20:37:50 compute-0 podman[288838]: 2025-11-24 20:37:50.895331637 +0000 UTC m=+0.216399680 container remove 6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:37:50 compute-0 systemd[1]: libpod-conmon-6c195fff4c10938d94bcec4cc56a5603a4f79f758be4ba63cde31a786a7b6c1b.scope: Deactivated successfully.
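[Annotation] conmon's <nwarn> about memory.events above is a benign race: focused_roentgen exited so quickly that systemd tore the libpod scope down before conmon could read the cgroup's OOM counters. For a container that is still running, the same file can be read directly; a sketch, with the scope path taken from the log's own naming convention:

    from pathlib import Path

    def read_memory_events(scope: str) -> dict[str, int]:
        """Parse cgroup v2 memory.events for a libpod scope; the file
        vanishes once systemd removes the scope, which is exactly the
        race conmon warns about for short-lived containers."""
        p = (Path("/sys/fs/cgroup/machine.slice") / scope
             / "container" / "memory.events")
        events = {}
        for line in p.read_text().splitlines():
            key, value = line.split()
            events[key] = int(value)
        return events  # keys include low, high, max, oom, oom_kill

    if __name__ == "__main__":
        # scope name as it appears in the log, for a container still alive
        print(read_memory_events("libpod-<container-id>.scope"))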
Nov 24 20:37:51 compute-0 podman[288877]: 2025-11-24 20:37:51.129753958 +0000 UTC m=+0.055189137 container create cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:37:51 compute-0 systemd[1]: Started libpod-conmon-cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad.scope.
Nov 24 20:37:51 compute-0 podman[288877]: 2025-11-24 20:37:51.105313725 +0000 UTC m=+0.030748924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:37:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19caac6bce5d1808f7ba9afa5d07fbd019d99a50c0e23e5b4cdceba907669442/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19caac6bce5d1808f7ba9afa5d07fbd019d99a50c0e23e5b4cdceba907669442/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19caac6bce5d1808f7ba9afa5d07fbd019d99a50c0e23e5b4cdceba907669442/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19caac6bce5d1808f7ba9afa5d07fbd019d99a50c0e23e5b4cdceba907669442/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:51 compute-0 podman[288877]: 2025-11-24 20:37:51.230868933 +0000 UTC m=+0.156304092 container init cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:37:51 compute-0 podman[288877]: 2025-11-24 20:37:51.238913998 +0000 UTC m=+0.164349147 container start cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:37:51 compute-0 podman[288877]: 2025-11-24 20:37:51.242852584 +0000 UTC m=+0.168287723 container attach cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 20:37:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:51.687+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:51 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:51.753+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:51 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:51 compute-0 fervent_kare[288893]: {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:     "0": [
Nov 24 20:37:51 compute-0 fervent_kare[288893]:         {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "devices": [
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "/dev/loop3"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             ],
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_name": "ceph_lv0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_size": "21470642176",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "name": "ceph_lv0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "tags": {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cluster_name": "ceph",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.crush_device_class": "",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.encrypted": "0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osd_id": "0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.type": "block",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.vdo": "0"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             },
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "type": "block",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "vg_name": "ceph_vg0"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:         }
Nov 24 20:37:51 compute-0 fervent_kare[288893]:     ],
Nov 24 20:37:51 compute-0 fervent_kare[288893]:     "1": [
Nov 24 20:37:51 compute-0 fervent_kare[288893]:         {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "devices": [
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "/dev/loop4"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             ],
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_name": "ceph_lv1",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_size": "21470642176",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "name": "ceph_lv1",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "tags": {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cluster_name": "ceph",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.crush_device_class": "",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.encrypted": "0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osd_id": "1",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.type": "block",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.vdo": "0"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             },
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "type": "block",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "vg_name": "ceph_vg1"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:         }
Nov 24 20:37:51 compute-0 fervent_kare[288893]:     ],
Nov 24 20:37:51 compute-0 fervent_kare[288893]:     "2": [
Nov 24 20:37:51 compute-0 fervent_kare[288893]:         {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "devices": [
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "/dev/loop5"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             ],
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_name": "ceph_lv2",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_size": "21470642176",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "name": "ceph_lv2",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "tags": {
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.cluster_name": "ceph",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.crush_device_class": "",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.encrypted": "0",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osd_id": "2",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.type": "block",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:                 "ceph.vdo": "0"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             },
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "type": "block",
Nov 24 20:37:51 compute-0 fervent_kare[288893]:             "vg_name": "ceph_vg2"
Nov 24 20:37:51 compute-0 fervent_kare[288893]:         }
Nov 24 20:37:51 compute-0 fervent_kare[288893]:     ]
Nov 24 20:37:51 compute-0 fervent_kare[288893]: }
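[Annotation] The fervent_kare lines above are the complete stdout of the `ceph-volume lvm list --format json` command logged at 20:37:50: a JSON object keyed by OSD id, one entry per logical volume, with the ceph.* LVM tags present both flattened in lv_tags and parsed under tags. A small sketch that condenses that document into an osd-to-device map, assuming the JSON has been captured to a file:

    import json

    def osd_map(path: str = "lvm_list.json") -> dict[str, dict]:
        """Condense `ceph-volume lvm list --format json` output (as in
        the log: {"0": [...], "1": [...], "2": [...]}) into
        osd_id -> {lv_path, devices, osd_fsid}."""
        with open(path) as f:
            data = json.load(f)
        return {
            osd_id: {
                "lv_path": lv["lv_path"],
                "devices": lv["devices"],
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
            for osd_id, lvs in data.items()
            for lv in lvs
            if lv["type"] == "block"
        }

    if __name__ == "__main__":
        for osd_id, info in sorted(osd_map().items()):
            print(osd_id, info["devices"], info["osd_fsid"])
            # 0 ['/dev/loop3'] ca6a1aee-..., 1 ['/dev/loop4'] ..., etc.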
Nov 24 20:37:52 compute-0 systemd[1]: libpod-cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad.scope: Deactivated successfully.
Nov 24 20:37:52 compute-0 podman[288902]: 2025-11-24 20:37:52.091321391 +0000 UTC m=+0.044198493 container died cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:37:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-19caac6bce5d1808f7ba9afa5d07fbd019d99a50c0e23e5b4cdceba907669442-merged.mount: Deactivated successfully.
Nov 24 20:37:52 compute-0 podman[288902]: 2025-11-24 20:37:52.162333401 +0000 UTC m=+0.115210433 container remove cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:37:52 compute-0 systemd[1]: libpod-conmon-cf2d27229d562cdc81091b0073d113ccf8c0a1b0da2f85b06e28c20a9cd97fad.scope: Deactivated successfully.
Nov 24 20:37:52 compute-0 sudo[288773]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:52 compute-0 sudo[288917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:52 compute-0 sudo[288917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:52 compute-0 sudo[288917]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:52 compute-0 sudo[288942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:37:52 compute-0 sudo[288942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:52 compute-0 sudo[288942]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:52 compute-0 sudo[288967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:52 compute-0 sudo[288967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:52 compute-0 sudo[288967]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:52 compute-0 sudo[288992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:37:52 compute-0 sudo[288992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
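[Annotation] Having listed LVM-backed OSDs, cephadm immediately repeats the probe as `ceph-volume raw list` (the sudo command above): raw list reads bluestore labels directly off block devices, catching OSDs that are not LVM-managed, and the orchestrator reconciles the two views. Both queries can also be issued through the packaged cephadm binary rather than the copied wrapper under /var/lib/ceph; a sketch, assuming the wrapper's stdout carries only the JSON as it does in this log (the `--` separator and subcommands mirror the logged command line):

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"

    def ceph_volume(*args: str) -> dict:
        """Run a ceph-volume query the way the log does, via
        `cephadm ceph-volume --fsid <fsid> -- <subcommand> --format json`."""
        out = subprocess.check_output(
            ["cephadm", "ceph-volume", "--fsid", FSID, "--",
             *args, "--format", "json"],
            text=True,
        )
        return json.loads(out)

    if __name__ == "__main__":
        lvm = ceph_volume("lvm", "list")   # keyed by OSD id, as printed above
        raw = ceph_volume("raw", "list")   # keyed by bluestore labels on devices
        print(sorted(lvm), sorted(raw))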
Nov 24 20:37:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2787 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:37:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:52.709+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:52 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:52.715+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:52 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:52 compute-0 ceph-mon[75677]: pgmap v1656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:52 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2787 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
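[Annotation] The health check update rolls the 13 + 19 per-OSD slow requests above into one cluster-wide figure: 32 slow ops, the oldest blocked for 2787 s, attributed to osd.0 and osd.1 (SLOW_OPS). The individual stuck operations can be pulled from each OSD's admin socket with the `ops` admin-socket command; a sketch, to be run where the sockets are reachable (e.g. inside `cephadm shell` on this host; field names can vary slightly by release):

    import json
    import subprocess

    def slow_ops(osd_id: int) -> list[dict]:
        """Dump in-flight ops from an OSD's admin socket via
        `ceph daemon osd.N ops` and sort them oldest-first."""
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "ops"], text=True)
        ops = json.loads(out).get("ops", [])
        return sorted(ops, key=lambda op: op.get("age", 0), reverse=True)

    if __name__ == "__main__":
        for osd in (0, 1):  # the daemons named in the SLOW_OPS health check
            for op in slow_ops(osd)[:3]:
                print(osd, round(op.get("age", 0)), op.get("description", ""))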
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:53.013971903 +0000 UTC m=+0.068324420 container create 7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 24 20:37:53 compute-0 systemd[1]: Started libpod-conmon-7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b.scope.
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:52.985322176 +0000 UTC m=+0.039674734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:53.137874617 +0000 UTC m=+0.192227184 container init 7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:53.151503302 +0000 UTC m=+0.205855789 container start 7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:53.15590017 +0000 UTC m=+0.210252697 container attach 7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:37:53 compute-0 confident_lamarr[289073]: 167 167
Nov 24 20:37:53 compute-0 systemd[1]: libpod-7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b.scope: Deactivated successfully.
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:53.159776993 +0000 UTC m=+0.214129490 container died 7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b39e78cfa936b976e642e70865eed377616232d7f5611fec1cbf21478f3f55f-merged.mount: Deactivated successfully.
Nov 24 20:37:53 compute-0 podman[289057]: 2025-11-24 20:37:53.20938733 +0000 UTC m=+0.263739817 container remove 7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamarr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:37:53 compute-0 systemd[1]: libpod-conmon-7012e3b782d948cdb5d56b100ac8f483fca95f40caaac18fa3d3eaedd523110b.scope: Deactivated successfully.
Nov 24 20:37:53 compute-0 podman[289097]: 2025-11-24 20:37:53.457949199 +0000 UTC m=+0.066012126 container create 9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:37:53 compute-0 systemd[1]: Started libpod-conmon-9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e.scope.
Nov 24 20:37:53 compute-0 podman[289097]: 2025-11-24 20:37:53.429431656 +0000 UTC m=+0.037494633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8572d3ee7a179cc3af06d663c954bcf4aaf922ffbfab7244dfc30c6631add25d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8572d3ee7a179cc3af06d663c954bcf4aaf922ffbfab7244dfc30c6631add25d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8572d3ee7a179cc3af06d663c954bcf4aaf922ffbfab7244dfc30c6631add25d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8572d3ee7a179cc3af06d663c954bcf4aaf922ffbfab7244dfc30c6631add25d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:37:53 compute-0 podman[289097]: 2025-11-24 20:37:53.568351743 +0000 UTC m=+0.176414680 container init 9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:37:53 compute-0 podman[289097]: 2025-11-24 20:37:53.581767372 +0000 UTC m=+0.189830259 container start 9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:37:53 compute-0 podman[289097]: 2025-11-24 20:37:53.585416699 +0000 UTC m=+0.193479596 container attach 9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:37:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:53.667+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:53 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:53.698+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:53 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:37:54 compute-0 reverent_gauss[289113]: {
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "osd_id": 2,
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "type": "bluestore"
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:     },
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "osd_id": 1,
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "type": "bluestore"
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:     },
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "osd_id": 0,
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:         "type": "bluestore"
Nov 24 20:37:54 compute-0 reverent_gauss[289113]:     }
Nov 24 20:37:54 compute-0 reverent_gauss[289113]: }
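The JSON payload the reverent_gauss container printed above is a per-OSD inventory: three bluestore OSDs (osd_id 0, 1, 2) on the LVM devices /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all belonging to cluster fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 and keyed by osd_uuid. The shape matches what ceph-volume raw list emits, although the exact command cephadm ran inside the container is not captured in this log. A minimal sketch for flattening the payload, assuming it had been saved to /tmp/raw_list.json (a hypothetical path) and that jq is installed on the host:

    # Hypothetical post-processing of the inventory JSON shown above:
    jq -r 'to_entries[] | "osd.\(.value.osd_id) \(.value.device) \(.value.type)"' /tmp/raw_list.json
    # Given the log above, this would print:
    #   osd.2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore
    #   osd.1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore
    #   osd.0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore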
Nov 24 20:37:54 compute-0 systemd[1]: libpod-9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e.scope: Deactivated successfully.
Nov 24 20:37:54 compute-0 systemd[1]: libpod-9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e.scope: Consumed 1.085s CPU time.
Nov 24 20:37:54 compute-0 podman[289097]: 2025-11-24 20:37:54.658136706 +0000 UTC m=+1.266199593 container died 9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:37:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:54.661+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:54 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:54.684+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:54 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8572d3ee7a179cc3af06d663c954bcf4aaf922ffbfab7244dfc30c6631add25d-merged.mount: Deactivated successfully.
Nov 24 20:37:54 compute-0 podman[289097]: 2025-11-24 20:37:54.740306073 +0000 UTC m=+1.348368980 container remove 9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gauss, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:37:54 compute-0 systemd[1]: libpod-conmon-9e7e40ed1d4eb1b2b8a496c3297a5ab78a4767751dd5b4bc4f70a0679766329e.scope: Deactivated successfully.
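Taken together, the lines from "container create" through the conmon scope teardown trace one complete podman one-shot lifecycle for a cephadm helper: image pull, create, libcrun start, init/start/attach, the JSON payload on stdout, then died, overlay unmount, remove, and scope deactivation, all in about 1.2 s of wall time (1.085 s CPU). A rough reconstruction of the pattern follows; it is a sketch only, since cephadm's real invocation is not recorded verbatim here. The entrypoint is an assumption inferred from the JSON shape, and the bind mounts are guessed from the xfs remount messages (ceph.conf, /var/log/ceph, /var/lib/ceph/crash) plus /dev, which raw device scanning needs:

    # Sketch of the one-shot helper pattern; entrypoint and mounts are assumptions.
    podman run --rm --privileged \
        -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro \
        -v /var/log/ceph:/var/log/ceph:z \
        -v /var/lib/ceph/crash:/var/lib/ceph/crash:z \
        -v /dev:/dev \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume raw list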
Nov 24 20:37:54 compute-0 sudo[288992]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:37:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:37:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:37:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 781f69e2-3e21-4d89-a7d7-1b91c01fcf4e does not exist
Nov 24 20:37:54 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3b1866c9-5f34-4b8f-afa9-89062738486f does not exist
Nov 24 20:37:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:54 compute-0 ceph-mon[75677]: pgmap v1657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:37:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
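Once the helper exits, mgr.compute-0.ofslrn persists the refreshed inventory through the monitor: the two handle_command entries above are config-key set calls against mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0 (the audit lines are truncated before the payload, so the stored value is not visible in this log). The cached blob can be read back later, for example:

    # Inspect the device inventory cephadm cached for this host:
    ceph config-key get mgr/cephadm/host.compute-0.devices.0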
Nov 24 20:37:54 compute-0 sudo[289160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:37:54 compute-0 sudo[289160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:54 compute-0 sudo[289160]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:54 compute-0 sudo[289185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:37:54 compute-0 sudo[289185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:37:54 compute-0 sudo[289185]: pam_unix(sudo:session): session closed for user root
Nov 24 20:37:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:55.639+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:55 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:55.684+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:55 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:56.684+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:56 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:56.733+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:56 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:56 compute-0 ceph-mon[75677]: pgmap v1658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2796 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:37:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
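The _set_new_cache_sizes line is the monitor's periodic cache autotuner splitting the mon's memory budget (cache_size here is about 0.95 GiB) between the incremental-map cache, the full-map cache, and the RocksDB block cache: inc_alloc and full_alloc of 332 MiB each plus kv_alloc of 304 MiB. The budget derives from mon_memory_target; to check the effective value, assuming a shell with the cluster admin keyring:

    ceph config get mon mon_memory_target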
Nov 24 20:37:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:57.724+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:57 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:57.727+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:57 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:57 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2796 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
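The per-OSD counts reconcile with the cluster-level health check: 13 slow ops on osd.0 (all against pool vms) plus 19 on osd.1 (pool default.rgw.log) gives the 32 reported under SLOW_OPS, and by 20:37:57 the oldest has been blocked 2796 s, roughly 47 minutes. Standard triage for this state, assuming it is run from a cephadm shell on this node so the OSD admin sockets are reachable:

    ceph health detail                      # lists SLOW_OPS and the daemons involved
    ceph daemon osd.0 dump_ops_in_flight    # age, flag_point, and description per op
    ceph daemon osd.1 dump_historic_slow_ops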
Nov 24 20:37:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:58.733+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:58 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:58.737+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:58 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:58 compute-0 ceph-mon[75677]: pgmap v1659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:37:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:37:59.719+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:59 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:37:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:37:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:37:59.734+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:59 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:37:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:37:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:00.695+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:00 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:00.717+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:00 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:00 compute-0 podman[289210]: 2025-11-24 20:38:00.879002869 +0000 UTC m=+0.092503005 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
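The health_status event above is podman's scheduled health check for ovn_metadata_agent reporting healthy with a failing streak of 0; per the embedded config_data, the probe is the /openstack/healthcheck script bind-mounted read-only from /var/lib/openstack/healthchecks/ovn_metadata_agent. The same probe can be exercised on demand:

    # Exit status 0 means the container's healthcheck passed:
    podman healthcheck run ovn_metadata_agent; echo "exit=$?"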
Nov 24 20:38:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:00 compute-0 ceph-mon[75677]: pgmap v1660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:01 compute-0 anacron[156334]: Job `cron.weekly' started
Nov 24 20:38:01 compute-0 anacron[156334]: Job `cron.weekly' terminated
Nov 24 20:38:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:01.677+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:01 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:01.681+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:01 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:02.712+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:02 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:02.712+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:02 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2801 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:02 compute-0 ceph-mon[75677]: pgmap v1661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:03.742+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:03 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:03.756+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:03 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2801 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:04.764+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:04 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:04.802+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:04 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:04 compute-0 ceph-mon[75677]: pgmap v1662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:05.776+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:05 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:05.779+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:05 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:06.731+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:06 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:06.740+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:06 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:06 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:38:06.840 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:38:06 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:38:06.842 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:38:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:07 compute-0 ceph-mon[75677]: pgmap v1663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:07.764+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:07 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:07.776+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:07 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:08.776+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:08 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:08.788+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:08 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:09 compute-0 ceph-mon[75677]: pgmap v1664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:38:09.395 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:38:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:38:09.396 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:38:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:38:09.396 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:38:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:09.806+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:09 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:09.817+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:09 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:10.801+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:10 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:10.814+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:10 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:10 compute-0 podman[289232]: 2025-11-24 20:38:10.872985459 +0000 UTC m=+0.103018567 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:38:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:11 compute-0 ceph-mon[75677]: pgmap v1665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:11.810+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:11 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:11.835+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:11 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:38:11.843 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
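This transaction closes the loop opened at 20:38:06: the agent observed SB_Global.nb_cfg move from 12 to 13, waited the 5 seconds it logged ("Delaying updating chassis table"), and now acknowledges by writing neutron:ovn-metadata-sb-cfg=13 into its Chassis_Private row (2981bd26-4511-4552-b2b8-c2a668887f38). The acknowledgement can be read back from the southbound database; the command below is a sketch and assumes it is run somewhere ovn-sbctl can reach the SB DB:

    ovn-sbctl get Chassis_Private 2981bd26-4511-4552-b2b8-c2a668887f38 external_ids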
Nov 24 20:38:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:12.854+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:12 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:12.858+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:12 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:13 compute-0 ceph-mon[75677]: pgmap v1666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:13.842+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:13 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:13.860+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:13 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:14.820+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:14 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:14.846+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:14 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:14 compute-0 podman[289252]: 2025-11-24 20:38:14.878051648 +0000 UTC m=+0.115687977 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 24 20:38:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:15 compute-0 ceph-mon[75677]: pgmap v1667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:15.816+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:15 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:15.896+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:15 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:38:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722805138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:38:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:38:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722805138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:38:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:16.787+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:16 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:16.853+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:16 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:17 compute-0 ceph-mon[75677]: pgmap v1668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/722805138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:38:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/722805138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:38:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:17.738+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:17 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:17.842+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:17 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:18.692+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:18 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:18.800+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:18 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:19 compute-0 ceph-mon[75677]: pgmap v1669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:19.692+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:19 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:19.784+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:19 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:20.725+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:20 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:20.754+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:20 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:21 compute-0 ceph-mon[75677]: pgmap v1670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:21.719+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:21 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:21.740+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:21 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:22 compute-0 ceph-mon[75677]: pgmap v1671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:22.692+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:22 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:22.710+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:22 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:38:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 7369 writes, 29K keys, 7369 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7369 writes, 1583 syncs, 4.66 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 299 writes, 783 keys, 299 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 299 writes, 129 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:38:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:23.679+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:23 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:23.738+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:23 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:38:24
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 24 20:38:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:38:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:24 compute-0 ceph-mon[75677]: pgmap v1672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:24.639+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:24 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:24.754+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:24 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:25.683+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:25 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:25.774+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:25 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:26 compute-0 ceph-mon[75677]: pgmap v1673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:26.713+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:26 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:26.725+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:26 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2826 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:27 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2826 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:27.737+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:27 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:27.743+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:27 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:28 compute-0 ceph-mon[75677]: pgmap v1674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:28 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:28.688+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:28.789+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:28 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:38:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 8338 writes, 33K keys, 8338 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8338 writes, 1897 syncs, 4.40 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 464 writes, 1187 keys, 464 commit groups, 1.0 writes per commit group, ingest: 0.69 MB, 0.00 MB/s
                                           Interval WAL: 464 writes, 197 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:38:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:29.697+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:29 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:29.818+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:29 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:30 compute-0 ceph-mon[75677]: pgmap v1675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:30.737+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:30 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:30.826+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:30 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:31.748+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:31 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:31 compute-0 podman[289280]: 2025-11-24 20:38:31.826762418 +0000 UTC m=+0.060025677 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:38:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:31.846+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:31 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:32 compute-0 ceph-mon[75677]: pgmap v1676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:32.704+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:32 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:32.894+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:32 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:33.665+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:33 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:33.942+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:33 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:38:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 6831 writes, 28K keys, 6831 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6831 writes, 1364 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 486 writes, 1268 keys, 486 commit groups, 1.0 writes per commit group, ingest: 0.75 MB, 0.00 MB/s
                                           Interval WAL: 486 writes, 199 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:38:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:34 compute-0 ceph-mon[75677]: pgmap v1677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:34.696+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:34 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:34.897+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:34 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:38:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
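The pg_autoscaler pass above appears to compute each pool's PG target as usage_ratio × bias × a cluster PG budget, then quantize to a power of two. The budget consistent with these lines is 300 — plausibly mon_target_pg_per_osd = 100 across 3 OSDs, which is an assumption, not something the log states. A sketch reproducing two of the lines (Python, inputs copied from the log):

    # Assumed autoscaler arithmetic: pg_target = usage_ratio * bias * pg_budget
    # pg_budget = 300 is inferred by fitting the lines above, not read from config.
    def pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    print(pg_target(0.0008637525843263658, 1.0))  # ~0.2591 -> 'vms', quantized to 32
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.000610 -> 'cephfs.cephfs.meta', quantized to 16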
Nov 24 20:38:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:35.719+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:35 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:35.880+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:35 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:36 compute-0 ceph-mon[75677]: pgmap v1678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:36.720+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:36 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:36.835+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:36 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:37.727+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:37 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:37.838+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:37 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:38 compute-0 ceph-mon[75677]: pgmap v1679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:38.739+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:38 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:38.821+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:38 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 20:38:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:39.768+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:39 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:39.820+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:39 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:38:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:38:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:38:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:38:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:38:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:40 compute-0 ceph-mon[75677]: pgmap v1680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:40.738+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:40 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:40.786+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:40 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:41.701+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:41 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:41 compute-0 sshd-session[289299]: Received disconnect from 182.93.7.194 port 64198:11: Bye Bye [preauth]
Nov 24 20:38:41 compute-0 sshd-session[289299]: Disconnected from authenticating user root 182.93.7.194 port 64198 [preauth]
Nov 24 20:38:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:41.835+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:41 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:41 compute-0 podman[289301]: 2025-11-24 20:38:41.847548755 +0000 UTC m=+0.085064617 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:38:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2837 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:42 compute-0 ceph-mon[75677]: pgmap v1681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:42 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2837 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
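The SLOW_OPS age in the health-check lines above can be used to back-date roughly when the oldest op got stuck: subtracting 2837 s from the 20:38:42 report lands near 19:51:25 UTC. This is a rough estimate — the age is sampled slightly before the message is logged, which is also why the counter (2837, then 2846, then 2851 below) only approximately tracks the 5-second gaps between reports. A one-line check (Python):

    # Back-dating the oldest blocked op from the health check above
    from datetime import datetime, timedelta
    reported = datetime(2025, 11, 24, 20, 38, 42)
    print(reported - timedelta(seconds=2837))   # 2025-11-24 19:51:25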
Nov 24 20:38:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:42.746+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:42 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:42.802+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:42 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:43.779+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:43 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:43.779+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:43 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:44 compute-0 ceph-mon[75677]: pgmap v1682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:44.761+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:44 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:44.762+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:44 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:45.718+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:45 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:45.718+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:45 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:45 compute-0 podman[289321]: 2025-11-24 20:38:45.914937201 +0000 UTC m=+0.138831795 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:38:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:46.698+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:46 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:46 compute-0 ceph-mon[75677]: pgmap v1683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:46.759+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:46 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2846 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:47.671+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:47 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:47 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2846 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:47.803+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:47 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:48.655+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:48 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:48 compute-0 ceph-mon[75677]: pgmap v1684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:48.803+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:48 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:49.662+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:49 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:49.770+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:49 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:50.615+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:50 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:50.726+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:50 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:50 compute-0 ceph-mon[75677]: pgmap v1685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:51.565+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:51 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:51.713+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:51 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:52.532+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:52 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:52.748+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:52 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2851 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:52 compute-0 ceph-mon[75677]: pgmap v1686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:53.568+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:53 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2851 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:38:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:53.797+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:53 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:38:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:38:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:54.587+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:54 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:54 compute-0 ceph-mon[75677]: pgmap v1687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:54.795+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:54 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:55 compute-0 sudo[289348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:38:55 compute-0 sudo[289348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:55 compute-0 sudo[289348]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:55 compute-0 sudo[289373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:38:55 compute-0 sudo[289373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:55 compute-0 sudo[289373]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:55 compute-0 sudo[289398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:38:55 compute-0 sudo[289398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:55 compute-0 sudo[289398]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:55 compute-0 sudo[289423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:38:55 compute-0 sudo[289423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:55.567+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:55 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:55.831+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:55 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:55 compute-0 sudo[289423]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:38:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:38:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:38:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:38:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:38:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:38:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 07d6ffed-eb42-475e-b3ba-267a933c3b30 does not exist
Nov 24 20:38:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a333c9c6-88cd-4dec-a2ad-e20eb0612909 does not exist
Nov 24 20:38:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4fb59a2f-37be-4342-adc8-b6f906f6fb61 does not exist
Nov 24 20:38:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:38:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:38:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:38:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:38:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:38:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:38:56 compute-0 sudo[289479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:38:56 compute-0 sudo[289479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:56 compute-0 sudo[289479]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
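
[annotation] The pgmap digest line encodes the whole placement-group picture in one string: 305 PGs total, 2 of them active+clean+laggy (consistent with the two OSDs stuck on slow ops) and 303 active+clean, followed by data and capacity totals. A small parser for that digest, assuming the fixed "pgmap vN: N pgs: <counts>; <usage>" layout seen throughout this journal:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    def parse_pgmap(line):
        """Split a pgmap digest into (version, total PGs, {state: count})."""
        m = PGMAP_RE.search(line)
        if not m:
            return None
        states = {}
        for part in m.group("states").split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
        return int(m.group("version")), int(m.group("total")), states

    # parse_pgmap("... pgmap v1688: 305 pgs: 2 active+clean+laggy, 303 active+clean; ...")
    # -> (1688, 305, {"active+clean+laggy": 2, "active+clean": 303})
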
Nov 24 20:38:56 compute-0 sudo[289504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:38:56 compute-0 sudo[289504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:56 compute-0 sudo[289504]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:56 compute-0 sudo[289529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:38:56 compute-0 sudo[289529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:56 compute-0 sudo[289529]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:56 compute-0 sudo[289554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:38:56 compute-0 sudo[289554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
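
[annotation] The sudo COMMAND above shows cephadm's usual call chain: the orchestrator copies the cephadm binary onto the host as /var/lib/ceph/<fsid>/cephadm.<digest> and invokes it with `ceph-volume ... -- lvm batch ...`, which then launches the short-lived podman containers that follow. The digest suffix looks like a checksum of the copied file itself; that naming scheme is an assumption on my part (the log does not state it), and a quick way to check it is:

    import hashlib
    from pathlib import Path

    def cephadm_digest_matches(path):
        """True if a cephadm.<digest> file name matches the SHA-256 of its own
        contents (assumed naming scheme; verify against your cephadm version)."""
        p = Path(path)
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        return p.name.endswith(digest)

    # cephadm_digest_matches(
    #     "/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
    #     "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
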
Nov 24 20:38:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:56.601+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:56 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.721098465 +0000 UTC m=+0.051883628 container create 1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:38:56 compute-0 systemd[1]: Started libpod-conmon-1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278.scope.
Nov 24 20:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.704028189 +0000 UTC m=+0.034813352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.813267581 +0000 UTC m=+0.144052824 container init 1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:38:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:38:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:38:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:38:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:38:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:38:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:38:56 compute-0 ceph-mon[75677]: pgmap v1688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.826513036 +0000 UTC m=+0.157298219 container start 1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:38:56 compute-0 elastic_babbage[289635]: 167 167
Nov 24 20:38:56 compute-0 systemd[1]: libpod-1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278.scope: Deactivated successfully.
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.83450778 +0000 UTC m=+0.165292983 container attach 1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.835771783 +0000 UTC m=+0.166556986 container died 1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:38:56 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:56.856+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b12bec0853586dc3905c848dd67d940786e20884b5386d9ac4446769d53dbbf7-merged.mount: Deactivated successfully.
Nov 24 20:38:56 compute-0 podman[289619]: 2025-11-24 20:38:56.890848107 +0000 UTC m=+0.221633280 container remove 1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:38:56 compute-0 systemd[1]: libpod-conmon-1d29a829ae9afa284ac6af13a83e22e8dc444f2a508226b40dd33c65d54d0278.scope: Deactivated successfully.
Nov 24 20:38:57 compute-0 podman[289659]: 2025-11-24 20:38:57.124724403 +0000 UTC m=+0.075806669 container create 97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:38:57 compute-0 systemd[1]: Started libpod-conmon-97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525.scope.
Nov 24 20:38:57 compute-0 podman[289659]: 2025-11-24 20:38:57.089422209 +0000 UTC m=+0.040504505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:38:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7331fc13c1af8548c922ca3c55a5fd31edc823552b0e22fd64f6bebea08b612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7331fc13c1af8548c922ca3c55a5fd31edc823552b0e22fd64f6bebea08b612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7331fc13c1af8548c922ca3c55a5fd31edc823552b0e22fd64f6bebea08b612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7331fc13c1af8548c922ca3c55a5fd31edc823552b0e22fd64f6bebea08b612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7331fc13c1af8548c922ca3c55a5fd31edc823552b0e22fd64f6bebea08b612/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
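
[annotation] The kernel's "supports timestamps until 2038" messages are informational, not errors: each bind-mount remount into the container notes that the underlying XFS filesystem was created without the bigtime feature, so its inode timestamps cap at 0x7fffffff (January 2038). One way to check a given mount, assuming xfsprogs' `xfs_info` reports a `bigtime=` field as it does on RHEL 9:

    import re
    import subprocess

    def xfs_bigtime_enabled(mountpoint="/"):
        """True if `xfs_info` reports bigtime=1 for the filesystem backing
        the mountpoint (field name assumed from recent xfsprogs output)."""
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        m = re.search(r"bigtime=(\d)", out)
        return bool(m) and m.group(1) == "1"
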
Nov 24 20:38:57 compute-0 podman[289659]: 2025-11-24 20:38:57.216511618 +0000 UTC m=+0.167593984 container init 97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:38:57 compute-0 podman[289659]: 2025-11-24 20:38:57.228341474 +0000 UTC m=+0.179423790 container start 97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:38:57 compute-0 podman[289659]: 2025-11-24 20:38:57.232683311 +0000 UTC m=+0.183765587 container attach 97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:38:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:38:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:57.621+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:57 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:57.826+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:57 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:58 compute-0 tender_rubin[289676]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:38:58 compute-0 tender_rubin[289676]: --> relative data size: 1.0
Nov 24 20:38:58 compute-0 tender_rubin[289676]: --> All data devices are unavailable
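
[annotation] The three tender_rubin lines above are the stdout of the `ceph-volume lvm batch` probe launched at 20:38:56, and "All data devices are unavailable" is ceph-volume's way of saying the batch is a no-op here: all three LVs already carry OSD lvm tags (as the `lvm list` JSON further down confirms), so nothing is left to prepare and the container exits cleanly. A sketch of treating that sentinel as an idempotent success when wrapping the same command, with the sentinel string assumed stable only for this ceph-volume release:

    BATCH_NOOP_SENTINEL = "All data devices are unavailable"

    def batch_was_noop(ceph_volume_stdout):
        """True if `ceph-volume lvm batch` found no device left to prepare
        (sentinel text assumed stable for this release)."""
        return any(BATCH_NOOP_SENTINEL in line
                   for line in ceph_volume_stdout.splitlines())
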
Nov 24 20:38:58 compute-0 systemd[1]: libpod-97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525.scope: Deactivated successfully.
Nov 24 20:38:58 compute-0 podman[289659]: 2025-11-24 20:38:58.321743655 +0000 UTC m=+1.272825941 container died 97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:38:58 compute-0 systemd[1]: libpod-97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525.scope: Consumed 1.046s CPU time.
Nov 24 20:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7331fc13c1af8548c922ca3c55a5fd31edc823552b0e22fd64f6bebea08b612-merged.mount: Deactivated successfully.
Nov 24 20:38:58 compute-0 podman[289659]: 2025-11-24 20:38:58.408531386 +0000 UTC m=+1.359613712 container remove 97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:38:58 compute-0 systemd[1]: libpod-conmon-97cc1940afe18c1226b27e9a581f3a8f37ff8b1da832082a5baa3b1a5d946525.scope: Deactivated successfully.
Nov 24 20:38:58 compute-0 sudo[289554]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:58 compute-0 sudo[289717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:38:58 compute-0 sudo[289717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:58 compute-0 sudo[289717]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:58 compute-0 sudo[289742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:38:58 compute-0 sudo[289742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:58.611+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:58 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:58 compute-0 sudo[289742]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:58 compute-0 sudo[289767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:38:58 compute-0 sudo[289767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:58 compute-0 sudo[289767]: pam_unix(sudo:session): session closed for user root
Nov 24 20:38:58 compute-0 sudo[289792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:38:58 compute-0 sudo[289792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:38:58 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:58.861+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:59 compute-0 ceph-mon[75677]: pgmap v1689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.313743991 +0000 UTC m=+0.074792442 container create 0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 24 20:38:59 compute-0 systemd[1]: Started libpod-conmon-0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13.scope.
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.285234488 +0000 UTC m=+0.046282999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:38:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.433457403 +0000 UTC m=+0.194505904 container init 0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.446236855 +0000 UTC m=+0.207285276 container start 0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.450497049 +0000 UTC m=+0.211545510 container attach 0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 20:38:59 compute-0 wizardly_noyce[289875]: 167 167
Nov 24 20:38:59 compute-0 systemd[1]: libpod-0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13.scope: Deactivated successfully.
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.455189225 +0000 UTC m=+0.216237686 container died 0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5925eebc7eb8775dc6fc57235f4e432f013fb7d4f14bf3548824473bd7940173-merged.mount: Deactivated successfully.
Nov 24 20:38:59 compute-0 podman[289858]: 2025-11-24 20:38:59.509477538 +0000 UTC m=+0.270525959 container remove 0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_noyce, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:38:59 compute-0 systemd[1]: libpod-conmon-0d50465892d0035389f61a7e3fd2c3968bef8e7effd3e58a0f2e3fd1f9c9fb13.scope: Deactivated successfully.
Nov 24 20:38:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:38:59.647+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:59 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:38:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:38:59 compute-0 podman[289900]: 2025-11-24 20:38:59.740332583 +0000 UTC m=+0.059233056 container create 6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:38:59 compute-0 systemd[1]: Started libpod-conmon-6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748.scope.
Nov 24 20:38:59 compute-0 podman[289900]: 2025-11-24 20:38:59.714171843 +0000 UTC m=+0.033072336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:38:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a5d86797acc1a153b8f0d686bf55d6a64d30cf0c228676c917deda7443d12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a5d86797acc1a153b8f0d686bf55d6a64d30cf0c228676c917deda7443d12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a5d86797acc1a153b8f0d686bf55d6a64d30cf0c228676c917deda7443d12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a5d86797acc1a153b8f0d686bf55d6a64d30cf0c228676c917deda7443d12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:38:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:38:59.840+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:59 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:38:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:38:59 compute-0 podman[289900]: 2025-11-24 20:38:59.842085714 +0000 UTC m=+0.160986197 container init 6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:38:59 compute-0 podman[289900]: 2025-11-24 20:38:59.851733303 +0000 UTC m=+0.170633766 container start 6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:38:59 compute-0 podman[289900]: 2025-11-24 20:38:59.85501142 +0000 UTC m=+0.173911883 container attach 6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:39:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:00.618+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:00 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]: {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:     "0": [
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:         {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "devices": [
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "/dev/loop3"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             ],
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_name": "ceph_lv0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_size": "21470642176",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "name": "ceph_lv0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "tags": {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cluster_name": "ceph",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.crush_device_class": "",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.encrypted": "0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osd_id": "0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.type": "block",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.vdo": "0"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             },
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "type": "block",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "vg_name": "ceph_vg0"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:         }
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:     ],
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:     "1": [
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:         {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "devices": [
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "/dev/loop4"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             ],
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_name": "ceph_lv1",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_size": "21470642176",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "name": "ceph_lv1",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "tags": {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cluster_name": "ceph",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.crush_device_class": "",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.encrypted": "0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osd_id": "1",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.type": "block",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.vdo": "0"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             },
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "type": "block",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "vg_name": "ceph_vg1"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:         }
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:     ],
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:     "2": [
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:         {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "devices": [
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "/dev/loop5"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             ],
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_name": "ceph_lv2",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_size": "21470642176",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "name": "ceph_lv2",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "tags": {
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.cluster_name": "ceph",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.crush_device_class": "",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.encrypted": "0",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osd_id": "2",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.type": "block",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:                 "ceph.vdo": "0"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             },
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "type": "block",
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:             "vg_name": "ceph_vg2"
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:         }
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]:     ]
Nov 24 20:39:00 compute-0 peaceful_mendel[289916]: }
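
[annotation] The peaceful_mendel block above is the complete JSON document produced by `ceph-volume lvm list --format json`, with every line carrying the journald prefix. It maps OSD ids 0-2 to their logical volumes (ceph_lv0..2 on /dev/loop3..5) and repeats the lvm tags both flattened (lv_tags) and structured (tags). A hedged sketch for recovering and querying it from a journal export, assuming each payload line contains the literal `peaceful_mendel[...]: ` prefix exactly as shown:

    import json
    import re

    PREFIX_RE = re.compile(r"peaceful_mendel\[\d+\]: (.*)$")

    def osd_devices(journal_path):
        """Rebuild the `ceph-volume lvm list --format json` document from a
        journal export and map each OSD id to its LV path and backing devices."""
        payload = []
        with open(journal_path, encoding="utf-8") as fh:
            for line in fh:
                m = PREFIX_RE.search(line)
                if m:
                    payload.append(m.group(1))
        listing = json.loads("\n".join(payload))
        return {osd_id: (lvs[0]["lv_path"], lvs[0]["devices"])
                for osd_id, lvs in listing.items()}

    # -> {"0": ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"]), "1": (...), "2": (...)}
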
Nov 24 20:39:00 compute-0 systemd[1]: libpod-6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748.scope: Deactivated successfully.
Nov 24 20:39:00 compute-0 podman[289925]: 2025-11-24 20:39:00.701527245 +0000 UTC m=+0.034016571 container died 6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff3a5d86797acc1a153b8f0d686bf55d6a64d30cf0c228676c917deda7443d12-merged.mount: Deactivated successfully.
Nov 24 20:39:00 compute-0 podman[289925]: 2025-11-24 20:39:00.754353779 +0000 UTC m=+0.086843085 container remove 6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:39:00 compute-0 systemd[1]: libpod-conmon-6db62b6faba560fe0a0b4ee577ce7e57774c4de88ebb8d33ce8f85cc1b5aa748.scope: Deactivated successfully.
Nov 24 20:39:00 compute-0 sudo[289792]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:00 compute-0 sudo[289941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:39:00 compute-0 sudo[289941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:39:00 compute-0 sudo[289941]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:00.879+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:00 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:00 compute-0 sudo[289966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:39:00 compute-0 sudo[289966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:39:00 compute-0 sudo[289966]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:01 compute-0 sudo[289991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:39:01 compute-0 sudo[289991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:39:01 compute-0 sudo[289991]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:01 compute-0 ceph-mon[75677]: pgmap v1690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:01 compute-0 sudo[290016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:39:01 compute-0 sudo[290016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.518388887 +0000 UTC m=+0.065914434 container create 21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_meninsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:39:01 compute-0 systemd[1]: Started libpod-conmon-21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30.scope.
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.491457777 +0000 UTC m=+0.038983384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:39:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:01.615+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:01 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.622690887 +0000 UTC m=+0.170216444 container init 21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.632501789 +0000 UTC m=+0.180027336 container start 21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_meninsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.636098306 +0000 UTC m=+0.183623853 container attach 21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:39:01 compute-0 kind_meninsky[290096]: 167 167
Nov 24 20:39:01 compute-0 systemd[1]: libpod-21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30.scope: Deactivated successfully.
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.640716849 +0000 UTC m=+0.188242406 container died 21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6d6f0e0528885e73dd00cd8800b197a86af84250f4529c8d65f58a810d48169-merged.mount: Deactivated successfully.
Nov 24 20:39:01 compute-0 podman[290079]: 2025-11-24 20:39:01.694383415 +0000 UTC m=+0.241908962 container remove 21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_meninsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:39:01 compute-0 systemd[1]: libpod-conmon-21db7461de9f76852e7433a71e4f52f73adafc21e5320f4060fd3b86b7ef5e30.scope: Deactivated successfully.
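[annotation] The kind_meninsky container above lives for about 200 ms (create, start, attach, die, remove) and its only output is "167 167": the uid/gid of the ceph user inside the quay.io/ceph/ceph image, which cephadm probes before deploying daemons. A hedged reconstruction of that probe, assuming cephadm's usual stat-based uid/gid discovery:

    import subprocess

    # Hedged reconstruction: cephadm is believed to discover the ceph uid/gid
    # baked into an image by running stat inside a throwaway container.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"]
    )
    uid, gid = out.split()
    print(uid.decode(), gid.decode())  # expected: 167 167, as logged by kind_meninsky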
Nov 24 20:39:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:01.911+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:01 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:01 compute-0 podman[290118]: 2025-11-24 20:39:01.940846488 +0000 UTC m=+0.064414034 container create ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:39:01 compute-0 systemd[1]: Started libpod-conmon-ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6.scope.
Nov 24 20:39:02 compute-0 podman[290118]: 2025-11-24 20:39:01.914575455 +0000 UTC m=+0.038143051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305bd64517369ca2aa7b281822d6458d2d91d11f5a6bb03c48015b384c08485b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305bd64517369ca2aa7b281822d6458d2d91d11f5a6bb03c48015b384c08485b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305bd64517369ca2aa7b281822d6458d2d91d11f5a6bb03c48015b384c08485b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/305bd64517369ca2aa7b281822d6458d2d91d11f5a6bb03c48015b384c08485b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:39:02 compute-0 podman[290118]: 2025-11-24 20:39:02.030856047 +0000 UTC m=+0.154423663 container init ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:39:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:02 compute-0 podman[290118]: 2025-11-24 20:39:02.049025452 +0000 UTC m=+0.172592998 container start ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:39:02 compute-0 podman[290118]: 2025-11-24 20:39:02.057907369 +0000 UTC m=+0.181474875 container attach ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:39:02 compute-0 podman[290132]: 2025-11-24 20:39:02.084251465 +0000 UTC m=+0.091903650 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
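[annotation] The health_status events are emitted by podman's healthcheck timer; the config_data=... blob carries the EDPM-generated container definition, whose healthcheck mounts /var/lib/openstack/healthchecks/<name> into the container and runs /openstack/healthcheck. Note the blob is a Python literal (single quotes, bare True), not JSON, so ast.literal_eval is the natural parser. A sketch on an abbreviated copy of the blob above:

    import ast

    # Abbreviated copy of the config_data=... literal from the health_status line.
    config_data = ast.literal_eval(
        "{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
        "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
        "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}"
    )
    print(config_data["healthcheck"]["test"])  # -> /openstack/healthcheck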
Nov 24 20:39:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
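[annotation] SLOW_OPS here means the 32 ops (13 + 19, matching the per-pool warnings above) have been blocked for up to 2857 s, consistent with the 2 active+clean+laggy PGs in the pgmap line. The usual follow-up is to inspect the in-flight ops on the named daemons via the OSD admin socket; a sketch, assuming the ceph CLI can reach the socket (e.g. inside cephadm shell) and the JSON field names of recent Ceph releases:

    import json
    import subprocess

    def dump_ops_in_flight(osd_id: int) -> list:
        # `ceph daemon osd.N dump_ops_in_flight` prints JSON on stdout;
        # the "ops" key and per-op fields are assumed from recent releases.
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "dump_ops_in_flight"])
        return json.loads(out)["ops"]

    for osd_id in (0, 1):  # the daemons named in the SLOW_OPS warning
        for op in dump_ops_in_flight(osd_id):
            print(osd_id, op.get("age"), op.get("description"))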
Nov 24 20:39:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:02.583+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:02 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:02.951+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:02 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:03 compute-0 ceph-mon[75677]: pgmap v1691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:03 compute-0 stoic_wilson[290135]: {
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "osd_id": 2,
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "type": "bluestore"
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:     },
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "osd_id": 1,
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "type": "bluestore"
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:     },
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "osd_id": 0,
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:         "type": "bluestore"
Nov 24 20:39:03 compute-0 stoic_wilson[290135]:     }
Nov 24 20:39:03 compute-0 stoic_wilson[290135]: }
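[annotation] The JSON printed by stoic_wilson is the answer to the `ceph-volume ... raw list --format json` call sudo'd at 20:39:01: a device inventory keyed by OSD UUID, one bluestore LV per OSD (osd.0 through osd.2 on ceph_vg0 through ceph_vg2, all in fsid 05e060a3-406b-57f0-89d2-ec35f5b09305). A sketch of consuming it, assuming only the shape shown above and abbreviated to a single entry:

    import json

    # One entry from the raw-list output above.
    raw_list = json.loads("""
    {
        "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
            "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
            "type": "bluestore"
        }
    }
    """)

    for osd_uuid, osd in sorted(raw_list.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")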
Nov 24 20:39:03 compute-0 systemd[1]: libpod-ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6.scope: Deactivated successfully.
Nov 24 20:39:03 compute-0 podman[290118]: 2025-11-24 20:39:03.17266658 +0000 UTC m=+1.296234116 container died ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilson, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:39:03 compute-0 systemd[1]: libpod-ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6.scope: Consumed 1.131s CPU time.
Nov 24 20:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-305bd64517369ca2aa7b281822d6458d2d91d11f5a6bb03c48015b384c08485b-merged.mount: Deactivated successfully.
Nov 24 20:39:03 compute-0 podman[290118]: 2025-11-24 20:39:03.243677651 +0000 UTC m=+1.367245167 container remove ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:39:03 compute-0 systemd[1]: libpod-conmon-ca4f63f612dd7ffade905c74abad6b5ca0e3b67ab4fb72e3889b504d7cbdffe6.scope: Deactivated successfully.
Nov 24 20:39:03 compute-0 sudo[290016]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:39:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:39:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:39:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:39:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4f44e605-508e-4538-a8cd-9720e3f8bed4 does not exist
Nov 24 20:39:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ee75e350-96eb-46b7-8c58-73d9492214d8 does not exist
Nov 24 20:39:03 compute-0 sudo[290199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:39:03 compute-0 sudo[290199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:39:03 compute-0 sudo[290199]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:03 compute-0 sudo[290224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:39:03 compute-0 sudo[290224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:39:03 compute-0 sudo[290224]: pam_unix(sudo:session): session closed for user root
Nov 24 20:39:03 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:03.570+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:04.000+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:04 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:39:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:39:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:04.551+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:04 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:05.038+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:05 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:05 compute-0 ceph-mon[75677]: pgmap v1692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:05.576+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:05 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:06.024+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:06 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:06.533+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:06 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:07.011+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:07 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:07 compute-0 ceph-mon[75677]: pgmap v1693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:07.485+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:07 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2867 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:08.003+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:08 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:08 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2867 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:08.524+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:08 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:08.994+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:08 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:09 compute-0 ceph-mon[75677]: pgmap v1694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:39:09.396 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:39:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:39:09.397 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:39:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:39:09.397 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:39:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:09.520+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:09 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:10.007+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:10 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:10 compute-0 ceph-mon[75677]: pgmap v1695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:10.510+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:10 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:10 compute-0 sshd-session[289796]: Invalid user amssys from 14.63.196.175 port 33862
Nov 24 20:39:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:11.044+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:11 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:11 compute-0 sshd-session[289796]: Received disconnect from 14.63.196.175 port 33862:11: Bye Bye [preauth]
Nov 24 20:39:11 compute-0 sshd-session[289796]: Disconnected from invalid user amssys 14.63.196.175 port 33862 [preauth]
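[annotation] Unrelated to the Ceph slow ops, the node is also fielding SSH brute-force probes: an invalid user "amssys" from 14.63.196.175 that disconnects before authentication. A quick sketch to rank offending sources from a journal dump, matching the sshd-session lines above:

    import re
    import sys
    from collections import Counter

    # Matches lines like "Invalid user amssys from 14.63.196.175 port 33862"
    PAT = re.compile(r"Invalid user (\S+) from (\d{1,3}(?:\.\d{1,3}){3}) port \d+")

    attempts = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            attempts[m.group(2)] += 1

    for ip, n in attempts.most_common(10):
        print(f"{ip}: {n} invalid-user attempts")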
Nov 24 20:39:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:11.516+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:11 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:12.048+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:12 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:12 compute-0 ceph-mon[75677]: pgmap v1696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:12.549+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:12 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2872 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:39:12.628 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:39:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:39:12.629 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:39:12 compute-0 podman[290249]: 2025-11-24 20:39:12.877994385 +0000 UTC m=+0.110893408 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 24 20:39:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:13.077+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:13 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2872 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:13.588+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:13 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:14.061+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:14 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:14 compute-0 ceph-mon[75677]: pgmap v1697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:14.601+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:14 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:15.098+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:15 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:15.586+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:15 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:15 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:39:15.631 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:39:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:16.063+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:16 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:16 compute-0 ceph-mon[75677]: pgmap v1698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:39:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441639245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:39:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:39:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441639245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
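[annotation] These audit entries show client.openstack at 192.168.122.10 (most likely the Cinder/Nova RBD driver polling capacity for its scheduler stats) dispatching `df` and `osd pool get-quota` against the volumes pool. The same two queries from the CLI, as a sketch; the output field names are assumed from recent Ceph releases:

    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))

    # "stats"/"total_avail_bytes" and "quota_max_bytes" are assumed field names.
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))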
Nov 24 20:39:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:16.589+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:16 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:16 compute-0 podman[290270]: 2025-11-24 20:39:16.891298365 +0000 UTC m=+0.113463256 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:39:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:17.033+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:17 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1441639245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:39:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1441639245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:39:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2877 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:17.604+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:17 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:18.052+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:18 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2877 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:18 compute-0 ceph-mon[75677]: pgmap v1699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:18.630+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:18 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:19 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:19.049+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:19.629+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:19 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:20.088+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:20 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:20.618+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:20 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:20 compute-0 ceph-mon[75677]: pgmap v1700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:21.127+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:21 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:21.603+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:21 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:22.108+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:22 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:22.563+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:22 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2882 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:22 compute-0 ceph-mon[75677]: pgmap v1701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:23.123+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:23 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:23.551+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:23 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2882 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:39:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:24.156+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:24 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:39:24
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', 'vms', '.rgw.root', 'default.rgw.meta']
Nov 24 20:39:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:39:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:24.517+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:24 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:24 compute-0 ceph-mon[75677]: pgmap v1702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:39:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:25.126+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:25 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:25.494+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:25 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:26.138+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:26 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:26.528+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:26 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:26 compute-0 ceph-mon[75677]: pgmap v1703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.758236) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016766758281, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1760, "num_deletes": 251, "total_data_size": 2101378, "memory_usage": 2143728, "flush_reason": "Manual Compaction"}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016766773356, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 2056886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47061, "largest_seqno": 48820, "table_properties": {"data_size": 2049158, "index_size": 4226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 21340, "raw_average_key_size": 21, "raw_value_size": 2031472, "raw_average_value_size": 2092, "num_data_blocks": 185, "num_entries": 971, "num_filter_entries": 971, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016648, "oldest_key_time": 1764016648, "file_creation_time": 1764016766, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 15208 microseconds, and 5892 cpu microseconds.
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.773430) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 2056886 bytes OK
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.773470) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.775715) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.775739) EVENT_LOG_v1 {"time_micros": 1764016766775730, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.775765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2093199, prev total WAL file size 2093199, number of live WAL files 2.
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.776909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(2008KB)], [98(10MB)]
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016766776954, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 12785265, "oldest_snapshot_seqno": -1}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 11680 keys, 11279802 bytes, temperature: kUnknown
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016766866634, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 11279802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11212369, "index_size": 37113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 317238, "raw_average_key_size": 27, "raw_value_size": 11008980, "raw_average_value_size": 942, "num_data_blocks": 1401, "num_entries": 11680, "num_filter_entries": 11680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016766, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.867402) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 11279802 bytes
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.868718) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.4 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(11.7) write-amplify(5.5) OK, records in: 12194, records dropped: 514 output_compression: NoCompression
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.868749) EVENT_LOG_v1 {"time_micros": 1764016766868735, "job": 58, "event": "compaction_finished", "compaction_time_micros": 89767, "compaction_time_cpu_micros": 53042, "output_level": 6, "num_output_files": 1, "total_output_size": 11279802, "num_input_records": 12194, "num_output_records": 11680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016766869773, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016766873691, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.776827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.873786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.873792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.873795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.873798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:26.873801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:27.178+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:27 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:27.502+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:27 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:28.181+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:28 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:28.520+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:28 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:28 compute-0 ceph-mon[75677]: pgmap v1704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:29.225+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:29 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:29.475+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:29 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:30.273+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:30 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:30.427+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:30 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:30 compute-0 ceph-mon[75677]: pgmap v1705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:31.259+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:31 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:31.466+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:31 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:32.210+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:32 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:32.496+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:32 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2887 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:32 compute-0 ceph-mon[75677]: pgmap v1706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:32 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2887 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:32 compute-0 podman[290296]: 2025-11-24 20:39:32.860723674 +0000 UTC m=+0.084559733 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 20:39:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:33.194+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:33 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:33.542+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:33 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:34.149+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:34 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:34.582+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:34 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:34 compute-0 ceph-mon[75677]: pgmap v1707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:39:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:34 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:39:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
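
[annotation] The pg_autoscaler pass above logs, for each pool, its share of raw capacity (the 64411926528 in the effective_target_ratio lines matches the ~60 GiB total shown in the pgmap lines), its bias, a raw PG target, and a quantized result. The raw targets are consistent with usage_ratio x bias x 300, i.e. a budget of 3 OSDs x 100 target PGs per OSD, and the quantized value behaves like a power of two floored at a per-pool minimum — which is why tiny raw targets still quantize to 32, or 16 for the CephFS metadata pool, or 1 for .mgr. A minimal sketch reproducing the logged numbers; the 300-PG budget and the per-pool pg_num_min floors are assumptions inferred from these lines, not read from cluster configuration:

```python
import math

# Sketch of the arithmetic behind the pg_autoscaler lines above. The PG
# budget (3 OSDs x a target of 100 PGs per OSD) and the per-pool pg_num_min
# floors are assumptions inferred from the logged numbers.
def nearest_power_of_two(n: float) -> int:
    # Simplified rounding to a nearby power of two, floor of 1.
    if n <= 1:
        return 1
    lo = 2 ** math.floor(math.log2(n))
    hi = lo * 2
    return lo if (n - lo) < (hi - n) else hi

def pg_target(usage_ratio: float, bias: float, pg_num_min: int = 32,
              num_osds: int = 3, target_pg_per_osd: int = 100):
    raw = usage_ratio * bias * num_osds * target_pg_per_osd
    return raw, max(pg_num_min, nearest_power_of_two(raw))

# Ratios taken from the log lines above; outputs match the logged targets:
print(pg_target(0.0008637525843263658, 1.0))                 # vms  -> (~0.2591, 32)
print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # cephfs.cephfs.meta -> (~0.00061, 16)
print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # .mgr -> (~0.00216, 1)
```
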
Nov 24 20:39:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:35.173+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:35 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:35.544+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:35 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:36.131+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:36 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 20:39:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:36.507+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:36 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:36 compute-0 ceph-mon[75677]: pgmap v1708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 20:39:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:37.113+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:37 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:37.515+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:37 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2897 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:37 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2897 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
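
[annotation] Each SLOW_OPS health update above ages the same 32 blocked ops: 2897 s at 20:39:37, 2902 s at 20:39:42, and so on — the oldest op's age advances one second per wall-clock second, i.e. nothing is completing. A quick way to confirm that from an exported journal is to parse the updates and compare age growth to elapsed time. A minimal sketch; "journal.txt" is a hypothetical export of this log, and note the mon emits each update twice here (once via log_channel, once relayed), so expect paired entries:

```python
import re
from datetime import datetime

# Sketch: extract the mon's SLOW_OPS health updates from a journal excerpt
# and check whether the oldest blocked op is aging in lockstep with
# wall-clock time (a sign it is stuck, not merely slow).
PAT = re.compile(
    r'^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) \S+ ceph-mon\[\d+\]: '
    r'.*Health check update: (?P<n>\d+) slow ops, '
    r'oldest one blocked for (?P<age>\d+) sec')

def slow_ops_updates(lines, year=2025):
    # Syslog timestamps carry no year; taken from the log's context here.
    for line in lines:
        m = PAT.match(line)
        if m:
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            yield ts, int(m['n']), int(m['age'])

with open("journal.txt") as f:   # hypothetical export of this journal
    updates = list(slow_ops_updates(f))

for (t0, _, a0), (t1, n, a1) in zip(updates, updates[1:]):
    wall = (t1 - t0).total_seconds()
    # Age growing ~1 s per wall-clock second means the oldest op never completes.
    print(f"{t1:%H:%M:%S} {n} slow ops, age +{a1 - a0}s over {wall:.0f}s wall")
```
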
Nov 24 20:39:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:38.139+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:38 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:38.488+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:38 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:38 compute-0 ceph-mon[75677]: pgmap v1709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:39.166+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:39 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:39.467+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:39 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:40.186+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:40 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:40.504+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:40 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:39:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:39:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:39:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:39:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:39:40 compute-0 ceph-mon[75677]: pgmap v1710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:41.169+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:41 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:41.487+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:41 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:42.173+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:42 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:42.507+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:42 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2902 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:42 compute-0 ceph-mon[75677]: pgmap v1711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:43.164+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:43 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:43.490+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:43 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:43 compute-0 podman[290318]: 2025-11-24 20:39:43.871476763 +0000 UTC m=+0.097727916 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
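
[annotation] The podman health_status line above embeds the container's full kolla-style config_data as a Python-literal dict, including the healthcheck definition ('mount' pointing at the healthcheck directory, 'test' at /openstack/healthcheck). When auditing many such lines it is easier to parse that dict out than to read it inline. A minimal sketch using ast.literal_eval; "journal.txt" is again a hypothetical export:

```python
import ast

# Sketch: pull the config_data={...} dict out of a podman health_status
# journal line (as emitted above) and parse it as a Python literal.
def extract_config_data(line: str) -> dict:
    start = line.index("config_data=") + len("config_data=")
    depth, end = 0, start
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                end = i + 1
                break
    return ast.literal_eval(line[start:end])

with open("journal.txt") as f:   # hypothetical export of this journal
    for line in f:
        if " container health_status " in line and "config_data=" in line:
            cfg = extract_config_data(line)
            print(cfg["image"], cfg.get("healthcheck", {}).get("test"))
            # e.g. ...openstack-multipathd@sha256:... /openstack/healthcheck
```
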
Nov 24 20:39:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2902 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:44.177+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:44 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:44.485+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:44 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:44 compute-0 ceph-mon[75677]: pgmap v1712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:45.169+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:45 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:45.534+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:45 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:45 compute-0 sshd-session[290317]: Invalid user admin from 80.94.95.115 port 43592
Nov 24 20:39:45 compute-0 sshd-session[290317]: Connection closed by invalid user admin 80.94.95.115 port 43592 [preauth]
Nov 24 20:39:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:46.218+0000 7f1a67169640 -1 osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:46 compute-0 ceph-osd[89640]: osd.1 152 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:46.558+0000 7f2ca3ee7640 -1 osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:46 compute-0 ceph-osd[88624]: osd.0 152 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 24 20:39:46 compute-0 ceph-mon[75677]: pgmap v1713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 24 20:39:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 24 20:39:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:47.228+0000 7f1a67169640 -1 osd.1 153 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:47 compute-0 ceph-osd[89640]: osd.1 153 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:47.586+0000 7f2ca3ee7640 -1 osd.0 153 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:47 compute-0 ceph-osd[88624]: osd.0 153 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:47 compute-0 podman[290340]: 2025-11-24 20:39:47.877800624 +0000 UTC m=+0.109152740 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:39:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 24 20:39:47 compute-0 ceph-mon[75677]: osdmap e153: 3 total, 3 up, 3 in
Nov 24 20:39:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 24 20:39:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 24 20:39:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:48.209+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:48 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:48.591+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:48 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:48 compute-0 ceph-mon[75677]: osdmap e154: 3 total, 3 up, 3 in
Nov 24 20:39:48 compute-0 ceph-mon[75677]: pgmap v1716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:39:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:49.203+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:49 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:49.635+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:49 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 2.0 KiB/s wr, 10 op/s
Nov 24 20:39:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:50.236+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:50 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:50.677+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:50 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:51 compute-0 ceph-mon[75677]: pgmap v1717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 2.0 KiB/s wr, 10 op/s
Nov 24 20:39:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:51.229+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:51 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:51.720+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:51 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 24 20:39:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:52.272+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:52 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2907 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:52.681+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:52 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:53 compute-0 ceph-mon[75677]: pgmap v1718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 24 20:39:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2907 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:53.297+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:53 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:53.654+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:53 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 24 20:39:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:54.294+0000 7f1a67169640 -1 osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:54 compute-0 ceph-osd[89640]: osd.1 154 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:39:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:39:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:54.632+0000 7f2ca3ee7640 -1 osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:54 compute-0 ceph-osd[88624]: osd.0 154 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 24 20:39:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 24 20:39:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 24 20:39:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:55 compute-0 ceph-mon[75677]: pgmap v1719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 24 20:39:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:55.254+0000 7f1a67169640 -1 osd.1 155 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:55 compute-0 ceph-osd[89640]: osd.1 155 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:55.617+0000 7f2ca3ee7640 -1 osd.0 155 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:55 compute-0 ceph-osd[88624]: osd.0 155 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 24 20:39:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 24 20:39:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 24 20:39:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 6.2 KiB/s wr, 69 op/s
Nov 24 20:39:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:56 compute-0 ceph-mon[75677]: osdmap e155: 3 total, 3 up, 3 in
Nov 24 20:39:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:56.242+0000 7f1a67169640 -1 osd.1 156 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:56 compute-0 ceph-osd[89640]: osd.1 156 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:56.613+0000 7f2ca3ee7640 -1 osd.0 156 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:56 compute-0 ceph-osd[88624]: osd.0 156 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 24 20:39:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:57 compute-0 ceph-mon[75677]: osdmap e156: 3 total, 3 up, 3 in
Nov 24 20:39:57 compute-0 ceph-mon[75677]: pgmap v1722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 6.2 KiB/s wr, 69 op/s
Nov 24 20:39:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:57.202+0000 7f1a67169640 -1 osd.1 156 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:57 compute-0 ceph-osd[89640]: osd.1 156 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 24 20:39:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 24 20:39:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2917 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:39:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 24 20:39:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 24 20:39:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 24 20:39:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:57.647+0000 7f2ca3ee7640 -1 osd.0 156 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:57 compute-0 ceph-osd[88624]: osd.0 156 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.654212) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016797654261, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 681, "num_deletes": 251, "total_data_size": 603466, "memory_usage": 615552, "flush_reason": "Manual Compaction"}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016797766393, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 473521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48821, "largest_seqno": 49501, "table_properties": {"data_size": 470199, "index_size": 1102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9654, "raw_average_key_size": 21, "raw_value_size": 462949, "raw_average_value_size": 1035, "num_data_blocks": 48, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016767, "oldest_key_time": 1764016767, "file_creation_time": 1764016797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 112258 microseconds, and 3244 cpu microseconds.
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.766471) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 473521 bytes OK
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.766504) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.793891) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.793966) EVENT_LOG_v1 {"time_micros": 1764016797793949, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.794002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 599720, prev total WAL file size 599720, number of live WAL files 2.
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.795037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323531' seq:72057594037927935, type:22 .. '6D6772737461740031353032' seq:0, type:0; will stop at (end)
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(462KB)], [101(10MB)]
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016797795109, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11753323, "oldest_snapshot_seqno": -1}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 11622 keys, 8531767 bytes, temperature: kUnknown
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016797903126, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8531767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8469139, "index_size": 32500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29061, "raw_key_size": 316661, "raw_average_key_size": 27, "raw_value_size": 8271058, "raw_average_value_size": 711, "num_data_blocks": 1208, "num_entries": 11622, "num_filter_entries": 11622, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.903564) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8531767 bytes
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.905192) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.7 rd, 78.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(42.8) write-amplify(18.0) OK, records in: 12127, records dropped: 505 output_compression: NoCompression
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.905220) EVENT_LOG_v1 {"time_micros": 1764016797905207, "job": 60, "event": "compaction_finished", "compaction_time_micros": 108176, "compaction_time_cpu_micros": 50581, "output_level": 6, "num_output_files": 1, "total_output_size": 8531767, "num_input_records": 12127, "num_output_records": 11622, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016797905485, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016797909007, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.794944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.909115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.909125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.909128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.909131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:39:57.909135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:39:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.7 KiB/s wr, 38 op/s
Nov 24 20:39:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:58.237+0000 7f1a67169640 -1 osd.1 158 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:58 compute-0 ceph-osd[89640]: osd.1 158 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:58 compute-0 ceph-mon[75677]: osdmap e157: 3 total, 3 up, 3 in
Nov 24 20:39:58 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2917 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:39:58 compute-0 ceph-mon[75677]: osdmap e158: 3 total, 3 up, 3 in
Nov 24 20:39:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 24 20:39:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:58.654+0000 7f2ca3ee7640 -1 osd.0 158 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:58 compute-0 ceph-osd[88624]: osd.0 158 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 24 20:39:58 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 24 20:39:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:39:59.280+0000 7f1a67169640 -1 osd.1 159 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:59 compute-0 ceph-osd[89640]: osd.1 159 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:39:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:59 compute-0 ceph-mon[75677]: pgmap v1725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.7 KiB/s wr, 38 op/s
Nov 24 20:39:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:39:59 compute-0 ceph-mon[75677]: osdmap e159: 3 total, 3 up, 3 in
Nov 24 20:39:59 compute-0 ceph-osd[88624]: osd.0 159 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:39:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:39:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:39:59.617+0000 7f2ca3ee7640 -1 osd.0 159 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.7 KiB/s wr, 52 op/s
Nov 24 20:40:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 24 20:40:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:00.326+0000 7f1a67169640 -1 osd.1 159 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:00 compute-0 ceph-osd[89640]: osd.1 159 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 24 20:40:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 24 20:40:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:00 compute-0 ceph-osd[88624]: osd.0 160 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:00.611+0000 7f2ca3ee7640 -1 osd.0 160 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:01.376+0000 7f1a67169640 -1 osd.1 160 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:01 compute-0 ceph-osd[89640]: osd.1 160 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 24 20:40:01 compute-0 ceph-mon[75677]: pgmap v1727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.7 KiB/s wr, 52 op/s
Nov 24 20:40:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:01 compute-0 ceph-mon[75677]: osdmap e160: 3 total, 3 up, 3 in
Nov 24 20:40:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 24 20:40:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 24 20:40:01 compute-0 ceph-osd[88624]: osd.0 161 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:40:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:01.571+0000 7f2ca3ee7640 -1 osd.0 161 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 14 KiB/s wr, 125 op/s
Nov 24 20:40:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:02.367+0000 7f1a67169640 -1 osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:02 compute-0 ceph-osd[89640]: osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:02 compute-0 ceph-mon[75677]: osdmap e161: 3 total, 3 up, 3 in
Nov 24 20:40:02 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 20:40:02 compute-0 ceph-mon[75677]: pgmap v1730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 14 KiB/s wr, 125 op/s
Nov 24 20:40:02 compute-0 ceph-osd[88624]: osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:02.612+0000 7f2ca3ee7640 -1 osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2922 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:03.415+0000 7f1a67169640 -1 osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:03 compute-0 ceph-osd[89640]: osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2922 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:03 compute-0 sudo[290367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:03 compute-0 sudo[290367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:03 compute-0 sudo[290367]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:03.633+0000 7f2ca3ee7640 -1 osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:03 compute-0 ceph-osd[88624]: osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:03 compute-0 sudo[290393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:40:03 compute-0 sudo[290393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:03 compute-0 sudo[290393]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:03 compute-0 podman[290391]: 2025-11-24 20:40:03.728102448 +0000 UTC m=+0.094104298 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:40:03 compute-0 sudo[290437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:03 compute-0 sudo[290437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:03 compute-0 sudo[290437]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:03 compute-0 sudo[290462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 20:40:03 compute-0 sudo[290462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 13 KiB/s wr, 104 op/s
Nov 24 20:40:04 compute-0 sudo[290462]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:40:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:04 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:40:04 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:04 compute-0 sudo[290507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:04 compute-0 sudo[290507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:04 compute-0 sudo[290507]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:04 compute-0 sudo[290532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:40:04 compute-0 sudo[290532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:04 compute-0 sudo[290532]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:04 compute-0 ceph-osd[89640]: osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:04.438+0000 7f1a67169640 -1 osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:04 compute-0 sudo[290557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:04 compute-0 sudo[290557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:04 compute-0 sudo[290557]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:04 compute-0 sudo[290582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:40:04 compute-0 sudo[290582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:04 compute-0 ceph-mon[75677]: pgmap v1731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 13 KiB/s wr, 104 op/s
Nov 24 20:40:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:04.608+0000 7f2ca3ee7640 -1 osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:04 compute-0 ceph-osd[88624]: osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:05 compute-0 sudo[290582]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a8bd7f75-93dc-4589-9be8-33d624600820 does not exist
Nov 24 20:40:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5870edb6-991e-490a-8856-b6a777167936 does not exist
Nov 24 20:40:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 361e1bcf-10b0-451f-91a6-6e180f3bd71b does not exist
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:40:05 compute-0 sudo[290639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:05 compute-0 sudo[290639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:05 compute-0 sudo[290639]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:05 compute-0 sudo[290664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:40:05 compute-0 sudo[290664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:05 compute-0 sudo[290664]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:05 compute-0 ceph-osd[89640]: osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:05.448+0000 7f1a67169640 -1 osd.1 161 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:05 compute-0 sudo[290689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:05 compute-0 sudo[290689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:05 compute-0 sudo[290689]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:40:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:05.617+0000 7f2ca3ee7640 -1 osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:05 compute-0 ceph-osd[88624]: osd.0 161 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 24 20:40:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 24 20:40:05 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 24 20:40:05 compute-0 sudo[290714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:40:05 compute-0 sudo[290714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.153314285 +0000 UTC m=+0.068485963 container create c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:40:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 13 KiB/s wr, 119 op/s
Nov 24 20:40:06 compute-0 systemd[1]: Started libpod-conmon-c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231.scope.
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.126716764 +0000 UTC m=+0.041888492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:40:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.257336268 +0000 UTC m=+0.172507956 container init c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.265864406 +0000 UTC m=+0.181036094 container start c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.269504244 +0000 UTC m=+0.184675952 container attach c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:40:06 compute-0 vigilant_faraday[290797]: 167 167
Nov 24 20:40:06 compute-0 systemd[1]: libpod-c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231.scope: Deactivated successfully.
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.274276491 +0000 UTC m=+0.189448179 container died c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:40:06 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-047c34e7457fc00a512e2577208dc280d75cc72c27807d3e2ea64e83cd58434e-merged.mount: Deactivated successfully.
Nov 24 20:40:06 compute-0 podman[290781]: 2025-11-24 20:40:06.332813667 +0000 UTC m=+0.247985355 container remove c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:40:06 compute-0 systemd[1]: libpod-conmon-c919e69c7762784f951d27cf28f718ceb337a53f3b8912b0fef63f7eaa535231.scope: Deactivated successfully.
Nov 24 20:40:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:06.436+0000 7f1a67169640 -1 osd.1 162 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:06 compute-0 ceph-osd[89640]: osd.1 162 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:06.624+0000 7f2ca3ee7640 -1 osd.0 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:06 compute-0 ceph-osd[88624]: osd.0 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 24 20:40:06 compute-0 podman[290824]: 2025-11-24 20:40:06.632948915 +0000 UTC m=+0.095278909 container create caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:40:06 compute-0 podman[290824]: 2025-11-24 20:40:06.577855742 +0000 UTC m=+0.040185796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:40:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:06 compute-0 ceph-mon[75677]: osdmap e162: 3 total, 3 up, 3 in
Nov 24 20:40:06 compute-0 ceph-mon[75677]: pgmap v1733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 13 KiB/s wr, 119 op/s
Nov 24 20:40:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 24 20:40:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 24 20:40:06 compute-0 systemd[1]: Started libpod-conmon-caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68.scope.
Nov 24 20:40:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96aa504d48ac0799106df998f61673a02bcd1f1b9b16d59b41d667ab4987e9a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96aa504d48ac0799106df998f61673a02bcd1f1b9b16d59b41d667ab4987e9a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96aa504d48ac0799106df998f61673a02bcd1f1b9b16d59b41d667ab4987e9a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96aa504d48ac0799106df998f61673a02bcd1f1b9b16d59b41d667ab4987e9a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96aa504d48ac0799106df998f61673a02bcd1f1b9b16d59b41d667ab4987e9a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:06 compute-0 podman[290824]: 2025-11-24 20:40:06.787359816 +0000 UTC m=+0.249689860 container init caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:40:06 compute-0 podman[290824]: 2025-11-24 20:40:06.798722181 +0000 UTC m=+0.261052155 container start caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:40:06 compute-0 podman[290824]: 2025-11-24 20:40:06.812657343 +0000 UTC m=+0.274987348 container attach caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_varahamihira, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:40:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:07.480+0000 7f1a67169640 -1 osd.1 163 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:07 compute-0 ceph-osd[89640]: osd.1 163 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:07.608+0000 7f2ca3ee7640 -1 osd.0 163 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:07 compute-0 ceph-osd[88624]: osd.0 163 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2927 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:07 compute-0 ceph-mon[75677]: osdmap e163: 3 total, 3 up, 3 in
Nov 24 20:40:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:07 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2927 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
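Annotation: the SLOW_OPS numbers above are internally consistent. The mon's "32 slow ops" is the sum of the two per-OSD reports (osd.1: 19 on pool default.rgw.log, osd.0: 13 on pool vms), and the oldest op has been blocked for roughly 49 minutes. A minimal cross-check:

```python
# Cross-check of the health update against the per-OSD reports above.
per_osd = {"osd.1": 19, "osd.0": 13}
assert sum(per_osd.values()) == 32  # matches "32 slow ops"

blocked_sec = 2927  # "oldest one blocked for 2927 sec"
print(f"oldest blocked op: {blocked_sec / 60:.1f} min")  # about 48.8 min
```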
Nov 24 20:40:07 compute-0 pedantic_varahamihira[290840]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:40:07 compute-0 pedantic_varahamihira[290840]: --> relative data size: 1.0
Nov 24 20:40:07 compute-0 pedantic_varahamihira[290840]: --> All data devices are unavailable
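Annotation: the pedantic_varahamihira container is cephadm running a ceph-volume report against the three LVM-backed data devices (the exact subcommand is not captured in this log). "All data devices are unavailable" is expected here rather than a failure: each LV already carries ceph.* tags and backs an existing OSD, so there is nothing new to deploy. An illustrative sketch of that availability filter (hypothetical logic, not ceph-volume's actual code):

```python
# Hypothetical availability filter: a device whose LV already carries
# ceph.* tags belongs to an existing OSD, so a deployment report skips
# it. The three LVs here back osd.0, osd.1 and osd.2 (see the lvm list
# JSON further below), hence "All data devices are unavailable".
devices = [
    {"path": "/dev/ceph_vg0/ceph_lv0", "lv_tags": {"ceph.osd_id": "0"}},
    {"path": "/dev/ceph_vg1/ceph_lv1", "lv_tags": {"ceph.osd_id": "1"}},
    {"path": "/dev/ceph_vg2/ceph_lv2", "lv_tags": {"ceph.osd_id": "2"}},
]
available = [d["path"] for d in devices if not d["lv_tags"]]
print(available or "all data devices are unavailable")
```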
Nov 24 20:40:07 compute-0 systemd[1]: libpod-caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68.scope: Deactivated successfully.
Nov 24 20:40:07 compute-0 podman[290824]: 2025-11-24 20:40:07.888205114 +0000 UTC m=+1.350535118 container died caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:40:07 compute-0 systemd[1]: libpod-caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68.scope: Consumed 1.033s CPU time.
Nov 24 20:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-96aa504d48ac0799106df998f61673a02bcd1f1b9b16d59b41d667ab4987e9a6-merged.mount: Deactivated successfully.
Nov 24 20:40:08 compute-0 podman[290824]: 2025-11-24 20:40:08.054085402 +0000 UTC m=+1.516415416 container remove caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:40:08 compute-0 systemd[1]: libpod-conmon-caf386ce91b41e7bb08f53c78dc6fb354e555f65608cee7d7e987465c0d15e68.scope: Deactivated successfully.
Nov 24 20:40:08 compute-0 sudo[290714]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.0 KiB/s wr, 54 op/s
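Annotation: the mgr pgmap summaries repeat every second or two; the per-state counts always add up to the pg total (here 2 laggy + 303 clean = 305). A small parse of the line above (a sketch; the string is copied from the log):

```python
# Parse the "pgmap vN: P pgs: ..." cluster-log summary and confirm the
# per-state counts sum to the pg total.
import re

line = ("pgmap v1735: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
        "169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; "
        "39 KiB/s rd, 5.0 KiB/s wr, 54 op/s")
total = int(re.search(r"(\d+) pgs:", line).group(1))
states = {m.group(2): int(m.group(1))
          for m in re.finditer(r"(\d+) ([a-z+]+)(?:,|;)", line)}
assert sum(states.values()) == total
print(states)  # {'active+clean+laggy': 2, 'active+clean': 303}
```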
Nov 24 20:40:08 compute-0 sudo[290881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:08 compute-0 sudo[290881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:08 compute-0 sudo[290881]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:08 compute-0 sudo[290906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:40:08 compute-0 sudo[290906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:08 compute-0 sudo[290906]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:08 compute-0 sudo[290931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:08 compute-0 sudo[290931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:08 compute-0 sudo[290931]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:08.441+0000 7f1a67169640 -1 osd.1 163 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:08 compute-0 ceph-osd[89640]: osd.1 163 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:08 compute-0 sudo[290956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:40:08 compute-0 sudo[290956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:08.581+0000 7f2ca3ee7640 -1 osd.0 163 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:08 compute-0 ceph-osd[88624]: osd.0 163 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 24 20:40:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 24 20:40:08 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 24 20:40:08 compute-0 ceph-mon[75677]: pgmap v1735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.0 KiB/s wr, 54 op/s
Nov 24 20:40:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:08 compute-0 podman[291023]: 2025-11-24 20:40:08.976425486 +0000 UTC m=+0.054076088 container create fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:40:09 compute-0 systemd[1]: Started libpod-conmon-fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139.scope.
Nov 24 20:40:09 compute-0 podman[291023]: 2025-11-24 20:40:08.953195375 +0000 UTC m=+0.030846067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:40:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:40:09 compute-0 podman[291023]: 2025-11-24 20:40:09.070011279 +0000 UTC m=+0.147661921 container init fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:40:09 compute-0 podman[291023]: 2025-11-24 20:40:09.079454242 +0000 UTC m=+0.157104884 container start fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:40:09 compute-0 podman[291023]: 2025-11-24 20:40:09.083601592 +0000 UTC m=+0.161252234 container attach fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:40:09 compute-0 stupefied_wright[291039]: 167 167
Nov 24 20:40:09 compute-0 systemd[1]: libpod-fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139.scope: Deactivated successfully.
Nov 24 20:40:09 compute-0 podman[291023]: 2025-11-24 20:40:09.086852309 +0000 UTC m=+0.164502921 container died fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-8964febbbcf11fa9138754910422cfad29a39705ee5b2f522d4d6fd2b6bad041-merged.mount: Deactivated successfully.
Nov 24 20:40:09 compute-0 podman[291023]: 2025-11-24 20:40:09.139325183 +0000 UTC m=+0.216975785 container remove fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:40:09 compute-0 systemd[1]: libpod-conmon-fdb6cf51eff7b07deb982130ed96e3b6290555e3442234ce746dafa6f6a27139.scope: Deactivated successfully.
Nov 24 20:40:09 compute-0 podman[291061]: 2025-11-24 20:40:09.371415192 +0000 UTC m=+0.059548074 container create f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 20:40:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:40:09.397 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:40:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:40:09.398 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:40:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:40:09.398 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
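Annotation: the three oslo_concurrency DEBUG lines are the standard acquire/acquired/released trio that lockutils logs around a synchronized method; neutron's ProcessMonitor serialises its child-process check this way. A minimal sketch of the pattern (assuming oslo.concurrency is installed; this is the general decorator usage, not neutron's actual source):

```python
# Sketch of the locking pattern behind the DEBUG trio above. The
# decorator logs "Acquiring lock", "acquired ... waited Ns" and
# "released ... held Ns" around each call, as seen in the log.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    pass  # body elided; the lock only serialises concurrent monitor runs

_check_child_processes()
```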
Nov 24 20:40:09 compute-0 systemd[1]: Started libpod-conmon-f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e.scope.
Nov 24 20:40:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:09.415+0000 7f1a67169640 -1 osd.1 164 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:09 compute-0 ceph-osd[89640]: osd.1 164 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:09 compute-0 podman[291061]: 2025-11-24 20:40:09.340778072 +0000 UTC m=+0.028911074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:40:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02fcabcbbffc16e6ad6a835471260c25373e3428ab1d0fc2e974186bb29d5dfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02fcabcbbffc16e6ad6a835471260c25373e3428ab1d0fc2e974186bb29d5dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02fcabcbbffc16e6ad6a835471260c25373e3428ab1d0fc2e974186bb29d5dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02fcabcbbffc16e6ad6a835471260c25373e3428ab1d0fc2e974186bb29d5dfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:09 compute-0 podman[291061]: 2025-11-24 20:40:09.471829798 +0000 UTC m=+0.159962720 container init f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:40:09 compute-0 podman[291061]: 2025-11-24 20:40:09.478689761 +0000 UTC m=+0.166822663 container start f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:40:09 compute-0 podman[291061]: 2025-11-24 20:40:09.482161354 +0000 UTC m=+0.170294266 container attach f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 20:40:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:09.609+0000 7f2ca3ee7640 -1 osd.0 164 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:09 compute-0 ceph-osd[88624]: osd.0 164 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:09 compute-0 ceph-mon[75677]: osdmap e164: 3 total, 3 up, 3 in
Nov 24 20:40:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 6.5 KiB/s wr, 99 op/s
Nov 24 20:40:10 compute-0 nifty_shamir[291077]: {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:     "0": [
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:         {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "devices": [
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "/dev/loop3"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             ],
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_name": "ceph_lv0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_size": "21470642176",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "name": "ceph_lv0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "tags": {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cluster_name": "ceph",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.crush_device_class": "",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.encrypted": "0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osd_id": "0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.type": "block",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.vdo": "0"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             },
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "type": "block",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "vg_name": "ceph_vg0"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:         }
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:     ],
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:     "1": [
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:         {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "devices": [
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "/dev/loop4"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             ],
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_name": "ceph_lv1",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_size": "21470642176",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "name": "ceph_lv1",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "tags": {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cluster_name": "ceph",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.crush_device_class": "",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.encrypted": "0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osd_id": "1",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.type": "block",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.vdo": "0"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             },
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "type": "block",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "vg_name": "ceph_vg1"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:         }
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:     ],
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:     "2": [
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:         {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "devices": [
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "/dev/loop5"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             ],
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_name": "ceph_lv2",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_size": "21470642176",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "name": "ceph_lv2",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "tags": {
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.cluster_name": "ceph",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.crush_device_class": "",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.encrypted": "0",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osd_id": "2",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.type": "block",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:                 "ceph.vdo": "0"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             },
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "type": "block",
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:             "vg_name": "ceph_vg2"
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:         }
Nov 24 20:40:10 compute-0 nifty_shamir[291077]:     ]
Nov 24 20:40:10 compute-0 nifty_shamir[291077]: }
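Annotation: the JSON above is the output of the `ceph-volume ... lvm list --format json` run issued via sudo at 20:40:08; the top-level keys are OSD ids, each mapping to the LV that backs that OSD. A minimal summary sketch, assuming the block has been saved to a hypothetical lvm_list.json:

```python
# Summarise `ceph-volume lvm list --format json`: map each OSD id to its
# LV path, backing device(s) and osd_fsid tag.
import json

with open("lvm_list.json") as f:
    lvm = json.load(f)

for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in entries:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']})")
```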
Nov 24 20:40:10 compute-0 systemd[1]: libpod-f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e.scope: Deactivated successfully.
Nov 24 20:40:10 compute-0 podman[291061]: 2025-11-24 20:40:10.332118612 +0000 UTC m=+1.020251494 container died f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-02fcabcbbffc16e6ad6a835471260c25373e3428ab1d0fc2e974186bb29d5dfa-merged.mount: Deactivated successfully.
Nov 24 20:40:10 compute-0 podman[291061]: 2025-11-24 20:40:10.4535518 +0000 UTC m=+1.141684712 container remove f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:40:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:10.460+0000 7f1a67169640 -1 osd.1 164 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:10 compute-0 ceph-osd[89640]: osd.1 164 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:10 compute-0 systemd[1]: libpod-conmon-f70eaed500efc5688b609b114018a2c6da4ab5d482bad58f5f7915a2d68a221e.scope: Deactivated successfully.
Nov 24 20:40:10 compute-0 sudo[290956]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:10.563+0000 7f2ca3ee7640 -1 osd.0 164 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:10 compute-0 ceph-osd[88624]: osd.0 164 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:10 compute-0 sudo[291098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:10 compute-0 sudo[291098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:10 compute-0 sudo[291098]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 24 20:40:10 compute-0 ceph-mon[75677]: pgmap v1737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 6.5 KiB/s wr, 99 op/s
Nov 24 20:40:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:10 compute-0 sudo[291123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:40:10 compute-0 sudo[291123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 24 20:40:10 compute-0 sudo[291123]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:10 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 24 20:40:10 compute-0 sudo[291148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:10 compute-0 sudo[291148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:10 compute-0 sudo[291148]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:10 compute-0 sudo[291174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:40:10 compute-0 sudo[291174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:11.427+0000 7f1a67169640 -1 osd.1 165 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:11 compute-0 ceph-osd[89640]: osd.1 165 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.381861823 +0000 UTC m=+0.039208400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.516182157 +0000 UTC m=+0.173528694 container create c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:40:11 compute-0 systemd[1]: Started libpod-conmon-c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0.scope.
Nov 24 20:40:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:40:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:11.613+0000 7f2ca3ee7640 -1 osd.0 165 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:11 compute-0 ceph-osd[88624]: osd.0 165 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.629315133 +0000 UTC m=+0.286661680 container init c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.637375488 +0000 UTC m=+0.294722025 container start c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:40:11 compute-0 elastic_dirac[291254]: 167 167
Nov 24 20:40:11 compute-0 systemd[1]: libpod-c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0.scope: Deactivated successfully.
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.648354412 +0000 UTC m=+0.305700929 container attach c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.650000436 +0000 UTC m=+0.307346933 container died c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:40:11 compute-0 ceph-mon[75677]: osdmap e165: 3 total, 3 up, 3 in
Nov 24 20:40:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f63e415172127766fa0a0a665deecf8e24ccd58d062b146c5d49fa1051174140-merged.mount: Deactivated successfully.
Nov 24 20:40:11 compute-0 podman[291238]: 2025-11-24 20:40:11.806939444 +0000 UTC m=+0.464285971 container remove c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:40:11 compute-0 systemd[1]: libpod-conmon-c416f627e47ae8619fbd15a6c42882d09f1bf46ee57a61eb87453b9a06b49fc0.scope: Deactivated successfully.
Nov 24 20:40:12 compute-0 podman[291280]: 2025-11-24 20:40:12.017568398 +0000 UTC m=+0.058785163 container create 21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:40:12 compute-0 systemd[1]: Started libpod-conmon-21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0.scope.
Nov 24 20:40:12 compute-0 podman[291280]: 2025-11-24 20:40:11.995072376 +0000 UTC m=+0.036289121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:40:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731fda5a995efe65ac3c7667f4bfe47e3eb9086cf502f671562fc7a3c2f530b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731fda5a995efe65ac3c7667f4bfe47e3eb9086cf502f671562fc7a3c2f530b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731fda5a995efe65ac3c7667f4bfe47e3eb9086cf502f671562fc7a3c2f530b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731fda5a995efe65ac3c7667f4bfe47e3eb9086cf502f671562fc7a3c2f530b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:40:12 compute-0 podman[291280]: 2025-11-24 20:40:12.147159335 +0000 UTC m=+0.188376070 container init 21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:40:12 compute-0 podman[291280]: 2025-11-24 20:40:12.161045927 +0000 UTC m=+0.202262662 container start 21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:40:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 7.0 KiB/s wr, 108 op/s
Nov 24 20:40:12 compute-0 podman[291280]: 2025-11-24 20:40:12.174790385 +0000 UTC m=+0.216007150 container attach 21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leavitt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:40:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:12.443+0000 7f1a67169640 -1 osd.1 165 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:12 compute-0 ceph-osd[89640]: osd.1 165 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:12.580+0000 7f2ca3ee7640 -1 osd.0 165 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:12 compute-0 ceph-osd[88624]: osd.0 165 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:40:12.593 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:40:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:40:12.596 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:40:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 24 20:40:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:12 compute-0 ceph-mon[75677]: pgmap v1739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 7.0 KiB/s wr, 108 op/s
Nov 24 20:40:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 24 20:40:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
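The repeating SLOW_OPS entries above are the monitor's aggregation of the per-OSD get_health_metrics reports from osd.0 and osd.1. A minimal sketch of pulling the same health data in structured form, assuming a local admin keyring and `ceph` in PATH on this node:

    import json
    import subprocess

    # Structured form of the "Health check update: ... (SLOW_OPS)" lines above.
    raw = subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"])
    health = json.loads(raw)

    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "32 slow ops, oldest one blocked for 2932 sec, ..."
        print(slow["summary"]["message"])
        for item in slow.get("detail", []):
            print("  ", item["message"])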
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]: {
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "osd_id": 2,
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "type": "bluestore"
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:     },
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "osd_id": 1,
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "type": "bluestore"
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:     },
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "osd_id": 0,
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:         "type": "bluestore"
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]:     }
Nov 24 20:40:13 compute-0 mystifying_leavitt[291297]: }
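The JSON printed by the mystifying_leavitt container maps each OSD uuid to its metadata (ceph_fsid, device, osd_id, type); the shape matches what `ceph-volume raw list` emits. A self-contained parsing sketch, with one entry trimmed from the output above as sample input:

    import json

    # One entry trimmed from the container output above; the full map has
    # one key per OSD uuid.
    raw_json = """
    {
        "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
            "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
            "type": "bluestore"
        }
    }
    """

    inventory = json.loads(raw_json)
    for osd_uuid, meta in sorted(inventory.items(),
                                 key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")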
Nov 24 20:40:13 compute-0 systemd[1]: libpod-21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0.scope: Deactivated successfully.
Nov 24 20:40:13 compute-0 systemd[1]: libpod-21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0.scope: Consumed 1.116s CPU time.
Nov 24 20:40:13 compute-0 podman[291331]: 2025-11-24 20:40:13.347395902 +0000 UTC m=+0.032267713 container died 21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:40:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:13.424+0000 7f1a67169640 -1 osd.1 166 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:13 compute-0 ceph-osd[89640]: osd.1 166 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:13.549+0000 7f2ca3ee7640 -1 osd.0 166 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:13 compute-0 ceph-osd[88624]: osd.0 166 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-731fda5a995efe65ac3c7667f4bfe47e3eb9086cf502f671562fc7a3c2f530b4-merged.mount: Deactivated successfully.
Nov 24 20:40:13 compute-0 podman[291331]: 2025-11-24 20:40:13.876587249 +0000 UTC m=+0.561459050 container remove 21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:40:13 compute-0 systemd[1]: libpod-conmon-21cd96fbd5f18530ca5df4184ed4055586b5dc27cce1584fe05fb961954869d0.scope: Deactivated successfully.
Nov 24 20:40:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:13 compute-0 ceph-mon[75677]: osdmap e166: 3 total, 3 up, 3 in
Nov 24 20:40:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:13 compute-0 sudo[291174]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:40:13 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:40:14 compute-0 podman[291346]: 2025-11-24 20:40:14.031847543 +0000 UTC m=+0.103309425 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:40:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:14 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2f0ab9fd-76ea-438b-9391-4a8db2fd47f8 does not exist
Nov 24 20:40:14 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2004d49c-386a-4dae-8c69-fe2b4ee2099b does not exist
Nov 24 20:40:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 141 op/s
Nov 24 20:40:14 compute-0 sudo[291366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:40:14 compute-0 sudo[291366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:14 compute-0 sudo[291366]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:14 compute-0 sudo[291391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:40:14 compute-0 sudo[291391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:40:14 compute-0 sudo[291391]: pam_unix(sudo:session): session closed for user root
Nov 24 20:40:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:14.408+0000 7f1a67169640 -1 osd.1 166 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:14 compute-0 ceph-osd[89640]: osd.1 166 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:14.585+0000 7f2ca3ee7640 -1 osd.0 166 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:14 compute-0 ceph-osd[88624]: osd.0 166 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:40:15 compute-0 ceph-mon[75677]: pgmap v1741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 9.0 KiB/s wr, 141 op/s
Nov 24 20:40:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 24 20:40:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 24 20:40:15 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 24 20:40:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:15.423+0000 7f1a67169640 -1 osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:15 compute-0 ceph-osd[89640]: osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:15.632+0000 7f2ca3ee7640 -1 osd.0 167 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:15 compute-0 ceph-osd[88624]: osd.0 167 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 5 ])
Nov 24 20:40:16 compute-0 ceph-mon[75677]: osdmap e167: 3 total, 3 up, 3 in
Nov 24 20:40:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 8.3 KiB/s wr, 167 op/s
Nov 24 20:40:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:16.435+0000 7f1a67169640 -1 osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:16 compute-0 ceph-osd[89640]: osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:40:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610516645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:40:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:40:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610516645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
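The two audited commands from client.openstack (`df` and `osd pool get-quota` on the volumes pool) are how an OpenStack client sizes available capacity. A sketch reproducing the same two mon queries, assuming admin access from this node; the exact stats keys may vary by Ceph release:

    import json
    import subprocess

    def mon_cmd(*args):
        # Issue the same mon commands the audit log shows being dispatched.
        return json.loads(subprocess.check_output(
            ["ceph", *args, "--format", "json"]))

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")

    pool = next(p for p in df["pools"] if p["name"] == "volumes")
    print("volumes stored bytes:", pool["stats"]["stored"])
    print("quota_max_bytes:", quota["quota_max_bytes"])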
Nov 24 20:40:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:16.664+0000 7f2ca3ee7640 -1 osd.0 167 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:16 compute-0 ceph-osd[88624]: osd.0 167 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:17 compute-0 ceph-mon[75677]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 5 ])
Nov 24 20:40:17 compute-0 ceph-mon[75677]: pgmap v1743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 8.3 KiB/s wr, 167 op/s
Nov 24 20:40:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/610516645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:40:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/610516645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:40:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:17.469+0000 7f1a67169640 -1 osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:17 compute-0 ceph-osd[89640]: osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 24 20:40:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 24 20:40:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 24 20:40:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:17.657+0000 7f2ca3ee7640 -1 osd.0 167 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:17 compute-0 ceph-osd[88624]: osd.0 167 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 5.5 KiB/s wr, 109 op/s
Nov 24 20:40:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:18 compute-0 ceph-mon[75677]: osdmap e168: 3 total, 3 up, 3 in
Nov 24 20:40:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:18.446+0000 7f1a67169640 -1 osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:18 compute-0 ceph-osd[89640]: osd.1 167 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:18.649+0000 7f2ca3ee7640 -1 osd.0 167 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:18 compute-0 ceph-osd[88624]: osd.0 167 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:18 compute-0 podman[291416]: 2025-11-24 20:40:18.922776218 +0000 UTC m=+0.144083575 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 24 20:40:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:19 compute-0 ceph-mon[75677]: pgmap v1745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 5.5 KiB/s wr, 109 op/s
Nov 24 20:40:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:19.409+0000 7f1a67169640 -1 osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:19 compute-0 ceph-osd[89640]: osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:19.611+0000 7f2ca3ee7640 -1 osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:19 compute-0 ceph-osd[88624]: osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 89 op/s
Nov 24 20:40:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:20.445+0000 7f1a67169640 -1 osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:20 compute-0 ceph-osd[89640]: osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:20.575+0000 7f2ca3ee7640 -1 osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:20 compute-0 ceph-osd[88624]: osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:20 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:40:20.600 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
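The DbSetCommand above is the metadata agent acknowledging nb_cfg=15 by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row's external_ids. A sketch reading that ack back out, assuming ovn-sbctl on this node can reach the southbound DB:

    import json
    import subprocess

    # ovsdb tools emit {"headings": [...], "data": [...]} in JSON mode.
    out = subprocess.check_output(
        ["ovn-sbctl", "--format=json", "list", "Chassis_Private"])
    table = json.loads(out)

    idx = table["headings"].index("external_ids")
    for row in table["data"]:
        # external_ids arrives as an OVSDB map: ["map", [[key, value], ...]]
        print(row[idx])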
Nov 24 20:40:21 compute-0 ceph-mon[75677]: pgmap v1746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.9 KiB/s wr, 89 op/s
Nov 24 20:40:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:21.455+0000 7f1a67169640 -1 osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:21 compute-0 ceph-osd[89640]: osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:21.601+0000 7f2ca3ee7640 -1 osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:21 compute-0 ceph-osd[88624]: osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.0 KiB/s wr, 58 op/s
Nov 24 20:40:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:22.488+0000 7f1a67169640 -1 osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:22 compute-0 ceph-osd[89640]: osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:22.610+0000 7f2ca3ee7640 -1 osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:22 compute-0 ceph-osd[88624]: osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 24 20:40:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 24 20:40:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 24 20:40:23 compute-0 ceph-mon[75677]: pgmap v1747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.0 KiB/s wr, 58 op/s
Nov 24 20:40:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:23 compute-0 ceph-mon[75677]: osdmap e169: 3 total, 3 up, 3 in
Nov 24 20:40:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:23.451+0000 7f1a67169640 -1 osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:23 compute-0 ceph-osd[89640]: osd.1 168 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:23.572+0000 7f2ca3ee7640 -1 osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:23 compute-0 ceph-osd[88624]: osd.0 168 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 1 op/s
Nov 24 20:40:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:24 compute-0 ceph-mon[75677]: pgmap v1749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 1 op/s
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:40:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:24.443+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:24 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:40:24
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr']
Nov 24 20:40:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
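This balancer pass ran in upmap mode over 11 pools and prepared 0 of a possible 10 changes, i.e. the PG distribution already satisfied the max-misplaced threshold of 0.05. A minimal check of the same state, assuming the balancer module is enabled:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print("active:", status["active"], "mode:", status["mode"])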
Nov 24 20:40:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:24.615+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:24 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:25.409+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:25 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:25.581+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:25 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 841 B/s rd, 360 B/s wr, 1 op/s
Nov 24 20:40:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:26 compute-0 ceph-mon[75677]: pgmap v1750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 841 B/s rd, 360 B/s wr, 1 op/s
Nov 24 20:40:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:26.424+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:26 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:26.574+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:26 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:27.420+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:27 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:27.605+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:27 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2947 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
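The periodic _set_new_cache_sizes lines are the monitor autotuning its RocksDB and inc/full osdmap caches against its memory target. To see the effective target driving those numbers (a sketch, assuming admin access):

    import subprocess

    # The configured mon memory target that the cache autotuner divides up.
    target = subprocess.check_output(
        ["ceph", "config", "get", "mon", "mon_memory_target"])
    print(target.decode().strip())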
Nov 24 20:40:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Nov 24 20:40:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:28 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2947 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:28 compute-0 ceph-mon[75677]: pgmap v1751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Nov 24 20:40:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:28.452+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:28 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:28.636+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:28 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:29.439+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:29 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:29.660+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:29 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 24 20:40:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:30.430+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:30 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:30 compute-0 ceph-mon[75677]: pgmap v1752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 24 20:40:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:30.645+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:30 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:31.393+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:31 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:31 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:31.653+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:32.411+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:32 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:32 compute-0 ceph-mon[75677]: pgmap v1753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
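The _set_new_cache_sizes line is the monitor's memory autotuner dividing its roughly 973 MiB cache target between the rocksdb key/value cache (kv_alloc) and the incremental and full osdmap caches (inc_alloc, full_alloc); that mapping of bucket to consumer is inferred from the field names, not stated by the log. The values are plainly bytes, since all three allocations are exact MiB multiples. A quick consistency check:

    # Values copied from the _set_new_cache_sizes line above (bytes).
    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 318767104

    mib = 1024 * 1024
    for name, val in [("inc_alloc", inc_alloc),
                      ("full_alloc", full_alloc),
                      ("kv_alloc", kv_alloc)]:
        print(f"{name}: {val / mib:.0f} MiB")    # 332, 332, 304 MiB

    total = inc_alloc + full_alloc + kv_alloc
    assert total <= cache_size                   # 968 MiB inside the target
    print(f"allocated {total / mib:.0f} MiB of {cache_size / mib:.1f} MiB")

The identical line recurs every five seconds below with the same numbers, so the tuner is steady-state here, not reacting to pressure.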
Nov 24 20:40:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:32.659+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:32 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:33.439+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:33 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:33.664+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:33 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:33 compute-0 podman[291442]: 2025-11-24 20:40:33.844165599 +0000 UTC m=+0.070687382 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
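Interleaved with the Ceph noise, podman emits a container health_status event for ovn_metadata_agent each time the configured healthcheck ('test': '/openstack/healthcheck') passes; health_status=healthy with health_failing_streak=0 means the check has not failed recently. A hedged sketch for pulling just the interesting fields out of such event lines, assuming the flat key=value layout shown here (the nested config_data dict would need a real parser and is ignored):

    import re

    # One podman container-event line, abbreviated from the log above.
    EVENT = ("container health_status 9704f833c77a... ("
             "image=quay.io/...-ovn@sha256:..., name=ovn_metadata_agent, "
             "health_status=healthy, health_failing_streak=0, health_log=, ...)")

    def parse_health(line):
        fields = {}
        for k, v in re.findall(r"(\w+)=([^,)]*)", line):
            fields.setdefault(k, v)   # keep the first occurrence of each key
        return (fields.get("name"),
                fields.get("health_status"),
                int(fields.get("health_failing_streak", "0")))

    print(parse_health(EVENT))        # ('ovn_metadata_agent', 'healthy', 0)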
Nov 24 20:40:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:34 compute-0 ceph-mon[75677]: pgmap v1754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:34.485+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:34 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:34.664+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:34 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:40:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
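The pg_autoscaler lines above are self-checking arithmetic: for each pool, pg target = (fraction of raw space used) x bias x a cluster-wide PG budget, and with these numbers the budget works out to exactly 300 PGs. That is consistent with the default target of 100 PGs per OSD across three OSDs backing the ~60 GiB that effective_target_ratio reports (two OSDs are local to this host, so the third living elsewhere is an inference from the numbers, not something the log states). A reproduction sketch:

    # (usage, bias, reported_target) triples copied from the lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.0008637525843263658, 1.0, 0.25912577529790976),
        "images":             (0.0006661126644201341, 1.0, 0.19983379932604023),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }

    PG_BUDGET = 300   # assumed: 100 target PGs per OSD x 3 OSDs

    for name, (usage, bias, reported) in pools.items():
        target = usage * bias * PG_BUDGET
        assert abs(target - reported) < 1e-12, name
        print(f"{name}: {target:.6g}")

The "quantized to 32 (current 32)" tail shows the module declining to act: raw targets this far below the current pg_num still keep the existing value, since the autoscaler only moves pg_num when the ideal count is off by roughly a factor of three and respects pool minimums.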
Nov 24 20:40:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:35.465+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:35 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:35.643+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:35 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:36.457+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:36 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:36 compute-0 ceph-mon[75677]: pgmap v1755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:36.667+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:36 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:37.474+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:37 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2957 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:37.653+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:37 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:38 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2957 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:38 compute-0 ceph-mon[75677]: pgmap v1756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:38.520+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:38 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:38.694+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:38 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:39.516+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:39 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:39.664+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:39 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:40.522+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:40 compute-0 ceph-mon[75677]: pgmap v1757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:40 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:40:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:40:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:40:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:40:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
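These rbd_support load_schedules lines are the mgr's MirrorSnapshotScheduleHandler re-reading per-pool schedule metadata, and they are a plausible source of the very ops that are stuck: osd.0's oldest slow op is an omap-get-vals against rbd_trash_purge_schedule in pool 2 ('vms'), which looks like the sibling TrashPurgeScheduleHandler's schedule load blocked for the last ~49 minutes. On a responsive cluster the stored schedules can be listed from the CLI; a sketch shelling out to the real rbd schedule subcommand, with the pool list mirroring the load_schedules lines above:

    import subprocess

    # Pools the handler scanned, per the load_schedules lines above.
    POOLS = ["vms", "volumes", "backups", "images"]

    for pool in POOLS:
        # 'rbd mirror snapshot schedule ls' reads the same schedule
        # metadata the mgr module loads; --recursive includes per-image
        # schedules in addition to pool-level ones.
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--recursive", "--pool", pool],
            capture_output=True, text=True)
        print(pool, out.stdout.strip() or "(no schedules)")

The trash-purge counterpart ('rbd trash purge schedule ls') reads the rbd_trash_purge_schedule objects named in the slow op itself.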
Nov 24 20:40:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:40.690+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:40 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:41.534+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:41 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:41.681+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:41 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:42 compute-0 ceph-mon[75677]: pgmap v1758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:42.548+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:42 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
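The SLOW_OPS health updates are internally consistent: the "blocked for N sec" figure grows by exactly the five-second gap between mon health ticks (2952 at 20:40:32, 2957 at 20:40:37, 2962 at 20:40:42), which pins the oldest op's arrival to the same instant every time. Working backwards, with timestamps taken from the log and same-day UTC assumed:

    from datetime import datetime, timedelta

    updates = [("2025-11-24 20:40:32", 2952),
               ("2025-11-24 20:40:37", 2957),
               ("2025-11-24 20:40:42", 2962)]

    for stamp, blocked in updates:
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        print(t - timedelta(seconds=blocked))   # 2025-11-24 19:51:20 each time

So the oldest op has been wedged since about 19:51:20, well before this stretch of the log, and nothing in the interval shown here has dislodged it.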
Nov 24 20:40:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:42.661+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:42 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:43.515+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:43 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:43.660+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:43 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:44.544+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:44 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:44 compute-0 ceph-mon[75677]: pgmap v1759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:44.690+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:44 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:44 compute-0 podman[291461]: 2025-11-24 20:40:44.869930777 +0000 UTC m=+0.091208801 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 20:40:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:45.520+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:45 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:45.736+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:45 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:46.530+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:46 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:46 compute-0 ceph-mon[75677]: pgmap v1760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:46.757+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:46 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:47.528+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:47 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:47.770+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:47 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:48.500+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:48 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:48 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:48 compute-0 ceph-mon[75677]: pgmap v1761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:48.737+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:48 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:49.478+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:49 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:49.709+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:49 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:49 compute-0 podman[291482]: 2025-11-24 20:40:49.916084458 +0000 UTC m=+0.141422543 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 20:40:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:50.515+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:50 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:50 compute-0 ceph-mon[75677]: pgmap v1762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:50.687+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:50 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:51.507+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:51 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:51.645+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:51 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:52.470+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:52 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:52.633+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:52 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:52 compute-0 ceph-mon[75677]: pgmap v1763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2972 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:53.516+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:53 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:53.606+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:53 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2972 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:40:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:40:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:54.539+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:54 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:54.599+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:54 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:54 compute-0 ceph-mon[75677]: pgmap v1764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:55.558+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:55 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:55.562+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:55 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:56.513+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:56 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:56.567+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:56 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:56 compute-0 ceph-mon[75677]: pgmap v1765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:57.478+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:57 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:57.524+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:57 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2977 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:40:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:57 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2977 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:40:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:58.472+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:58 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:58.552+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:58 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:58 compute-0 ceph-mon[75677]: pgmap v1766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:40:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:40:59.519+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:59 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:40:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:40:59.552+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:59 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:40:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:40:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:40:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:00.501+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:00 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:00.574+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:00 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:00 compute-0 ceph-mon[75677]: pgmap v1767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:01.545+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:01 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:01.547+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:01 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:02.544+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:02 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:02.575+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:02 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:02 compute-0 ceph-mon[75677]: pgmap v1768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:03.518+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:03 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:03 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:03.623+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:04.510+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:04 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:04 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:04.647+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:04 compute-0 ceph-mon[75677]: pgmap v1769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:04 compute-0 podman[291508]: 2025-11-24 20:41:04.845333647 +0000 UTC m=+0.073916218 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:41:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:05.554+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:05 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:05 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:05.681+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:06.555+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:06 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:06 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:06.721+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:06 compute-0 ceph-mon[75677]: pgmap v1770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:07.586+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:07 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:07.696+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:07 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:08.573+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:08 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:08.686+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:08 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:08 compute-0 ceph-mon[75677]: pgmap v1771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:41:09.398 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:41:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:41:09.399 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:41:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:41:09.399 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:41:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:09.554+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:09 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:09.644+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:09 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:10.585+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:10 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:10.653+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:10 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:10 compute-0 ceph-mon[75677]: pgmap v1772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:11.547+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:11 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:11.610+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:11 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:12.586+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:12 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:12.648+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:12 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:41:12.714 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:41:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:41:12.716 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:41:12 compute-0 ceph-mon[75677]: pgmap v1773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:12 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:13.581+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:13 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:13.636+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:13 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:14 compute-0 sudo[291527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:14 compute-0 sudo[291527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:14 compute-0 sudo[291527]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:14 compute-0 sudo[291552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:41:14 compute-0 sudo[291552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:14 compute-0 sudo[291552]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:14 compute-0 sudo[291577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:14 compute-0 sudo[291577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:14 compute-0 sudo[291577]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:14.598+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:14 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:14 compute-0 sudo[291602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:41:14 compute-0 sudo[291602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:14.608+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:14 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:14 compute-0 ceph-mon[75677]: pgmap v1774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:15 compute-0 sudo[291602]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:41:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 683d45e9-2e06-4215-a677-ee5ea0e6520d does not exist
Nov 24 20:41:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ba7a70fe-14bc-4b43-a681-64027b232405 does not exist
Nov 24 20:41:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a80ce20b-62a9-49fa-86c5-02f2feb78c9d does not exist
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:41:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:41:15 compute-0 sudo[291658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:15 compute-0 sudo[291658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:15 compute-0 sudo[291658]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:15 compute-0 sudo[291689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:41:15 compute-0 podman[291682]: 2025-11-24 20:41:15.369577842 +0000 UTC m=+0.072321056 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:41:15 compute-0 sudo[291689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:15 compute-0 sudo[291689]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:15 compute-0 sudo[291726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:15 compute-0 sudo[291726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:15 compute-0 sudo[291726]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:15 compute-0 sudo[291751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:41:15 compute-0 sudo[291751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:15.598+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:15 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:15.640+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:15 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:41:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
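The "N slow requests (by type ... most affected pool ...)" lines repeat roughly every second while ops stay blocked, which makes long captures noisy. A small sketch that tallies the worst per-pool count from journal text on stdin (the regex mirrors the exact line shape above):

    import re
    import sys
    from collections import Counter

    # Matches: 19 slow requests (by type [ 'delayed' : 19 ]
    #          most affected pool [ 'default.rgw.log' : 19 ])
    PAT = re.compile(
        r"(\d+) slow requests \(by type \[ '(\w+)' : \d+ \] "
        r"most affected pool \[ '([^']+)' : (\d+) \]\)"
    )

    pools = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            pool, count = m.group(3), int(m.group(4))
            pools[pool] = max(pools[pool], count)

    for pool, worst in pools.most_common():
        print(f"{pool}: up to {worst} slow requests")

Fed this section, it would report default.rgw.log at 19 and vms at 13.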
Nov 24 20:41:15 compute-0 podman[291816]: 2025-11-24 20:41:15.983512956 +0000 UTC m=+0.060191351 container create 189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:41:16 compute-0 systemd[1]: Started libpod-conmon-189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9.scope.
Nov 24 20:41:16 compute-0 podman[291816]: 2025-11-24 20:41:15.954381897 +0000 UTC m=+0.031060332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:41:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:41:16 compute-0 podman[291816]: 2025-11-24 20:41:16.078826556 +0000 UTC m=+0.155504931 container init 189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:41:16 compute-0 podman[291816]: 2025-11-24 20:41:16.090323454 +0000 UTC m=+0.167001829 container start 189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:41:16 compute-0 podman[291816]: 2025-11-24 20:41:16.093704785 +0000 UTC m=+0.170383140 container attach 189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:41:16 compute-0 great_wilson[291832]: 167 167
Nov 24 20:41:16 compute-0 systemd[1]: libpod-189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9.scope: Deactivated successfully.
Nov 24 20:41:16 compute-0 podman[291816]: 2025-11-24 20:41:16.10101446 +0000 UTC m=+0.177692825 container died 189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d4227b24342eb751da54a8f8e64729877c25ca900296ae39cd08aca4cabf33-merged.mount: Deactivated successfully.
Nov 24 20:41:16 compute-0 podman[291816]: 2025-11-24 20:41:16.147322539 +0000 UTC m=+0.224000894 container remove 189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:41:16 compute-0 systemd[1]: libpod-conmon-189b0a3c55afadba4c243807dfea108da5788c5b7dd4ae64ec18519292db95a9.scope: Deactivated successfully.
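great_wilson is one of several throwaway containers cephadm spins up (create, init, start, attach, died, remove, all within ~150 ms); the "167 167" it prints is just a uid/gid probe for the ceph user inside the image. Rather than reconstructing these ephemeral containers from the journal after the fact, podman's event stream can be followed live; a sketch, assuming a podman recent enough to emit one JSON object per event line (field names as emitted by current podman; adjust if your version differs):

    import json
    import subprocess

    # Follow podman's event stream; each stdout line is one JSON object.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            # e.g. "... start great_wilson"
            print(ev.get("Time", ""), ev.get("Status"), ev.get("Name"))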
Nov 24 20:41:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:16 compute-0 podman[291858]: 2025-11-24 20:41:16.348362317 +0000 UTC m=+0.046063033 container create a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lalande, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 20:41:16 compute-0 systemd[1]: Started libpod-conmon-a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb.scope.
Nov 24 20:41:16 compute-0 podman[291858]: 2025-11-24 20:41:16.331107545 +0000 UTC m=+0.028808291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:41:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:41:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:41:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3104800499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:41:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:41:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3104800499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b168758b0e4543aa031ede665e2a10215b0ef189ca641478880167b43e0d1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b168758b0e4543aa031ede665e2a10215b0ef189ca641478880167b43e0d1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b168758b0e4543aa031ede665e2a10215b0ef189ca641478880167b43e0d1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b168758b0e4543aa031ede665e2a10215b0ef189ca641478880167b43e0d1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b168758b0e4543aa031ede665e2a10215b0ef189ca641478880167b43e0d1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
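The recurring "supports timestamps until 2038 (0x7fffffff)" kernel lines are informational, not errors: the overlay mounts sit on an XFS filesystem formatted without bigtime, so inode timestamps are 32-bit and top out at the signed 32-bit epoch limit quoted in hex. That cutoff works out as:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch -- the classic Y2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00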
Nov 24 20:41:16 compute-0 podman[291858]: 2025-11-24 20:41:16.469474807 +0000 UTC m=+0.167175613 container init a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:41:16 compute-0 podman[291858]: 2025-11-24 20:41:16.479369611 +0000 UTC m=+0.177070377 container start a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:41:16 compute-0 podman[291858]: 2025-11-24 20:41:16.483691188 +0000 UTC m=+0.181391954 container attach a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lalande, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:41:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:16.638+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:16 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:16.646+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:16 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:16 compute-0 ceph-mon[75677]: pgmap v1775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3104800499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:41:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3104800499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:41:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:17 compute-0 friendly_lalande[291875]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:41:17 compute-0 friendly_lalande[291875]: --> relative data size: 1.0
Nov 24 20:41:17 compute-0 friendly_lalande[291875]: --> All data devices are unavailable
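friendly_lalande is the lvm batch run dispatched at 20:41:15, and "All data devices are unavailable" is, in this context, a benign conclusion: all three LVs already carry OSD tags from an earlier prepare (visible in the lvm list JSON further down), so batch has nothing new to create. The per-device verdicts can be double-checked with ceph-volume's inventory report; a sketch, assuming the "available"/"rejected_reasons" fields of ceph-volume's JSON inventory format:

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"   # from the log
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        if not dev.get("available"):
            print(dev.get("path"), "->", ", ".join(dev.get("rejected_reasons", [])))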
Nov 24 20:41:17 compute-0 systemd[1]: libpod-a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb.scope: Deactivated successfully.
Nov 24 20:41:17 compute-0 podman[291858]: 2025-11-24 20:41:17.620668694 +0000 UTC m=+1.318369430 container died a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 24 20:41:17 compute-0 systemd[1]: libpod-a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb.scope: Consumed 1.102s CPU time.
Nov 24 20:41:17 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:17.635+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b168758b0e4543aa031ede665e2a10215b0ef189ca641478880167b43e0d1b-merged.mount: Deactivated successfully.
Nov 24 20:41:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 2996 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:17 compute-0 podman[291858]: 2025-11-24 20:41:17.682785026 +0000 UTC m=+1.380485772 container remove a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:41:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:17.685+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:17 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:17 compute-0 systemd[1]: libpod-conmon-a1b94e4ad0ff0eb4555e9ba586bc095ac1cad8a9fff5b4bf955b900efb49e1bb.scope: Deactivated successfully.
Nov 24 20:41:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:41:17.718 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:41:17 compute-0 sudo[291751]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:17 compute-0 sudo[291916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:17 compute-0 sudo[291916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:17 compute-0 sudo[291916]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:17 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 2996 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
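SLOW_OPS is the cluster-level aggregation of the per-OSD warnings above: 32 ops across osd.0 and osd.1, the oldest blocked for 2996 s (~50 minutes), consistent with the two active+clean+laggy PGs in the pgmap lines. A sketch of pulling the same check programmatically (the "checks" layout follows Ceph's JSON health report; treat the exact nesting as approximate across releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for name, check in json.loads(out).get("checks", {}).items():
        summary = check.get("summary", {}).get("message", "")
        print(f"{check.get('severity')} {name}: {summary}")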
Nov 24 20:41:17 compute-0 sudo[291941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:41:17 compute-0 sudo[291941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:17 compute-0 sudo[291941]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:18 compute-0 sudo[291966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:18 compute-0 sudo[291966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:18 compute-0 sudo[291966]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:18 compute-0 sudo[291991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:41:18 compute-0 sudo[291991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.498138018 +0000 UTC m=+0.046758062 container create 99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:41:18 compute-0 systemd[1]: Started libpod-conmon-99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70.scope.
Nov 24 20:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.480115845 +0000 UTC m=+0.028735869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.586397399 +0000 UTC m=+0.135017493 container init 99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.59351941 +0000 UTC m=+0.142139444 container start 99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.597240639 +0000 UTC m=+0.145860673 container attach 99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:41:18 compute-0 cool_lalande[292075]: 167 167
Nov 24 20:41:18 compute-0 systemd[1]: libpod-99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70.scope: Deactivated successfully.
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.603364783 +0000 UTC m=+0.151984787 container died 99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-98a971278afc128d644252b2da0d9829f1e10aaac96690c271de6a6d14515555-merged.mount: Deactivated successfully.
Nov 24 20:41:18 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:18.640+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:18 compute-0 podman[292058]: 2025-11-24 20:41:18.651193492 +0000 UTC m=+0.199813526 container remove 99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:41:18 compute-0 systemd[1]: libpod-conmon-99a40330d1eef141370e9e366ab7b1fe35f6beb593e07addfafd0d76a000de70.scope: Deactivated successfully.
Nov 24 20:41:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:18.708+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:18 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:18 compute-0 podman[292098]: 2025-11-24 20:41:18.875926165 +0000 UTC m=+0.059464693 container create dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:41:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:18 compute-0 ceph-mon[75677]: pgmap v1776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:18 compute-0 systemd[1]: Started libpod-conmon-dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33.scope.
Nov 24 20:41:18 compute-0 podman[292098]: 2025-11-24 20:41:18.849280312 +0000 UTC m=+0.032818890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:41:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ad8d3c017f199fb4e981cc8962b651401d4d00ddae68309a855de17457ac57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ad8d3c017f199fb4e981cc8962b651401d4d00ddae68309a855de17457ac57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ad8d3c017f199fb4e981cc8962b651401d4d00ddae68309a855de17457ac57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ad8d3c017f199fb4e981cc8962b651401d4d00ddae68309a855de17457ac57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:18 compute-0 podman[292098]: 2025-11-24 20:41:18.980559004 +0000 UTC m=+0.164097552 container init dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:41:18 compute-0 podman[292098]: 2025-11-24 20:41:18.988352962 +0000 UTC m=+0.171891460 container start dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_booth, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:41:18 compute-0 podman[292098]: 2025-11-24 20:41:18.991780524 +0000 UTC m=+0.175319092 container attach dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_booth, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:41:19 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:19.684+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:19.696+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:19 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:19 compute-0 naughty_booth[292115]: {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:     "0": [
Nov 24 20:41:19 compute-0 naughty_booth[292115]:         {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "devices": [
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "/dev/loop3"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             ],
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_name": "ceph_lv0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_size": "21470642176",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "name": "ceph_lv0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "tags": {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cluster_name": "ceph",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.crush_device_class": "",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.encrypted": "0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osd_id": "0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.type": "block",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.vdo": "0"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             },
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "type": "block",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "vg_name": "ceph_vg0"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:         }
Nov 24 20:41:19 compute-0 naughty_booth[292115]:     ],
Nov 24 20:41:19 compute-0 naughty_booth[292115]:     "1": [
Nov 24 20:41:19 compute-0 naughty_booth[292115]:         {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "devices": [
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "/dev/loop4"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             ],
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_name": "ceph_lv1",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_size": "21470642176",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "name": "ceph_lv1",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "tags": {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cluster_name": "ceph",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.crush_device_class": "",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.encrypted": "0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osd_id": "1",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.type": "block",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.vdo": "0"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             },
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "type": "block",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "vg_name": "ceph_vg1"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:         }
Nov 24 20:41:19 compute-0 naughty_booth[292115]:     ],
Nov 24 20:41:19 compute-0 naughty_booth[292115]:     "2": [
Nov 24 20:41:19 compute-0 naughty_booth[292115]:         {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "devices": [
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "/dev/loop5"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             ],
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_name": "ceph_lv2",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_size": "21470642176",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "name": "ceph_lv2",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "tags": {
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.cluster_name": "ceph",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.crush_device_class": "",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.encrypted": "0",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osd_id": "2",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.type": "block",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:                 "ceph.vdo": "0"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             },
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "type": "block",
Nov 24 20:41:19 compute-0 naughty_booth[292115]:             "vg_name": "ceph_vg2"
Nov 24 20:41:19 compute-0 naughty_booth[292115]:         }
Nov 24 20:41:19 compute-0 naughty_booth[292115]:     ]
Nov 24 20:41:19 compute-0 naughty_booth[292115]: }
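The JSON block above is the output of the "ceph-volume ... lvm list --format json" call dispatched at 20:41:18: a map from OSD id to the LVs backing it, with the metadata duplicated between the flat lv_tags string and the structured "tags" object. A short sketch reducing it to an osd-to-device table, assuming the payload has been captured to a file (lvm_list.json is a hypothetical name):

    import json

    with open("lvm_list.json") as f:        # the payload printed by naughty_booth
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")

Against this capture it would print osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5.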
Nov 24 20:41:19 compute-0 systemd[1]: libpod-dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33.scope: Deactivated successfully.
Nov 24 20:41:19 compute-0 podman[292098]: 2025-11-24 20:41:19.781126011 +0000 UTC m=+0.964664539 container died dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-77ad8d3c017f199fb4e981cc8962b651401d4d00ddae68309a855de17457ac57-merged.mount: Deactivated successfully.
Nov 24 20:41:19 compute-0 podman[292098]: 2025-11-24 20:41:19.859671062 +0000 UTC m=+1.043209560 container remove dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:41:19 compute-0 systemd[1]: libpod-conmon-dd9602b5e71dff1ff990a966d5c4aa2c0b7c4fec2bc0ae88e6209541dd54ab33.scope: Deactivated successfully.
Nov 24 20:41:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:19 compute-0 sudo[291991]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:20 compute-0 sudo[292136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:20 compute-0 sudo[292136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:20 compute-0 sudo[292136]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:20 compute-0 sudo[292162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:41:20 compute-0 sudo[292162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:20 compute-0 sudo[292162]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:20 compute-0 podman[292160]: 2025-11-24 20:41:20.222680333 +0000 UTC m=+0.164906402 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:41:20 compute-0 sudo[292209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:20 compute-0 sudo[292209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:20 compute-0 sudo[292209]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:20 compute-0 sudo[292238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
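After the LVM inventory, cephadm cross-checks with "ceph-volume raw list", which discovers OSDs by scanning block devices for BlueStore on-disk labels rather than LVM tags; on this host it should report the same three OSDs. A sketch of reproducing the call the same way as the lvm variant (fsid from the log; the shape of the returned JSON varies by release, so it is just pretty-printed here):

    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2))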
Nov 24 20:41:20 compute-0 sudo[292238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:20 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:20.635+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:20.655+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:20 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.779465248 +0000 UTC m=+0.069656165 container create a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:41:20 compute-0 systemd[1]: Started libpod-conmon-a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47.scope.
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.752419625 +0000 UTC m=+0.042610632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:41:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.879224037 +0000 UTC m=+0.169415044 container init a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.891749382 +0000 UTC m=+0.181940329 container start a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.89613878 +0000 UTC m=+0.186329737 container attach a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:41:20 compute-0 pensive_curie[292317]: 167 167
Nov 24 20:41:20 compute-0 systemd[1]: libpod-a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47.scope: Deactivated successfully.
Nov 24 20:41:20 compute-0 conmon[292317]: conmon a2fd5bba65a6ea1d1944 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47.scope/container/memory.events
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.90139161 +0000 UTC m=+0.191582617 container died a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:41:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:20 compute-0 ceph-mon[75677]: pgmap v1777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-815e8ff0f72be3c1184c006c5d276d85cc6c844d02795b7bde95990eae6166a3-merged.mount: Deactivated successfully.
Nov 24 20:41:20 compute-0 podman[292300]: 2025-11-24 20:41:20.953500584 +0000 UTC m=+0.243691501 container remove a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:41:20 compute-0 systemd[1]: libpod-conmon-a2fd5bba65a6ea1d1944cd2e168dcf7277776af7a08838419ced9720ae22cc47.scope: Deactivated successfully.
Nov 24 20:41:21 compute-0 podman[292341]: 2025-11-24 20:41:21.204045837 +0000 UTC m=+0.067840286 container create 522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:41:21 compute-0 systemd[1]: Started libpod-conmon-522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b.scope.
Nov 24 20:41:21 compute-0 podman[292341]: 2025-11-24 20:41:21.17875846 +0000 UTC m=+0.042552989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:41:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cd426d768eb42b5b07e5773b92a23fb360bd0bf3d7f043844a7c5a6b9198e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cd426d768eb42b5b07e5773b92a23fb360bd0bf3d7f043844a7c5a6b9198e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cd426d768eb42b5b07e5773b92a23fb360bd0bf3d7f043844a7c5a6b9198e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cd426d768eb42b5b07e5773b92a23fb360bd0bf3d7f043844a7c5a6b9198e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:41:21 compute-0 podman[292341]: 2025-11-24 20:41:21.328790284 +0000 UTC m=+0.192584833 container init 522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:41:21 compute-0 podman[292341]: 2025-11-24 20:41:21.342966533 +0000 UTC m=+0.206760982 container start 522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:41:21 compute-0 podman[292341]: 2025-11-24 20:41:21.347267018 +0000 UTC m=+0.211061477 container attach 522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:41:21 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:21.630+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:21.640+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:21 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:22 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:22.584+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:22.661+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:22 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:22 compute-0 clever_perlman[292357]: {
Nov 24 20:41:22 compute-0 clever_perlman[292357]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "osd_id": 2,
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "type": "bluestore"
Nov 24 20:41:22 compute-0 clever_perlman[292357]:     },
Nov 24 20:41:22 compute-0 clever_perlman[292357]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "osd_id": 1,
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "type": "bluestore"
Nov 24 20:41:22 compute-0 clever_perlman[292357]:     },
Nov 24 20:41:22 compute-0 clever_perlman[292357]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "osd_id": 0,
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:41:22 compute-0 clever_perlman[292357]:         "type": "bluestore"
Nov 24 20:41:22 compute-0 clever_perlman[292357]:     }
Nov 24 20:41:22 compute-0 clever_perlman[292357]: }
Nov 24 20:41:22 compute-0 systemd[1]: libpod-522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b.scope: Deactivated successfully.
Nov 24 20:41:22 compute-0 podman[292341]: 2025-11-24 20:41:22.812520717 +0000 UTC m=+1.676315206 container died 522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:41:22 compute-0 systemd[1]: libpod-522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b.scope: Consumed 1.480s CPU time.
Nov 24 20:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-46cd426d768eb42b5b07e5773b92a23fb360bd0bf3d7f043844a7c5a6b9198e9-merged.mount: Deactivated successfully.
Nov 24 20:41:22 compute-0 podman[292341]: 2025-11-24 20:41:22.892365252 +0000 UTC m=+1.756159711 container remove 522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:41:22 compute-0 systemd[1]: libpod-conmon-522d21151545341c4683fd9fca2e14c735f557b11e4fe00448812a35d934489b.scope: Deactivated successfully.
Nov 24 20:41:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3001 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:22 compute-0 sudo[292238]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:22 compute-0 ceph-mon[75677]: pgmap v1778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:41:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:41:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:41:22 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:41:22 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2fc808d8-9be8-4bb8-bf9b-3045edd0c230 does not exist
Nov 24 20:41:22 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 895db284-3908-4a88-b7eb-0434b19fa958 does not exist
Nov 24 20:41:23 compute-0 sudo[292404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:41:23 compute-0 sudo[292404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:23 compute-0 sudo[292404]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:23 compute-0 sudo[292429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:41:23 compute-0 sudo[292429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:41:23 compute-0 sudo[292429]: pam_unix(sudo:session): session closed for user root
Nov 24 20:41:23 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:23.604+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:23.626+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:23 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3001 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:41:23 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:41:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:41:24
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'images']
Nov 24 20:41:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:41:24 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:24.570+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:24.617+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:24 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:24 compute-0 ceph-mon[75677]: pgmap v1779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:25 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:25.608+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:25.622+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:25 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:26 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:26.595+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:26.635+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:26 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:26 compute-0 ceph-mon[75677]: pgmap v1780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:27.609+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:27 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:27.647+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:27 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:28.614+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:28 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:28.678+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:28 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:28 compute-0 ceph-mon[75677]: pgmap v1781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:29 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:29.580+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:29.728+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:29 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:30.563+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:30 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:30.769+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:30 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:31 compute-0 ceph-mon[75677]: pgmap v1782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:31.526+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:31 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:31.734+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:31 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:32.547+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:32 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3006 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:32.757+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:32 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:33 compute-0 ceph-mon[75677]: pgmap v1783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3006 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:33.566+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:33 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:33.731+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:33 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:34.591+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:34 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:34.709+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:34 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:41:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:35 compute-0 ceph-mon[75677]: pgmap v1784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:41:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:41:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:35.584+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:35 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:35.754+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:35 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:35 compute-0 podman[292454]: 2025-11-24 20:41:35.848197281 +0000 UTC m=+0.075917572 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 20:41:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
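[annotation] The pgmap digest repeats once per second from the mgr and is echoed by the mon one version behind; 2 of 305 PGs are active+clean+laggy, which lines up with the two OSDs reporting slow ops. A sketch that folds the stream into one line per pgmap version, assuming journal text on stdin:

    import re
    import sys

    # e.g. "pgmap v1785: 305 pgs: 2 active+clean+laggy, 303 active+clean; ..."
    PGMAP = re.compile(r"pgmap (v\d+): (\d+) pgs: (.*?); (.*)")

    seen = set()
    for line in sys.stdin:
        m = PGMAP.search(line)
        if m and m.group(1) not in seen:   # skip the mon's echo of the same version
            seen.add(m.group(1))
            version, total, states, usage = m.groups()
            print(version, total, "pgs |", states, "|", usage.rstrip())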
Nov 24 20:41:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:36.572+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:36 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:36.773+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:36 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:37 compute-0 ceph-mon[75677]: pgmap v1785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:37.605+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:37 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3016 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
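[annotation] The monitor's SLOW_OPS health check aggregates the two OSD reports (13 + 19 = 32 slow ops), and the blocked duration grows in step with wall clock across the updates in this section (3016 s at 20:41:37, 3022 s at 20:41:43, 3026 s at 20:41:52), so the oldest op has been stuck since roughly 19:51 rather than being a transient stall. One way to see what is actually stuck is the OSD admin socket; a sketch assuming it runs where "ceph daemon osd.N ..." works (e.g. inside the cephadm shell on compute-0) and that dump_ops_in_flight returns its usual {"ops": [...]} shape:

    import json
    import subprocess

    def dump_ops(osd: str) -> list[dict]:
        # dump_ops_in_flight is a standard OSD admin-socket command.
        out = subprocess.check_output(["ceph", "daemon", osd, "dump_ops_in_flight"])
        return json.loads(out).get("ops", [])

    for osd in ("osd.0", "osd.1"):
        ops = sorted(dump_ops(osd), key=lambda op: op.get("age", 0), reverse=True)
        for op in ops[:3]:
            print(osd, f"age={op.get('age', 0.0):.0f}s", op.get("description", "")[:80])

"ceph health detail" gives the same SLOW_OPS summary the mon is logging here, without needing the admin socket.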
Nov 24 20:41:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:37.778+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:37 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:38 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3016 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:38.560+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:38 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:38 compute-0 sshd-session[292474]: Invalid user under from 182.93.7.194 port 59182
Nov 24 20:41:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:38.748+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:38 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:39 compute-0 ceph-mon[75677]: pgmap v1786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:39 compute-0 sshd-session[292474]: Received disconnect from 182.93.7.194 port 59182:11: Bye Bye [preauth]
Nov 24 20:41:39 compute-0 sshd-session[292474]: Disconnected from invalid user under 182.93.7.194 port 59182 [preauth]
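[annotation] In between the storage noise, sshd records a failed probe for the invalid user "under" from 182.93.7.194 that disconnects before authenticating. A small sketch to tally such probes per source address from journal text on stdin (on the host itself, "journalctl -u sshd" or /var/log/secure would be the usual source):

    import re
    import sys
    from collections import Counter

    # Count "Invalid user NAME from ADDR port P" probes per source address.
    ATTEMPT = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    hits = Counter()
    for line in sys.stdin:
        m = ATTEMPT.search(line)
        if m:
            _user, addr = m.groups()
            hits[addr] += 1

    for addr, n in hits.most_common():
        print(f"{addr}: {n} invalid-user attempts")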
Nov 24 20:41:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:39.558+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:39 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:39.796+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:39 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:41:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:41:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:41:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:41:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
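[annotation] The mgr's rbd_support module periodically reloads mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images); an empty start_after just means it enumerates from the beginning. To see what is actually scheduled, something like the following; "rbd mirror snapshot schedule ls" with a recursive flag is the CLI entry point, wrapped here purely for illustration, and the exact flag spelling may vary by release:

    import subprocess

    # List configured mirror-snapshot schedules across all pools/images.
    # Assumes rbd CLI access with a client keyring, as inside the cephadm shell.
    out = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout or "no mirror snapshot schedules configured")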
Nov 24 20:41:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:40.600+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:40 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:40.785+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:40 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:41 compute-0 ceph-mon[75677]: pgmap v1787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:41.577+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:41 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:41.823+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:41 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:42.559+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:42 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.673684) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016902673775, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1723, "num_deletes": 261, "total_data_size": 2067316, "memory_usage": 2099432, "flush_reason": "Manual Compaction"}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016902691877, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2012607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49502, "largest_seqno": 51224, "table_properties": {"data_size": 2004868, "index_size": 4237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20810, "raw_average_key_size": 21, "raw_value_size": 1987371, "raw_average_value_size": 2087, "num_data_blocks": 185, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016798, "oldest_key_time": 1764016798, "file_creation_time": 1764016902, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 18238 microseconds, and 11885 cpu microseconds.
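[annotation] The flush numbers above are self-consistent: job 61 wrote table #106 at 2,012,607 bytes in 18,238 µs of wall time, roughly 110 MB/s, with about 65% of that time on CPU (11,885 µs). A tiny sanity check using the figures exactly as printed in the flush lines:

    # Figures taken from the job 61 flush lines above (table #106).
    file_size_bytes = 2_012_607   # "file_size" in the table_file_creation event
    flush_wall_us = 18_238        # "Flush lasted 18238 microseconds"
    flush_cpu_us = 11_885         # "... and 11885 cpu microseconds"

    mb_per_s = file_size_bytes / (flush_wall_us / 1e6) / 1e6
    print(f"flush throughput ~{mb_per_s:.0f} MB/s, "
          f"cpu share {flush_cpu_us / flush_wall_us:.0%}")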
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.691934) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2012607 bytes OK
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.691960) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.694073) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.694093) EVENT_LOG_v1 {"time_micros": 1764016902694086, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.694112) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 2059206, prev total WAL file size 2059206, number of live WAL files 2.
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.695481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323634' seq:72057594037927935, type:22 .. '6C6F676D0032353135' seq:0, type:0; will stop at (end)
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(1965KB)], [104(8331KB)]
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016902695570, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 10544374, "oldest_snapshot_seqno": -1}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 12041 keys, 10346549 bytes, temperature: kUnknown
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016902767797, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10346549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10279440, "index_size": 35893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 327924, "raw_average_key_size": 27, "raw_value_size": 10072128, "raw_average_value_size": 836, "num_data_blocks": 1345, "num_entries": 12041, "num_filter_entries": 12041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016902, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.768258) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10346549 bytes
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.769765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.8 rd, 143.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 8.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.4) write-amplify(5.1) OK, records in: 12574, records dropped: 533 output_compression: NoCompression
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.769786) EVENT_LOG_v1 {"time_micros": 1764016902769776, "job": 62, "event": "compaction_finished", "compaction_time_micros": 72302, "compaction_time_cpu_micros": 37764, "output_level": 6, "num_output_files": 1, "total_output_size": 10346549, "num_input_records": 12574, "num_output_records": 12041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
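[annotation] The compaction summary's amplification figures follow directly from the byte counts: job 62 consumed the 1.9 MB L0 file plus an 8.1 MB L6 file and produced a 9.9 MB L6 output, so write-amplify is 9.9/1.9 ≈ 5.1 and read-write-amplify is (1.9 + 8.1 + 9.9)/1.9 ≈ 10.4, matching what rocksdb printed. In exact bytes, using the EVENT_LOG values above:

    # From the compaction events above: L0 input file #106, L6 input file #104,
    # output file #107 (job 62).
    l0_in = 2_012_607        # file_size of table #106
    total_in = 10_544_374    # "input_data_size" in compaction_started
    out = 10_346_549         # "total_output_size" in compaction_finished

    print(f"write-amplify      {out / l0_in:.1f}")               # ~5.1
    print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # ~10.4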
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016902770279, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016902772010, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.695246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.772045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.772050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.772051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.772053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:41:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:41:42.772055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:41:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:42.779+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:42 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:43 compute-0 ceph-mon[75677]: pgmap v1788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:43.534+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:43 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:43.803+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:43 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:44 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:44.542+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:44 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:44.777+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:44 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:45 compute-0 ceph-mon[75677]: pgmap v1789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:45.584+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:45 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:45 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:45.778+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:45 compute-0 podman[292476]: 2025-11-24 20:41:45.862699978 +0000 UTC m=+0.085366084 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:41:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:46.549+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:46 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:46.758+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:46 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:47 compute-0 ceph-mon[75677]: pgmap v1790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:47.530+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:47 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:47.785+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:47 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:48.574+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:48 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:48.765+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:48 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:49 compute-0 ceph-mon[75677]: pgmap v1791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:49.526+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:49 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:49.729+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:49 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:50.508+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:50 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:50.733+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:50 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:50 compute-0 podman[292493]: 2025-11-24 20:41:50.943863712 +0000 UTC m=+0.169965278 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 24 20:41:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:51 compute-0 ceph-mon[75677]: pgmap v1792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:51.521+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:51 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:51.757+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:51 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:52.539+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:52 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3026 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:52.800+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:52 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:53 compute-0 ceph-mon[75677]: pgmap v1793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3026 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:53.506+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:53 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:53.801+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:53 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:41:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:41:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:54.486+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:54 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:54.786+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:54 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:55 compute-0 ceph-mon[75677]: pgmap v1794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:55.521+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:55 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:55.799+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:55 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:56.499+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:56 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:56.780+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:56 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:57 compute-0 ceph-mon[75677]: pgmap v1795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:57.470+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:57 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3037 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:41:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:57.752+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:57 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:58 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3037 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:41:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:58.437+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:58 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:58.730+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:58 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:41:59 compute-0 ceph-mon[75677]: pgmap v1796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:41:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:41:59.407+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:59 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:41:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:41:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:41:59.726+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:59 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:41:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:00.437+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:00 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:00.715+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:00 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:01 compute-0 ceph-mon[75677]: pgmap v1797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:01.484+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:01 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:01.758+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:01 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.309280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016922309330, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 251, "total_data_size": 368510, "memory_usage": 379064, "flush_reason": "Manual Compaction"}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 24 20:42:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016922313271, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 363028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51225, "largest_seqno": 51726, "table_properties": {"data_size": 360280, "index_size": 720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7356, "raw_average_key_size": 19, "raw_value_size": 354530, "raw_average_value_size": 950, "num_data_blocks": 32, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016903, "oldest_key_time": 1764016903, "file_creation_time": 1764016922, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 4039 microseconds, and 2140 cpu microseconds.
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.313318) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 363028 bytes OK
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.313340) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.315146) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.315164) EVENT_LOG_v1 {"time_micros": 1764016922315157, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.315182) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 365489, prev total WAL file size 365489, number of live WAL files 2.
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.315614) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(354KB)], [107(10104KB)]
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016922315649, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10709577, "oldest_snapshot_seqno": -1}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 11905 keys, 9183493 bytes, temperature: kUnknown
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016922379812, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 9183493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9118582, "index_size": 34015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29829, "raw_key_size": 326061, "raw_average_key_size": 27, "raw_value_size": 8914839, "raw_average_value_size": 748, "num_data_blocks": 1258, "num_entries": 11905, "num_filter_entries": 11905, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764016922, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.380224) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 9183493 bytes
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.382105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.6 rd, 142.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(54.8) write-amplify(25.3) OK, records in: 12414, records dropped: 509 output_compression: NoCompression
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.382143) EVENT_LOG_v1 {"time_micros": 1764016922382124, "job": 64, "event": "compaction_finished", "compaction_time_micros": 64281, "compaction_time_cpu_micros": 24904, "output_level": 6, "num_output_files": 1, "total_output_size": 9183493, "num_input_records": 12414, "num_output_records": 11905, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016922382433, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764016922386734, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.315557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.386851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.386862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.386867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.386871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:42:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:42:02.386875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:42:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:02.475+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:02 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3041 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:02.803+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:02 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:03 compute-0 ceph-mon[75677]: pgmap v1798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3041 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:03.517+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:03 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:03.770+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:03 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:04 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:04.533+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:04.782+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:04 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:05 compute-0 ceph-mon[75677]: pgmap v1799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:05 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:05.493+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:05.817+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:05 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:06 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:06.537+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:06.827+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:06 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:06 compute-0 podman[292519]: 2025-11-24 20:42:06.860355022 +0000 UTC m=+0.077022892 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 20:42:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:07 compute-0 ceph-mon[75677]: pgmap v1800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:07 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:07.545+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3046 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:07.833+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:07 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:08 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3046 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:08 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:08.546+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:08.842+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:08 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:09 compute-0 ceph-mon[75677]: pgmap v1801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:09.399 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:42:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:09.400 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:42:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:09.400 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:42:09 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:09.566+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:09.857+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:09 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:10 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:10.579+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:10.822+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:10 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:11 compute-0 ceph-mon[75677]: pgmap v1802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:11 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:11.615+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:11 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:11.861+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:12 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:12.587+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:12.833+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:12 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:12.861 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:42:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:12.863 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:42:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:13 compute-0 ceph-mon[75677]: pgmap v1803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:13 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:13.567+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:13.858+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:13 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:14 compute-0 ceph-mon[75677]: pgmap v1804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:14 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:14.603+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:14.867+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:14 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:15 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:15.639+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:15.853+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:15 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:16 compute-0 ceph-mon[75677]: pgmap v1805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:42:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878767598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:42:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:42:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878767598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:42:16 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:16.651+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:16.858+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:16 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:16 compute-0 podman[292539]: 2025-11-24 20:42:16.868008168 +0000 UTC m=+0.097865370 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 20:42:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/878767598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:42:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/878767598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:42:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:17 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:17.609+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:17.898+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:17 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:18 compute-0 ceph-mon[75677]: pgmap v1806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:18 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:18.568+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:18.892+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:18 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:19 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:19.538+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:19.859+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:19 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:19 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:19.865 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:42:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:20 compute-0 ceph-mon[75677]: pgmap v1807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:20 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:20.511+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:20.839+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:20 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:21 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:21.523+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:21.792+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:21 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:21 compute-0 podman[292559]: 2025-11-24 20:42:21.88301977 +0000 UTC m=+0.116641832 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:42:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:22 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:22.532+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:22 compute-0 ceph-mon[75677]: pgmap v1808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3062 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:22.804+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:22 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:23 compute-0 sudo[292585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:23 compute-0 sudo[292585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:23 compute-0 sudo[292585]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:23 compute-0 sudo[292610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:42:23 compute-0 sudo[292610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:23 compute-0 sudo[292610]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:23 compute-0 sudo[292635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:23 compute-0 sudo[292635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:23 compute-0 sudo[292635]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:23 compute-0 sudo[292660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:42:23 compute-0 sudo[292660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:23.513+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:23 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3062 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:23.811+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:23 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:24 compute-0 sudo[292660]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:42:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:42:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:42:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3682b3f4-2a76-4fe6-ae5e-e94c36b2eec4 does not exist
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4c6a1967-560e-4b98-a7b6-10fc61e7b8b4 does not exist
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a45f1aae-63e4-45a9-9910-0896468618f6 does not exist
Nov 24 20:42:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:42:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:42:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:42:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:24 compute-0 sudo[292717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:24 compute-0 sudo[292717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:24 compute-0 sudo[292717]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:24 compute-0 sudo[292742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:42:24 compute-0 sudo[292742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:24 compute-0 sudo[292742]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:42:24 compute-0 sudo[292767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:24 compute-0 sudo[292767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:24 compute-0 sudo[292767]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:42:24
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 24 20:42:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:42:24 compute-0 sudo[292792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:42:24 compute-0 sudo[292792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:24 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:24.559+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:42:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:42:24 compute-0 ceph-mon[75677]: pgmap v1809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:24.794+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:24 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:24 compute-0 podman[292859]: 2025-11-24 20:42:24.898717317 +0000 UTC m=+0.062992207 container create e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:42:24 compute-0 systemd[1]: Started libpod-conmon-e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088.scope.
Nov 24 20:42:24 compute-0 podman[292859]: 2025-11-24 20:42:24.873466571 +0000 UTC m=+0.037741481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:42:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:42:24 compute-0 podman[292859]: 2025-11-24 20:42:24.996387578 +0000 UTC m=+0.160662468 container init e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:42:25 compute-0 podman[292859]: 2025-11-24 20:42:25.008698149 +0000 UTC m=+0.172973019 container start e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:42:25 compute-0 podman[292859]: 2025-11-24 20:42:25.012229703 +0000 UTC m=+0.176504583 container attach e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:42:25 compute-0 nervous_zhukovsky[292875]: 167 167
Nov 24 20:42:25 compute-0 systemd[1]: libpod-e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088.scope: Deactivated successfully.
Nov 24 20:42:25 compute-0 podman[292859]: 2025-11-24 20:42:25.019815226 +0000 UTC m=+0.184090106 container died e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b70eeb24407425189bc9ea6a0bdf34000a5e893ef014ca3a3b905c57597b2ca-merged.mount: Deactivated successfully.
Nov 24 20:42:25 compute-0 podman[292859]: 2025-11-24 20:42:25.065881748 +0000 UTC m=+0.230156608 container remove e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:42:25 compute-0 systemd[1]: libpod-conmon-e4db6c91d2c73f3a5da1da140c6fb977c6d31d4132e814b330c67a0098ea0088.scope: Deactivated successfully.
Nov 24 20:42:25 compute-0 podman[292898]: 2025-11-24 20:42:25.248363179 +0000 UTC m=+0.048944089 container create e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:42:25 compute-0 systemd[1]: Started libpod-conmon-e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be.scope.
Nov 24 20:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce75270f1f237dcb724c55ed047e7f4f1fe3aae1cd7db10e6d6e323e79478e77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:25 compute-0 podman[292898]: 2025-11-24 20:42:25.228766845 +0000 UTC m=+0.029347745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce75270f1f237dcb724c55ed047e7f4f1fe3aae1cd7db10e6d6e323e79478e77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce75270f1f237dcb724c55ed047e7f4f1fe3aae1cd7db10e6d6e323e79478e77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce75270f1f237dcb724c55ed047e7f4f1fe3aae1cd7db10e6d6e323e79478e77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce75270f1f237dcb724c55ed047e7f4f1fe3aae1cd7db10e6d6e323e79478e77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:25 compute-0 podman[292898]: 2025-11-24 20:42:25.34517361 +0000 UTC m=+0.145754550 container init e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:42:25 compute-0 podman[292898]: 2025-11-24 20:42:25.352820225 +0000 UTC m=+0.153401135 container start e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:42:25 compute-0 podman[292898]: 2025-11-24 20:42:25.3560162 +0000 UTC m=+0.156597140 container attach e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 20:42:25 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:25.571+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:25.745+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:25 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:26 compute-0 loving_vaughan[292915]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:42:26 compute-0 loving_vaughan[292915]: --> relative data size: 1.0
Nov 24 20:42:26 compute-0 loving_vaughan[292915]: --> All data devices are unavailable
Nov 24 20:42:26 compute-0 systemd[1]: libpod-e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be.scope: Deactivated successfully.
Nov 24 20:42:26 compute-0 podman[292898]: 2025-11-24 20:42:26.497953278 +0000 UTC m=+1.298534228 container died e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:42:26 compute-0 systemd[1]: libpod-e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be.scope: Consumed 1.108s CPU time.
Nov 24 20:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce75270f1f237dcb724c55ed047e7f4f1fe3aae1cd7db10e6d6e323e79478e77-merged.mount: Deactivated successfully.
Nov 24 20:42:26 compute-0 podman[292898]: 2025-11-24 20:42:26.575207716 +0000 UTC m=+1.375788626 container remove e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_vaughan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:42:26 compute-0 systemd[1]: libpod-conmon-e1c52a159ec4660366c9be3084d3f94a8ec752e0a066ac9c9fd02eae164dd5be.scope: Deactivated successfully.
Nov 24 20:42:26 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:26.600+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:26 compute-0 sudo[292792]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:26 compute-0 ceph-mon[75677]: pgmap v1810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:26 compute-0 sudo[292957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:26 compute-0 sudo[292957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:26 compute-0 sudo[292957]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:26 compute-0 sudo[292982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:42:26 compute-0 sudo[292982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:26 compute-0 sudo[292982]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:26.791+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:26 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:26 compute-0 sudo[293007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:26 compute-0 sudo[293007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:26 compute-0 sudo[293007]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:26 compute-0 sudo[293032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:42:26 compute-0 sudo[293032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.35100436 +0000 UTC m=+0.046212658 container create 10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:42:27 compute-0 systemd[1]: Started libpod-conmon-10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f.scope.
Nov 24 20:42:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.329774051 +0000 UTC m=+0.024982399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.435793718 +0000 UTC m=+0.131002106 container init 10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.447574902 +0000 UTC m=+0.142783200 container start 10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.451815636 +0000 UTC m=+0.147023984 container attach 10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_williams, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:42:27 compute-0 loving_williams[293111]: 167 167
Nov 24 20:42:27 compute-0 systemd[1]: libpod-10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f.scope: Deactivated successfully.
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.455258288 +0000 UTC m=+0.150466606 container died 10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1dbd050a4068963d380d673e134a06c746636e96a6544fa5bad9c417a1be9f4-merged.mount: Deactivated successfully.
Nov 24 20:42:27 compute-0 podman[293095]: 2025-11-24 20:42:27.511876763 +0000 UTC m=+0.207085051 container remove 10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:42:27 compute-0 systemd[1]: libpod-conmon-10935c7c0705e7dc5dcabf4b978931c5b45eb42a171e93342f7b9577b9de025f.scope: Deactivated successfully.
Nov 24 20:42:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:27.571+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:27 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3067 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:27 compute-0 podman[293134]: 2025-11-24 20:42:27.733025099 +0000 UTC m=+0.052848285 container create 5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:42:27 compute-0 systemd[1]: Started libpod-conmon-5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9.scope.
Nov 24 20:42:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:42:27 compute-0 podman[293134]: 2025-11-24 20:42:27.706088439 +0000 UTC m=+0.025911685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81a58523a4942a1086d0205a8de7d027d3f70b8a329777c67e343ad141a282f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81a58523a4942a1086d0205a8de7d027d3f70b8a329777c67e343ad141a282f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81a58523a4942a1086d0205a8de7d027d3f70b8a329777c67e343ad141a282f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81a58523a4942a1086d0205a8de7d027d3f70b8a329777c67e343ad141a282f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:27.823+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:27 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:27 compute-0 podman[293134]: 2025-11-24 20:42:27.831048391 +0000 UTC m=+0.150871567 container init 5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:42:27 compute-0 podman[293134]: 2025-11-24 20:42:27.839495898 +0000 UTC m=+0.159319044 container start 5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:42:27 compute-0 podman[293134]: 2025-11-24 20:42:27.870110017 +0000 UTC m=+0.189933173 container attach 5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:42:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:28.523+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:28 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]: {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:     "0": [
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:         {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "devices": [
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "/dev/loop3"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             ],
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_name": "ceph_lv0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_size": "21470642176",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "name": "ceph_lv0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "tags": {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cluster_name": "ceph",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.crush_device_class": "",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.encrypted": "0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osd_id": "0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.type": "block",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.vdo": "0"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             },
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "type": "block",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "vg_name": "ceph_vg0"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:         }
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:     ],
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:     "1": [
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:         {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "devices": [
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "/dev/loop4"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             ],
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_name": "ceph_lv1",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_size": "21470642176",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "name": "ceph_lv1",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "tags": {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cluster_name": "ceph",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.crush_device_class": "",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.encrypted": "0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osd_id": "1",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.type": "block",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.vdo": "0"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             },
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "type": "block",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "vg_name": "ceph_vg1"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:         }
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:     ],
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:     "2": [
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:         {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "devices": [
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "/dev/loop5"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             ],
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_name": "ceph_lv2",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_size": "21470642176",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "name": "ceph_lv2",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "tags": {
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.cluster_name": "ceph",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.crush_device_class": "",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.encrypted": "0",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osd_id": "2",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.type": "block",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:                 "ceph.vdo": "0"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             },
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "type": "block",
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:             "vg_name": "ceph_vg2"
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:         }
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]:     ]
Nov 24 20:42:28 compute-0 hopeful_shockley[293151]: }
Nov 24 20:42:28 compute-0 systemd[1]: libpod-5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9.scope: Deactivated successfully.
Nov 24 20:42:28 compute-0 podman[293134]: 2025-11-24 20:42:28.638505673 +0000 UTC m=+0.958328839 container died 5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e81a58523a4942a1086d0205a8de7d027d3f70b8a329777c67e343ad141a282f-merged.mount: Deactivated successfully.
Nov 24 20:42:28 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3067 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:28 compute-0 ceph-mon[75677]: pgmap v1811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:28.858+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:28 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:28 compute-0 podman[293134]: 2025-11-24 20:42:28.903455801 +0000 UTC m=+1.223278987 container remove 5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:42:28 compute-0 systemd[1]: libpod-conmon-5d7497a37ead9f6fa8370c3781566e3844f316b4ac0557d1dcc8334e2aaa64d9.scope: Deactivated successfully.
Nov 24 20:42:28 compute-0 sudo[293032]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:29 compute-0 sudo[293174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:29 compute-0 sudo[293174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:29 compute-0 sudo[293174]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:29 compute-0 sudo[293199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:42:29 compute-0 sudo[293199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:29 compute-0 sudo[293199]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:29 compute-0 sudo[293224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:29 compute-0 sudo[293224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:29 compute-0 sudo[293224]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:29 compute-0 sudo[293249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:42:29 compute-0 sudo[293249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:29.567+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:29 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.739116886 +0000 UTC m=+0.077687918 container create 73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:42:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:29 compute-0 systemd[1]: Started libpod-conmon-73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6.scope.
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.71085957 +0000 UTC m=+0.049430662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:42:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:42:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:29.837+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:29 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.840025716 +0000 UTC m=+0.178596808 container init 73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_burnell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.852428528 +0000 UTC m=+0.190999560 container start 73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_burnell, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.856512797 +0000 UTC m=+0.195083839 container attach 73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:42:29 compute-0 inspiring_burnell[293333]: 167 167
Nov 24 20:42:29 compute-0 systemd[1]: libpod-73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6.scope: Deactivated successfully.
Nov 24 20:42:29 compute-0 conmon[293333]: conmon 73be0a9368b4c23ae8ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6.scope/container/memory.events
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.86147352 +0000 UTC m=+0.200044552 container died 73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 24 20:42:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7dc8962ab93e4cb8da0e0c552482777613bb44784c5285d1f72d2e9453e1e4c-merged.mount: Deactivated successfully.
Nov 24 20:42:29 compute-0 podman[293317]: 2025-11-24 20:42:29.911515269 +0000 UTC m=+0.250086281 container remove 73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:42:29 compute-0 systemd[1]: libpod-conmon-73be0a9368b4c23ae8ed5b52c21709670f1200de18b252393be2cbecbee0d7b6.scope: Deactivated successfully.
Nov 24 20:42:30 compute-0 podman[293355]: 2025-11-24 20:42:30.168936765 +0000 UTC m=+0.071437942 container create 21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pare, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:42:30 compute-0 systemd[1]: Started libpod-conmon-21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36.scope.
Nov 24 20:42:30 compute-0 podman[293355]: 2025-11-24 20:42:30.143271399 +0000 UTC m=+0.045772606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:42:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7082c8344bd47fc439f541be4b2fa2d0f765760e356258c3d33146babf7328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7082c8344bd47fc439f541be4b2fa2d0f765760e356258c3d33146babf7328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7082c8344bd47fc439f541be4b2fa2d0f765760e356258c3d33146babf7328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7082c8344bd47fc439f541be4b2fa2d0f765760e356258c3d33146babf7328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:42:30 compute-0 podman[293355]: 2025-11-24 20:42:30.273874662 +0000 UTC m=+0.176375859 container init 21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:42:30 compute-0 podman[293355]: 2025-11-24 20:42:30.286286635 +0000 UTC m=+0.188787812 container start 21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pare, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:42:30 compute-0 podman[293355]: 2025-11-24 20:42:30.290143858 +0000 UTC m=+0.192645035 container attach 21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:42:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:30.573+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:30 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:30 compute-0 ceph-mon[75677]: pgmap v1812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:30.825+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:30 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:31 compute-0 zealous_pare[293371]: {
Nov 24 20:42:31 compute-0 zealous_pare[293371]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "osd_id": 2,
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "type": "bluestore"
Nov 24 20:42:31 compute-0 zealous_pare[293371]:     },
Nov 24 20:42:31 compute-0 zealous_pare[293371]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "osd_id": 1,
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "type": "bluestore"
Nov 24 20:42:31 compute-0 zealous_pare[293371]:     },
Nov 24 20:42:31 compute-0 zealous_pare[293371]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "osd_id": 0,
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:42:31 compute-0 zealous_pare[293371]:         "type": "bluestore"
Nov 24 20:42:31 compute-0 zealous_pare[293371]:     }
Nov 24 20:42:31 compute-0 zealous_pare[293371]: }
Nov 24 20:42:31 compute-0 systemd[1]: libpod-21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36.scope: Deactivated successfully.
Nov 24 20:42:31 compute-0 systemd[1]: libpod-21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36.scope: Consumed 1.176s CPU time.
Nov 24 20:42:31 compute-0 podman[293355]: 2025-11-24 20:42:31.45724279 +0000 UTC m=+1.359743977 container died 21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea7082c8344bd47fc439f541be4b2fa2d0f765760e356258c3d33146babf7328-merged.mount: Deactivated successfully.
Nov 24 20:42:31 compute-0 podman[293355]: 2025-11-24 20:42:31.536579952 +0000 UTC m=+1.439081129 container remove 21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:42:31 compute-0 systemd[1]: libpod-conmon-21aafa263f54e1bfd141f933329a5e50f1b1bcac561ef3d52aba2f1ccac69b36.scope: Deactivated successfully.
Nov 24 20:42:31 compute-0 sudo[293249]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:42:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:42:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:42:31 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:42:31 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 33de86ef-3c24-4fca-be18-dc1fff8aa502 does not exist
Nov 24 20:42:31 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 35114252-21d0-41e5-b30c-70de48623844 does not exist
Nov 24 20:42:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:31.612+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:31 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:31 compute-0 sudo[293418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:42:31 compute-0 sudo[293418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:31 compute-0 sudo[293418]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:31 compute-0 sudo[293443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:42:31 compute-0 sudo[293443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:42:31 compute-0 sudo[293443]: pam_unix(sudo:session): session closed for user root
Nov 24 20:42:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:31.805+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:31 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:32.568+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:32 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:42:32 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:42:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:32 compute-0 ceph-mon[75677]: pgmap v1813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:32.770+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:32 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:33.549+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:33 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:33.812+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:33 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:34.592+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:34 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:34 compute-0 ceph-mon[75677]: pgmap v1814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:34.785+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:34 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:42:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:42:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:35.596+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:35 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:35.793+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:35 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:36.571+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:36 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:36 compute-0 ceph-mon[75677]: pgmap v1815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:36.831+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:36 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:37.543+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:37 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:37.815+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:37 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:37 compute-0 podman[293468]: 2025-11-24 20:42:37.869506913 +0000 UTC m=+0.089657269 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 20:42:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:38.536+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:38 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:38 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:38 compute-0 ceph-mon[75677]: pgmap v1816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:38.776+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:38 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:39.557+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:39 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:39.754+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:39 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:40.579+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:40 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:42:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:42:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:42:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:42:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:42:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:40 compute-0 ceph-mon[75677]: pgmap v1817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:40.774+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:40 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:41.555+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:41 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:41.738+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:41 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:42.539+0000 7f2ca3ee7640 -1 osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:42 compute-0 ceph-osd[88624]: osd.0 169 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:42.703+0000 7f1a67169640 -1 osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:42 compute-0 ceph-osd[89640]: osd.1 169 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3081 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 24 20:42:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 24 20:42:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:42 compute-0 ceph-mon[75677]: pgmap v1818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:42:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 24 20:42:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:43.525+0000 7f2ca3ee7640 -1 osd.0 170 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:43 compute-0 ceph-osd[88624]: osd.0 170 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:43.751+0000 7f1a67169640 -1 osd.1 170 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:43 compute-0 ceph-osd[89640]: osd.1 170 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3081 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:43 compute-0 ceph-mon[75677]: osdmap e170: 3 total, 3 up, 3 in
Nov 24 20:42:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 614 B/s wr, 2 op/s
Nov 24 20:42:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:44.575+0000 7f2ca3ee7640 -1 osd.0 170 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:44 compute-0 ceph-osd[88624]: osd.0 170 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:44.735+0000 7f1a67169640 -1 osd.1 170 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:44 compute-0 ceph-osd[89640]: osd.1 170 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 24 20:42:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 24 20:42:44 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 24 20:42:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:45 compute-0 ceph-mon[75677]: pgmap v1820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 614 B/s wr, 2 op/s
Nov 24 20:42:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:45.573+0000 7f2ca3ee7640 -1 osd.0 170 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:45 compute-0 ceph-osd[88624]: osd.0 170 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:45.711+0000 7f1a67169640 -1 osd.1 171 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:45 compute-0 ceph-osd[89640]: osd.1 171 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:46 compute-0 ceph-mon[75677]: osdmap e171: 3 total, 3 up, 3 in
Nov 24 20:42:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.9 KiB/s wr, 47 op/s
Nov 24 20:42:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:46.608+0000 7f2ca3ee7640 -1 osd.0 171 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:46 compute-0 ceph-osd[88624]: osd.0 171 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:46.702+0000 7f1a67169640 -1 osd.1 171 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:46 compute-0 ceph-osd[89640]: osd.1 171 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:47 compute-0 ceph-mon[75677]: pgmap v1822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.9 KiB/s wr, 47 op/s
Nov 24 20:42:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:47.653+0000 7f2ca3ee7640 -1 osd.0 171 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:47 compute-0 ceph-osd[88624]: osd.0 171 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:47.673+0000 7f1a67169640 -1 osd.1 171 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:47 compute-0 ceph-osd[89640]: osd.1 171 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:47 compute-0 podman[293487]: 2025-11-24 20:42:47.874703084 +0000 UTC m=+0.094493688 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 24 20:42:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 24 20:42:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 24 20:42:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 24 20:42:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.2 KiB/s wr, 62 op/s
Nov 24 20:42:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:48.663+0000 7f2ca3ee7640 -1 osd.0 171 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:48 compute-0 ceph-osd[88624]: osd.0 171 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:48.674+0000 7f1a67169640 -1 osd.1 172 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:48 compute-0 ceph-osd[89640]: osd.1 172 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 20:42:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:49 compute-0 ceph-mon[75677]: osdmap e172: 3 total, 3 up, 3 in
Nov 24 20:42:49 compute-0 ceph-mon[75677]: pgmap v1824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.2 KiB/s wr, 62 op/s
Nov 24 20:42:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:49.657+0000 7f2ca3ee7640 -1 osd.0 172 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:49 compute-0 ceph-osd[88624]: osd.0 172 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:49.679+0000 7f1a67169640 -1 osd.1 172 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:49 compute-0 ceph-osd[89640]: osd.1 172 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 24 20:42:50 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 24 20:42:50 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 24 20:42:50 compute-0 ceph-mon[75677]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'default.rgw.log' : 6 ])
Nov 24 20:42:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.8 KiB/s wr, 60 op/s
Nov 24 20:42:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:50.634+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:50 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:50.714+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:50 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:51 compute-0 ceph-mon[75677]: osdmap e173: 3 total, 3 up, 3 in
Nov 24 20:42:51 compute-0 ceph-mon[75677]: pgmap v1826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.8 KiB/s wr, 60 op/s
Nov 24 20:42:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:51 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.291 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:42:51 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.294 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:42:51 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.296 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:42:51 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.298 165944 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpqk_b8ppq/privsep.sock']
Nov 24 20:42:51 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:51.595+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:51.729+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:51 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.063 165944 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.064 165944 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqk_b8ppq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.902 293511 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.909 293511 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.911 293511 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:51.912 293511 INFO oslo.privsep.daemon [-] privsep daemon running as pid 293511
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.068 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6a5c60-4e34-48bc-96ea-3c30b681ae54]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:42:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 6.3 KiB/s wr, 118 op/s
Nov 24 20:42:52 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:52.587+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.595 293511 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.595 293511 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.595 293511 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:42:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3086 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:52.698 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a6fd23-1fa4-4ec3-bdfc-c5179a4c3290]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:42:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:52.752+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:52 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:52 compute-0 podman[293516]: 2025-11-24 20:42:52.865970592 +0000 UTC m=+0.102512714 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:42:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:53 compute-0 ceph-mon[75677]: pgmap v1827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 6.3 KiB/s wr, 118 op/s
Nov 24 20:42:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:53 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3086 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:53 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:53.541+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:53.790+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:53 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 72 op/s
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:42:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
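[annotation] The mgr volumes module logs a scan/cleanup pair per connection pool, so the same two messages appear several times within one second. A sketch, assuming log text on stdin, that folds consecutive duplicate payloads into one line with a repeat count:

    import sys
    from itertools import groupby

    def payload(line: str) -> str:
        # Drop the "Mon DD HH:MM:SS host unit[pid]: " prefix, keep the message.
        return line.split(": ", 1)[-1]

    for msg, group in groupby(sys.stdin, key=payload):
        n = len(list(group))
        text = msg.rstrip()
        print(text if n == 1 else f"{text}  [x{n}]")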
Nov 24 20:42:54 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:54.528+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:54.761+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:54 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:55 compute-0 ceph-mon[75677]: pgmap v1828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 72 op/s
Nov 24 20:42:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:55 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:55.559+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:55.778+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:55 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:55 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:55.977 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 2001:db8::f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:42:55 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:55.979 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:42:55 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:55.980 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:42:55 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:55.981 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bc2f72-8a85-4f4e-94e5-d215b939307d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
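[annotation] The Port_Binding update above shows why the agent keeps re-evaluating the same metadata port: neutron:cidrs lost its IPv4 address between revision 3 and revision 6, and the later events in this window flip it back and forth. A sketch for tracing revision_number and cidrs across these events, assuming the agent log on stdin; the regexes mirror the fields printed in these lines, where the new row precedes the old one:

    import re
    import sys

    REV = re.compile(r"'neutron:revision_number': '(\d+)'")
    CIDRS = re.compile(r"'neutron:cidrs': '([^']*)'")

    for line in sys.stdin:
        if "Matched UPDATE: PortBindingUpdatedEvent" not in line:
            continue
        revs = REV.findall(line)     # [new, old]
        cidrs = CIDRS.findall(line)  # [new, old]
        if len(revs) == 2 and len(cidrs) == 2:
            print(f"rev {revs[1]} -> {revs[0]}: cidrs {cidrs[1]!r} -> {cidrs[0]!r}")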
Nov 24 20:42:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 71 op/s
Nov 24 20:42:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:56 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:56.566+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:56.742+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:56 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:57 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:57.089 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:42:57 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:57.091 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:42:57 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:57.092 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:42:57 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:42:57.093 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[df36dd9e-c1b3-4531-871e-a7974959445e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:42:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:57 compute-0 ceph-mon[75677]: pgmap v1829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 71 op/s
Nov 24 20:42:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:57 compute-0 ceph-osd[88624]: osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:57.537+0000 7f2ca3ee7640 -1 osd.0 173 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3096 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:42:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 24 20:42:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 24 20:42:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
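[annotation] Here the monitor prunes and publishes a new osdmap (e173 -> e174) even though membership is unchanged: 3 total, 3 up, 3 in. To inspect the current epoch and up/in counts directly, one could query the cluster; a sketch, assuming the ceph CLI and an admin keyring are available on this node:

    import json
    import subprocess

    # "ceph osd dump" reports the current osdmap; --format json makes it parseable.
    out = subprocess.run(
        ["ceph", "osd", "dump", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    osdmap = json.loads(out)
    print("epoch:", osdmap["epoch"])
    print("osds up/in:",
          sum(o["up"] for o in osdmap["osds"]),
          sum(o["in"] for o in osdmap["osds"]))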
Nov 24 20:42:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:57.756+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:57 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Nov 24 20:42:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:58 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3096 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:42:58 compute-0 ceph-mon[75677]: osdmap e174: 3 total, 3 up, 3 in
Nov 24 20:42:58 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:58.544+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:58.727+0000 7f1a67169640 -1 osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:58 compute-0 ceph-osd[89640]: osd.1 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:42:59 compute-0 ceph-mon[75677]: pgmap v1831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Nov 24 20:42:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:59 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:42:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:42:59.533+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:42:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:42:59.684+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:59 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:42:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Nov 24 20:43:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:00 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:00.543+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:00.643+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:00 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:01 compute-0 ceph-mon[75677]: pgmap v1832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Nov 24 20:43:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:01 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:01.569+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:01.600+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:01 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:01 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:01.751 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:01 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:01.752 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:43:01 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:01.753 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:43:01 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:01.754 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[6461f425-24c4-4ddb-86ec-cd0fcd887633]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:43:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 818 B/s wr, 5 op/s
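[annotation] The pgmap lines summarize placement-group states: 2 of the 305 PGs are active+clean+laggy, consistent with the blocked ops reported above. A small parser for the state counts, assuming the pgmap summary format shown in these lines:

    import re

    line = ("pgmap v1833: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail")

    total = int(re.search(r"(\d+) pgs:", line).group(1))
    states = {
        state: int(count)
        for count, state in re.findall(r"(\d+) ([a-z+_]+)(?:,|;)", line)
    }
    assert sum(states.values()) == total
    print(states)  # {'active+clean+laggy': 2, 'active+clean': 303}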
Nov 24 20:43:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:02 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:02.618+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:02.638+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:02 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3101 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:02.756 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:02.758 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:43:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:02.760 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:43:02 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:02.761 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[41f177bc-1653-480a-998e-4498b94208b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:43:03 compute-0 ceph-mon[75677]: pgmap v1833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 818 B/s wr, 5 op/s
Nov 24 20:43:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3101 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:03.615+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:03 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:03 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:03.633+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:04 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:04.606+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:04.616+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:04 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:05 compute-0 ceph-mon[75677]: pgmap v1834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:05.588+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:05 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:05 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:05.639+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:06.571+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:06 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:06 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:06.649+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:07 compute-0 ceph-mon[75677]: pgmap v1835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:07.574+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:07 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:07 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:07.625+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3107 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
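[annotation] The monitor's cache autotuner keeps restating the same split, since memory pressure has not changed. The byte values are easier to read in MiB; a quick conversion using the numbers from the line above (the per-bucket labels are inferred from the field names):

    MIB = 1024 * 1024

    sizes = {
        "cache_size": 1020054731,  # ~972.8 MiB overall target
        "inc_alloc": 343932928,    # 328 MiB, incremental-osdmap cache per the name
        "full_alloc": 348127232,   # 332 MiB, full-osdmap cache per the name
        "kv_alloc": 318767104,     # 304 MiB, key-value (RocksDB) cache per the name
    }
    for name, b in sizes.items():
        print(f"{name}: {b / MIB:.1f} MiB")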
Nov 24 20:43:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:08.252 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:08.253 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:43:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:08.254 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:43:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:08.255 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc1d386-6626-4699-8924-f25ce0840df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:43:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:08 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3107 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:08 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:08.586+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:08.623+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:08 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:08 compute-0 podman[293542]: 2025-11-24 20:43:08.839465855 +0000 UTC m=+0.069255854 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
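[annotation] podman emits one health_status event per healthcheck timer tick; per the config_data above, the test command is the /openstack/healthcheck script bind-mounted into the container. The same check can be triggered on demand; a sketch, assuming podman is on PATH and the container name matches this log:

    import subprocess

    # "podman healthcheck run" executes the container's configured healthcheck
    # and exits 0 for healthy, non-zero for unhealthy.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if result.returncode == 0 else "unhealthy")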
Nov 24 20:43:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:09.400 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:43:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:09.400 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:43:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:09.400 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
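[annotation] The three lockutils lines above are the standard acquire/held trace for one synchronized call: the ProcessMonitor takes the "_check_child_processes" lock, does its work, and releases it within a millisecond. A minimal sketch of how such traces are produced, assuming oslo.concurrency is installed; the function body here is illustrative, not the agent's code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # With debug logging enabled, entering and leaving this function emits
        # the 'acquired ... waited' and '"released" ... held' lines seen above.
        pass

    check_child_processes()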
Nov 24 20:43:09 compute-0 ceph-mon[75677]: pgmap v1836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:09 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:09.590+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:09.605+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:09 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:10 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:10.571+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:10.633+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:10 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:11 compute-0 ceph-mon[75677]: pgmap v1837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:11 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:11.611+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:11.620+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:11 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:12.189 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:12.190 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:43:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:12.191 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:43:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:12.192 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd11b05-7997-4aac-bc26-5039b4637b76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:43:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:12 compute-0 ceph-mon[75677]: pgmap v1838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:12 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:12.574+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:12.576+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:12 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3111 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:12.929 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:12 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:12.931 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:43:13 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:13 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:13 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3111 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:13 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:13.525+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:13.609+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:13 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:14 compute-0 ceph-mon[75677]: pgmap v1839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:14 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:14.500+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:14.648+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:14 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:15 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:15.491+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:15.629+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:15 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:15 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:15.933 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:43:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:43:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1132867409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:43:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:43:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1132867409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:43:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:16 compute-0 ceph-mon[75677]: pgmap v1840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1132867409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:43:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1132867409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:43:16 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:16.469+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:16.583+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:16 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:17 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:17.507+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:17.569+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:17 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3116 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3116 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:18 compute-0 ceph-mon[75677]: pgmap v1841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:18 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:18.521+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:18.605+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:18 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:18 compute-0 podman[293560]: 2025-11-24 20:43:18.830394252 +0000 UTC m=+0.066395747 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:43:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:19 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:19.501+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:19.644+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:19 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:19 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:19.955 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 2001:db8::f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 10.100.0.2 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:19 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:19.957 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:43:19 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:19.958 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:43:19 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:19.959 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[d4116419-7828-401c-a155-9ea0e49a9757]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:43:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:20 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:20.460+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:20 compute-0 ceph-mon[75677]: pgmap v1842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:20.608+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:20 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:21 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:21.504+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:21 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:21.561+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:21 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:22 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:22 compute-0 ceph-mon[75677]: pgmap v1843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:22.525+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:22 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:22.594+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:22 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:23 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:23 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:23.525+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:23 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:23.590+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:23 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:23 compute-0 podman[293584]: 2025-11-24 20:43:23.902513422 +0000 UTC m=+0.134814987 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:43:24
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', 'backups', 'default.rgw.log']
Nov 24 20:43:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:43:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:24.520+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:24 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:24 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:24 compute-0 ceph-mon[75677]: pgmap v1844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:24.633+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:24 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:25 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:25.555+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:25 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:25.667+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:25 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:26 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:26 compute-0 ceph-mon[75677]: pgmap v1845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:26.578+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:26 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:26.673+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:26 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:27.532+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:27 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:27 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:27.634+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:27 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3126 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:28.516+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:28 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:28 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:28 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3126 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:28 compute-0 ceph-mon[75677]: pgmap v1846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:28.646+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:28 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:29.557+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:29 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:29 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:29.649+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:29 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:30.546+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:30 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:30 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:30 compute-0 ceph-mon[75677]: pgmap v1847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:30.616+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:30 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:31.546+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:31 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:31 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:31.590+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:31 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:31 compute-0 sudo[293610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:31 compute-0 sudo[293610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:31 compute-0 sudo[293610]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:32 compute-0 sudo[293635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:43:32 compute-0 sudo[293635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:32 compute-0 sudo[293635]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:32 compute-0 sudo[293660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:32 compute-0 sudo[293660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:32 compute-0 sudo[293660]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:32 compute-0 sudo[293685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 20:43:32 compute-0 sudo[293685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:32.557+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:32 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:32 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:32 compute-0 ceph-mon[75677]: pgmap v1848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:32.624+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:32 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:32 compute-0 podman[293781]: 2025-11-24 20:43:32.789963942 +0000 UTC m=+0.081760818 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:43:32 compute-0 podman[293781]: 2025-11-24 20:43:32.903949692 +0000 UTC m=+0.195746528 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:43:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:33.521+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:33 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:33.592+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:33 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:33 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:33 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:33 compute-0 sudo[293685]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:43:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:43:33 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:33 compute-0 sudo[293941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:33 compute-0 sudo[293941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:33 compute-0 sudo[293941]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:33 compute-0 sudo[293966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:43:33 compute-0 sudo[293966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:34 compute-0 sudo[293966]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:34 compute-0 sudo[293991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:34 compute-0 sudo[293991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:34 compute-0 sudo[293991]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:34 compute-0 sudo[294016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:43:34 compute-0 sudo[294016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:34.491+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:34 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:34.569+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:34 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:34 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:34 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:34 compute-0 ceph-mon[75677]: pgmap v1849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:34 compute-0 sudo[294016]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:43:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:43:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:43:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:43:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:43:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0a5086c5-f959-4e6e-86f6-4a3a0dae06f4 does not exist
Nov 24 20:43:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fe998159-9b4c-44c9-a2ae-7f73ca533fdc does not exist
Nov 24 20:43:34 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 0935df65-511c-4809-9f0e-ca83a4dce63d does not exist
Nov 24 20:43:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:43:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:43:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:43:34 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:43:34 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:43:34 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:43:34 compute-0 sudo[294073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:34 compute-0 sudo[294073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:34 compute-0 sudo[294073]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:34 compute-0 sudo[294098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:43:34 compute-0 sudo[294098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:34 compute-0 sudo[294098]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:34 compute-0 sudo[294123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:34 compute-0 sudo[294123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:34 compute-0 sudo[294123]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:35 compute-0 sudo[294148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:43:35 compute-0 sudo[294148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:43:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.424850221 +0000 UTC m=+0.046947727 container create 38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:43:35 compute-0 systemd[1]: Started libpod-conmon-38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df.scope.
Nov 24 20:43:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.401518837 +0000 UTC m=+0.023616373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:43:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:35.499+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:35 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.516609106 +0000 UTC m=+0.138706652 container init 38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.528878084 +0000 UTC m=+0.150975620 container start 38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rhodes, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.533530499 +0000 UTC m=+0.155627995 container attach 38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:43:35 compute-0 agitated_rhodes[294228]: 167 167
Nov 24 20:43:35 compute-0 systemd[1]: libpod-38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df.scope: Deactivated successfully.
Nov 24 20:43:35 compute-0 conmon[294228]: conmon 38c384caec6213e87c97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df.scope/container/memory.events
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.536091407 +0000 UTC m=+0.158188963 container died 38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rhodes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-77377d63f3de5214845e7d7861edb7fa346b2b1f8a7e2f95fc190ec507ff008e-merged.mount: Deactivated successfully.
Nov 24 20:43:35 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:35.571+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:35 compute-0 podman[294213]: 2025-11-24 20:43:35.579377085 +0000 UTC m=+0.201474581 container remove 38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:43:35 compute-0 systemd[1]: libpod-conmon-38c384caec6213e87c972b7f7b8aea086595873a972f64e1c5ac37074ea352df.scope: Deactivated successfully.
Nov 24 20:43:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:35 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:43:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:43:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:43:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:43:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:43:35 compute-0 podman[294252]: 2025-11-24 20:43:35.804666243 +0000 UTC m=+0.047456781 container create b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:43:35 compute-0 systemd[1]: Started libpod-conmon-b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f.scope.
Nov 24 20:43:35 compute-0 podman[294252]: 2025-11-24 20:43:35.785460749 +0000 UTC m=+0.028251337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:43:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3199b6bdada9d54b7eafc4c14577b077c4d3b6e0d8aec5dabc1c62121fbb419d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3199b6bdada9d54b7eafc4c14577b077c4d3b6e0d8aec5dabc1c62121fbb419d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3199b6bdada9d54b7eafc4c14577b077c4d3b6e0d8aec5dabc1c62121fbb419d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3199b6bdada9d54b7eafc4c14577b077c4d3b6e0d8aec5dabc1c62121fbb419d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3199b6bdada9d54b7eafc4c14577b077c4d3b6e0d8aec5dabc1c62121fbb419d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:35 compute-0 podman[294252]: 2025-11-24 20:43:35.946966469 +0000 UTC m=+0.189757087 container init b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:43:35 compute-0 podman[294252]: 2025-11-24 20:43:35.955306342 +0000 UTC m=+0.198096890 container start b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_darwin, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:43:35 compute-0 podman[294252]: 2025-11-24 20:43:35.959072293 +0000 UTC m=+0.201862891 container attach b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_darwin, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 20:43:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:36.522+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:36 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:36 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:36.526+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:36 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:36 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:36 compute-0 ceph-mon[75677]: pgmap v1850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:37 compute-0 nostalgic_darwin[294268]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:43:37 compute-0 nostalgic_darwin[294268]: --> relative data size: 1.0
Nov 24 20:43:37 compute-0 nostalgic_darwin[294268]: --> All data devices are unavailable
Nov 24 20:43:37 compute-0 systemd[1]: libpod-b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f.scope: Deactivated successfully.
Nov 24 20:43:37 compute-0 podman[294252]: 2025-11-24 20:43:37.138663759 +0000 UTC m=+1.381454327 container died b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:43:37 compute-0 systemd[1]: libpod-b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f.scope: Consumed 1.118s CPU time.
Nov 24 20:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3199b6bdada9d54b7eafc4c14577b077c4d3b6e0d8aec5dabc1c62121fbb419d-merged.mount: Deactivated successfully.
Nov 24 20:43:37 compute-0 podman[294252]: 2025-11-24 20:43:37.209758041 +0000 UTC m=+1.452548579 container remove b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:43:37 compute-0 systemd[1]: libpod-conmon-b98c3202618c77082499a73631ab488578892f08495417ea0c4e495153651d6f.scope: Deactivated successfully.
Nov 24 20:43:37 compute-0 sudo[294148]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:37 compute-0 sudo[294307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:37 compute-0 sudo[294307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:37 compute-0 sudo[294307]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:37 compute-0 sudo[294332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:43:37 compute-0 sudo[294332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:37 compute-0 sudo[294332]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:37 compute-0 sudo[294357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:37.482+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:37 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:37 compute-0 sudo[294357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:37 compute-0 sudo[294357]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:37.531+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:37 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:37 compute-0 sudo[294382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:43:37 compute-0 sudo[294382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:37 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:37 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.033086687 +0000 UTC m=+0.054009595 container create d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:43:38 compute-0 systemd[1]: Started libpod-conmon-d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df.scope.
Nov 24 20:43:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.017018387 +0000 UTC m=+0.037941325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.130577755 +0000 UTC m=+0.151500693 container init d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.142685759 +0000 UTC m=+0.163608707 container start d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_liskov, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.14718966 +0000 UTC m=+0.168112668 container attach d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:43:38 compute-0 hungry_liskov[294463]: 167 167
Nov 24 20:43:38 compute-0 systemd[1]: libpod-d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df.scope: Deactivated successfully.
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.14905181 +0000 UTC m=+0.169974758 container died d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8a094befdc060c3559c0e6ef6ad34c71b44898456f7463451c17f57c156003-merged.mount: Deactivated successfully.
Nov 24 20:43:38 compute-0 podman[294447]: 2025-11-24 20:43:38.275040679 +0000 UTC m=+0.295963627 container remove d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:43:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:38 compute-0 systemd[1]: libpod-conmon-d630197f49120939aa40c4f84fe49e9b5517c9d5416aeb4da689ba0dff59e2df.scope: Deactivated successfully.
Nov 24 20:43:38 compute-0 podman[294487]: 2025-11-24 20:43:38.483658071 +0000 UTC m=+0.051179190 container create ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:43:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:38.516+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:38 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:38 compute-0 systemd[1]: Started libpod-conmon-ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279.scope.
Nov 24 20:43:38 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:38.538+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:43:38 compute-0 podman[294487]: 2025-11-24 20:43:38.466380409 +0000 UTC m=+0.033901538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fef5fa330f6588aae48fff4b93244932893e1e01c9a8f2f7961b9edc95e9e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fef5fa330f6588aae48fff4b93244932893e1e01c9a8f2f7961b9edc95e9e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fef5fa330f6588aae48fff4b93244932893e1e01c9a8f2f7961b9edc95e9e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fef5fa330f6588aae48fff4b93244932893e1e01c9a8f2f7961b9edc95e9e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:38 compute-0 podman[294487]: 2025-11-24 20:43:38.579151875 +0000 UTC m=+0.146673014 container init ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:43:38 compute-0 podman[294487]: 2025-11-24 20:43:38.586507923 +0000 UTC m=+0.154029042 container start ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 20:43:38 compute-0 podman[294487]: 2025-11-24 20:43:38.590230862 +0000 UTC m=+0.157751981 container attach ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:43:38 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:38 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:38 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:38 compute-0 ceph-mon[75677]: pgmap v1851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
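The SLOW_OPS update above says the oldest blocked request on osd.0/osd.1 has been pending for 3137 s, with the affected ops split between the 'vms' and 'default.rgw.log' pools. A minimal sketch for digging into them from this host, assuming cephadm is installed and using the fsid that appears throughout this log (the dump_* calls are admin-socket commands, so they must run against the node hosting each OSD):

    cephadm shell --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- ceph health detail
    # Dump the requests osd.0 currently counts as slow (admin socket, entered via the daemon's own container):
    cephadm shell --name osd.0 -- ceph daemon osd.0 dump_ops_in_flight
    cephadm shell --name osd.1 -- ceph daemon osd.1 dump_historic_slow_ops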
Nov 24 20:43:39 compute-0 wizardly_pare[294503]: {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:     "0": [
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:         {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "devices": [
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "/dev/loop3"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             ],
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_name": "ceph_lv0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_size": "21470642176",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "name": "ceph_lv0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "tags": {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cluster_name": "ceph",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.crush_device_class": "",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.encrypted": "0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osd_id": "0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.type": "block",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.vdo": "0"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             },
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "type": "block",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "vg_name": "ceph_vg0"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:         }
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:     ],
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:     "1": [
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:         {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "devices": [
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "/dev/loop4"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             ],
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_name": "ceph_lv1",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_size": "21470642176",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "name": "ceph_lv1",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "tags": {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cluster_name": "ceph",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.crush_device_class": "",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.encrypted": "0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osd_id": "1",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.type": "block",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.vdo": "0"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             },
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "type": "block",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "vg_name": "ceph_vg1"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:         }
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:     ],
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:     "2": [
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:         {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "devices": [
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "/dev/loop5"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             ],
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_name": "ceph_lv2",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_size": "21470642176",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "name": "ceph_lv2",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "tags": {
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.cluster_name": "ceph",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.crush_device_class": "",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.encrypted": "0",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osd_id": "2",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.type": "block",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:                 "ceph.vdo": "0"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             },
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "type": "block",
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:             "vg_name": "ceph_vg2"
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:         }
Nov 24 20:43:39 compute-0 wizardly_pare[294503]:     ]
Nov 24 20:43:39 compute-0 wizardly_pare[294503]: }
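The JSON block printed by the wizardly_pare container is consistent with `ceph-volume lvm list --format json`: a map keyed by OSD id, each entry carrying the backing LV, its physical device (/dev/loop3..5 here), and the ceph.* LV tags cephadm uses to reassemble OSDs on boot. A sketch for regenerating and summarizing the same report, assuming jq is available on the host:

    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path) \(.value[0].devices[0])"'
    # e.g. "osd.0 /dev/ceph_vg0/ceph_lv0 /dev/loop3" for the first entry above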
Nov 24 20:43:39 compute-0 systemd[1]: libpod-ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279.scope: Deactivated successfully.
Nov 24 20:43:39 compute-0 podman[294487]: 2025-11-24 20:43:39.409237882 +0000 UTC m=+0.976759041 container died ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fef5fa330f6588aae48fff4b93244932893e1e01c9a8f2f7961b9edc95e9e1f-merged.mount: Deactivated successfully.
Nov 24 20:43:39 compute-0 podman[294487]: 2025-11-24 20:43:39.480648383 +0000 UTC m=+1.048169512 container remove ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pare, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:43:39 compute-0 systemd[1]: libpod-conmon-ff1decda0873b7b2c094f26644a5a7910ad45d479c35afdf5710068658800279.scope: Deactivated successfully.
Nov 24 20:43:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:39.503+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:39 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:39 compute-0 sudo[294382]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:39 compute-0 podman[294513]: 2025-11-24 20:43:39.527385453 +0000 UTC m=+0.080488414 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
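The health_status=healthy event is podman's periodic healthcheck of the ovn_metadata_agent container (test: /openstack/healthcheck, per the config_data embedded above). A sketch for querying the same state by hand; note the inspect field name differs between podman versions (.State.Healthcheck on older releases, .State.Health on newer ones):

    podman healthcheck run ovn_metadata_agent && echo healthy
    podman inspect ovn_metadata_agent --format '{{.State.Healthcheck.Status}}'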
Nov 24 20:43:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:39.575+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:39 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:39 compute-0 sudo[294543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:39 compute-0 sudo[294543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:39 compute-0 sudo[294543]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:39 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:39 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:39 compute-0 sudo[294568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:43:39 compute-0 sudo[294568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:39 compute-0 sudo[294568]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:39 compute-0 sudo[294593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:39 compute-0 sudo[294593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:39 compute-0 sudo[294593]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:39 compute-0 sudo[294618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:43:39 compute-0 sudo[294618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
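The sudo line above shows how the cephadm mgr module executes host-level probes: it ships a content-addressed copy of itself to /var/lib/ceph/<fsid>/cephadm.<sha256>, then runs it as root with a pinned container image and a timeout; the short-lived containers in this section (wizardly_pare, angry_moore, inspiring_hugle) are exactly such runs. The same ceph-volume call can be replayed manually, with the paths and digest taken verbatim from the log:

    sudo /bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json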
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.242061382 +0000 UTC m=+0.048311604 container create 7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:43:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:40 compute-0 systemd[1]: Started libpod-conmon-7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b.scope.
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.220748512 +0000 UTC m=+0.026998744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:43:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.350192754 +0000 UTC m=+0.156443036 container init 7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.362353 +0000 UTC m=+0.168603212 container start 7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.365826583 +0000 UTC m=+0.172076855 container attach 7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 20:43:40 compute-0 angry_moore[294700]: 167 167
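The `167 167` printed by angry_moore looks like cephadm's uid/gid probe: before touching files under /var/lib/ceph it asks the image who owns its ceph directories, and 167:167 is the ceph user and group in upstream Ceph container images. A rough sketch of an equivalent probe (the exact path cephadm stats is an assumption here, not confirmed by this log):

    # assumption: /var/lib/ceph inside the image is owned by the ceph user
    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph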
Nov 24 20:43:40 compute-0 systemd[1]: libpod-7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b.scope: Deactivated successfully.
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.371787252 +0000 UTC m=+0.178037534 container died 7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec9393063bb7d83a775047a0995100a6caeb4ea99706e4e9fdbc0ce89e82bdd3-merged.mount: Deactivated successfully.
Nov 24 20:43:40 compute-0 podman[294684]: 2025-11-24 20:43:40.418286297 +0000 UTC m=+0.224536539 container remove 7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:43:40 compute-0 systemd[1]: libpod-conmon-7b59e8efc250f39a40f553bbacdf71cc6b0025917b649a266c92e63b5250703b.scope: Deactivated successfully.
Nov 24 20:43:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:40.507+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:40 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:40.528+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:40 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:43:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:43:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:43:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:43:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
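These mgr lines are the rbd_support module reloading its per-pool mirror-snapshot schedules (vms, volumes, backups, images). Notably, the op that osd.0 keeps reporting as slow is an omap read of rbd_trash_purge_schedule, i.e. the same module's trash-purge scheduler blocked on a laggy PG. A sketch for listing what the module is loading, assuming a client keyring with access to those pools:

    rbd mirror snapshot schedule ls --recursive
    rbd trash purge schedule ls --recursive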
Nov 24 20:43:40 compute-0 podman[294725]: 2025-11-24 20:43:40.638947449 +0000 UTC m=+0.066186241 container create 2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hugle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:43:40 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:40 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:40 compute-0 ceph-mon[75677]: pgmap v1852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:40 compute-0 systemd[1]: Started libpod-conmon-2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51.scope.
Nov 24 20:43:40 compute-0 podman[294725]: 2025-11-24 20:43:40.60833556 +0000 UTC m=+0.035574442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:43:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f168a307edf57b12089b9c376fd9c97d8275ded4a79800f859486b7a74c3201/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f168a307edf57b12089b9c376fd9c97d8275ded4a79800f859486b7a74c3201/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f168a307edf57b12089b9c376fd9c97d8275ded4a79800f859486b7a74c3201/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f168a307edf57b12089b9c376fd9c97d8275ded4a79800f859486b7a74c3201/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:43:40 compute-0 podman[294725]: 2025-11-24 20:43:40.792327972 +0000 UTC m=+0.219566844 container init 2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hugle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 20:43:40 compute-0 podman[294725]: 2025-11-24 20:43:40.803201024 +0000 UTC m=+0.230439846 container start 2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:43:40 compute-0 podman[294725]: 2025-11-24 20:43:40.810512619 +0000 UTC m=+0.237751431 container attach 2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hugle, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:43:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:41.471+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:41 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:41 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:41.507+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:41 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:41 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]: {
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "osd_id": 2,
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "type": "bluestore"
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:     },
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "osd_id": 1,
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "type": "bluestore"
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:     },
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "osd_id": 0,
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:         "type": "bluestore"
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]:     }
Nov 24 20:43:42 compute-0 inspiring_hugle[294742]: }
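This second JSON block is the `raw list --format json` result requested at 20:43:39: a map keyed by OSD fsid rather than OSD id, resolving each bluestore OSD to its device-mapper node. A sketch, again assuming jq, for flattening it into one line per OSD:

    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json \
      | jq -r 'to_entries[] | "osd.\(.value.osd_id) \(.value.device) \(.key)"'
    # e.g. "osd.2 /dev/mapper/ceph_vg2-ceph_lv2 720ccdfc-a888-49fd-ae51-8ab3d2ba9302"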
Nov 24 20:43:42 compute-0 systemd[1]: libpod-2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51.scope: Deactivated successfully.
Nov 24 20:43:42 compute-0 podman[294725]: 2025-11-24 20:43:42.114255887 +0000 UTC m=+1.541494719 container died 2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:43:42 compute-0 systemd[1]: libpod-2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51.scope: Consumed 1.269s CPU time.
Nov 24 20:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f168a307edf57b12089b9c376fd9c97d8275ded4a79800f859486b7a74c3201-merged.mount: Deactivated successfully.
Nov 24 20:43:42 compute-0 podman[294725]: 2025-11-24 20:43:42.181768023 +0000 UTC m=+1.609006815 container remove 2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:43:42 compute-0 systemd[1]: libpod-conmon-2bd0ac20039ae6a4ab771e92f7694e079131bf45a1eea205bcd9e5714f362f51.scope: Deactivated successfully.
Nov 24 20:43:42 compute-0 sudo[294618]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:43:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:43:42 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
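Once the scan completes, the mgr persists the result in the monitor's config-key store; the two handle_command/audit pairs above set mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. A sketch for inspecting the cached inventory, assuming jq:

    ceph config-key ls | grep 'mgr/cephadm/host.compute-0'
    ceph config-key get mgr/cephadm/host.compute-0.devices.0 | jq .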
Nov 24 20:43:42 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fa6e0eac-1288-418a-8a0a-7dec17bdd849 does not exist
Nov 24 20:43:42 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3cc05bf1-bbca-4f93-814a-809d1c2f190b does not exist
Nov 24 20:43:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:42 compute-0 sudo[294786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:43:42 compute-0 sudo[294786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:42 compute-0 sudo[294786]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:42 compute-0 sudo[294811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:43:42 compute-0 sudo[294811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:43:42 compute-0 sudo[294811]: pam_unix(sudo:session): session closed for user root
Nov 24 20:43:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:42.496+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:42 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:42 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:42.529+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:42 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:42 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:42 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:43:42 compute-0 ceph-mon[75677]: pgmap v1853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
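_set_new_cache_sizes is the monitor's periodic memory autotuning, splitting roughly 1 GB of cache here between incremental maps, full maps, and RocksDB. The knob it tunes against should be mon_memory_target (an attribution from the option's documented purpose, not from this log). A sketch for reading and adjusting it at runtime:

    ceph config get mon.compute-0 mon_memory_target
    # e.g. raise the target to 2 GiB for the whole mon class:
    ceph config set mon mon_memory_target 2147483648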
Nov 24 20:43:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:43.483+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:43 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:43 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:43.541+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:43 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:43 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:43 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:44 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:44.499+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:44.504+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:44 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:44 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:44 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:44 compute-0 ceph-mon[75677]: pgmap v1854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:45 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:45.489+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:45.533+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:45 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:45 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:45 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:46.495+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:46 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:46 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:46.517+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:46 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:46 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:46 compute-0 ceph-mon[75677]: pgmap v1855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:47.452+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:47 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:47 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:47.514+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:47 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:47 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:48.422+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:48 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:48 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:48.518+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:48 compute-0 ceph-mon[75677]: pgmap v1856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:48 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:48 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:49.411+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:49 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:49.481+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:49 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:49 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:49 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:49 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:49.880 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:26:dc 2001:db8:0:1:f816:3eff:fead:26dc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd969cbc-9d64-4c71-aac4-ed42562227fa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fbd4137f-635f-441e-aeaa-a3fd83d0e21b) old=Port_Binding(mac=['fa:16:3e:ad:26:dc 2001:db8::f816:3eff:fead:26dc'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fead:26dc/64', 'neutron:device_id': 'ovnmeta-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8acea282-7eae-4ece-adc3-81c7101656b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2bea49ff7444fcd88acdcb79885d823', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:43:49 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:49.882 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fbd4137f-635f-441e-aeaa-a3fd83d0e21b in datapath 8acea282-7eae-4ece-adc3-81c7101656b0 updated
Nov 24 20:43:49 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:49.886 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8acea282-7eae-4ece-adc3-81c7101656b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:43:49 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:43:49.888 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c12bed-4daa-4ebd-ab8a-bd032aabb85d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:43:49 compute-0 podman[294836]: 2025-11-24 20:43:49.903471084 +0000 UTC m=+0.112907731 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 20:43:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:50 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:50.422+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:50 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:50.515+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:50 compute-0 ceph-mon[75677]: pgmap v1857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:50 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:50 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:51.406+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:51 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:51 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:51.518+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:51 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:51 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:52.369+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:52 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:52 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:52.519+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3147 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:52 compute-0 ceph-mon[75677]: pgmap v1858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:52 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:52 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:52 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3147 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:53.406+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:53 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:53 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:53.515+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:53 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:53 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:43:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:43:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:54.438+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:54 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:54 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:54.478+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:54 compute-0 ceph-mon[75677]: pgmap v1859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:43:54 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:54 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:54 compute-0 podman[294856]: 2025-11-24 20:43:54.871293225 +0000 UTC m=+0.102497473 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 20:43:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:55.446+0000 7f1a67169640 -1 osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:55 compute-0 ceph-osd[89640]: osd.1 174 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:55 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:55.500+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 24 20:43:55 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:55 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 24 20:43:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 24 20:43:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 24 20:43:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:56.455+0000 7f1a67169640 -1 osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:56 compute-0 ceph-osd[89640]: osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:56 compute-0 ceph-osd[88624]: osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:56.507+0000 7f2ca3ee7640 -1 osd.0 174 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:56 compute-0 ceph-mon[75677]: osdmap e175: 3 total, 3 up, 3 in
Nov 24 20:43:56 compute-0 ceph-mon[75677]: pgmap v1861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 24 20:43:56 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:56 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:57.412+0000 7f1a67169640 -1 osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:57 compute-0 ceph-osd[89640]: osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:57 compute-0 ceph-osd[88624]: osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:57.487+0000 7f2ca3ee7640 -1 osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:43:57 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:57 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:57 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:43:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 24 20:43:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:58.373+0000 7f1a67169640 -1 osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:58 compute-0 ceph-osd[89640]: osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:58 compute-0 ceph-osd[88624]: osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:58.468+0000 7f2ca3ee7640 -1 osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:58 compute-0 ceph-mon[75677]: pgmap v1862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 24 20:43:58 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:58 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:43:59.396+0000 7f1a67169640 -1 osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:59 compute-0 ceph-osd[89640]: osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:43:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:59 compute-0 ceph-osd[88624]: osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:43:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:43:59.469+0000 7f2ca3ee7640 -1 osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:43:59 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:43:59 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 177 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 821 KiB/s wr, 18 op/s
Nov 24 20:44:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:00.403+0000 7f1a67169640 -1 osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:00 compute-0 ceph-osd[89640]: osd.1 175 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:00 compute-0 ceph-osd[88624]: osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:00.470+0000 7f2ca3ee7640 -1 osd.0 175 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 24 20:44:00 compute-0 ceph-mon[75677]: pgmap v1863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 177 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 821 KiB/s wr, 18 op/s
Nov 24 20:44:00 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:00 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 24 20:44:00 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 24 20:44:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:01.431+0000 7f1a67169640 -1 osd.1 176 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:01 compute-0 ceph-osd[89640]: osd.1 176 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:01 compute-0 ceph-osd[88624]: osd.0 176 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:01.514+0000 7f2ca3ee7640 -1 osd.0 176 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 24 20:44:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 24 20:44:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 24 20:44:01 compute-0 ceph-mon[75677]: osdmap e176: 3 total, 3 up, 3 in
Nov 24 20:44:01 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:01 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 257 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 14 MiB/s wr, 46 op/s
Nov 24 20:44:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:02.413+0000 7f1a67169640 -1 osd.1 177 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:02 compute-0 ceph-osd[89640]: osd.1 177 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:02 compute-0 ceph-osd[88624]: osd.0 177 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:02.555+0000 7f2ca3ee7640 -1 osd.0 177 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 24 20:44:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3162 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 24 20:44:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 24 20:44:02 compute-0 ceph-mon[75677]: osdmap e177: 3 total, 3 up, 3 in
Nov 24 20:44:02 compute-0 ceph-mon[75677]: pgmap v1866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 257 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 14 MiB/s wr, 46 op/s
Nov 24 20:44:02 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:02 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:03 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:44:03.384 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:44:03 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:44:03.386 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:44:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:03.444+0000 7f1a67169640 -1 osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:03 compute-0 ceph-osd[89640]: osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:03 compute-0 ceph-osd[88624]: osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:03.588+0000 7f2ca3ee7640 -1 osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:03 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3162 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:03 compute-0 ceph-mon[75677]: osdmap e178: 3 total, 3 up, 3 in
Nov 24 20:44:03 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:03 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 249 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 19 MiB/s wr, 57 op/s
Nov 24 20:44:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:04.403+0000 7f1a67169640 -1 osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:04 compute-0 ceph-osd[89640]: osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:04 compute-0 ceph-osd[88624]: osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:04.546+0000 7f2ca3ee7640 -1 osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:04 compute-0 ceph-mon[75677]: pgmap v1868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 249 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 19 MiB/s wr, 57 op/s
Nov 24 20:44:04 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:04 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:05.429+0000 7f1a67169640 -1 osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:05 compute-0 ceph-osd[89640]: osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:05 compute-0 ceph-osd[88624]: osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:05.524+0000 7f2ca3ee7640 -1 osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:05 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:05 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 17 MiB/s wr, 135 op/s
Nov 24 20:44:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:06.406+0000 7f1a67169640 -1 osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:06 compute-0 ceph-osd[89640]: osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:06 compute-0 ceph-osd[88624]: osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:06.562+0000 7f2ca3ee7640 -1 osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:06 compute-0 ceph-mon[75677]: pgmap v1869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 17 MiB/s wr, 135 op/s
Nov 24 20:44:06 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:06 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:07.453+0000 7f1a67169640 -1 osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:07 compute-0 ceph-osd[89640]: osd.1 178 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:07 compute-0 ceph-osd[88624]: osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:07.585+0000 7f2ca3ee7640 -1 osd.0 178 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 24 20:44:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 24 20:44:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 24 20:44:07 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:07 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:07 compute-0 ceph-mon[75677]: osdmap e179: 3 total, 3 up, 3 in
Nov 24 20:44:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.7 MiB/s wr, 106 op/s
Nov 24 20:44:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:08.406+0000 7f1a67169640 -1 osd.1 179 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:08 compute-0 ceph-osd[89640]: osd.1 179 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:08 compute-0 ceph-osd[88624]: osd.0 179 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:08.560+0000 7f2ca3ee7640 -1 osd.0 179 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 24 20:44:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 24 20:44:08 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 24 20:44:08 compute-0 ceph-mon[75677]: pgmap v1871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.7 MiB/s wr, 106 op/s
Nov 24 20:44:08 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:08 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:08 compute-0 ceph-mon[75677]: osdmap e180: 3 total, 3 up, 3 in
Nov 24 20:44:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:44:09.400 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:44:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:44:09.401 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:44:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:44:09.401 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:44:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:09.417+0000 7f1a67169640 -1 osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:09 compute-0 ceph-osd[89640]: osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:09 compute-0 ceph-osd[88624]: osd.0 179 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:09.566+0000 7f2ca3ee7640 -1 osd.0 179 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:09 compute-0 podman[294884]: 2025-11-24 20:44:09.860143165 +0000 UTC m=+0.084757978 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 20:44:09 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:09 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 MiB/s wr, 92 op/s
Nov 24 20:44:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:10.372+0000 7f1a67169640 -1 osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:10 compute-0 ceph-osd[89640]: osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:10 compute-0 ceph-osd[88624]: osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:10.552+0000 7f2ca3ee7640 -1 osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:10 compute-0 ceph-mon[75677]: pgmap v1873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 MiB/s wr, 92 op/s
Nov 24 20:44:10 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:10 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:11.338+0000 7f1a67169640 -1 osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:11 compute-0 ceph-osd[89640]: osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:11 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:44:11.388 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:44:11 compute-0 ceph-osd[88624]: osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:11.521+0000 7f2ca3ee7640 -1 osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:11 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:11 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.7 KiB/s wr, 81 op/s
Nov 24 20:44:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:12.350+0000 7f1a67169640 -1 osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:12 compute-0 ceph-osd[89640]: osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:12 compute-0 ceph-osd[88624]: osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:12.485+0000 7f2ca3ee7640 -1 osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3167 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:12 compute-0 ceph-mon[75677]: pgmap v1874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.7 KiB/s wr, 81 op/s
Nov 24 20:44:12 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:12 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:12 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3167 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:13.381+0000 7f1a67169640 -1 osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:13 compute-0 ceph-osd[89640]: osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:13 compute-0 ceph-osd[88624]: osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:13.522+0000 7f2ca3ee7640 -1 osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:14 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:14 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Nov 24 20:44:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:14.373+0000 7f1a67169640 -1 osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:14 compute-0 ceph-osd[89640]: osd.1 180 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:14 compute-0 ceph-osd[88624]: osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:14.539+0000 7f2ca3ee7640 -1 osd.0 180 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 24 20:44:15 compute-0 ceph-mon[75677]: pgmap v1875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Nov 24 20:44:15 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:15 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 24 20:44:15 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 24 20:44:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:15.353+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:15 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:15.513+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:15 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:16 compute-0 ceph-mon[75677]: osdmap e181: 3 total, 3 up, 3 in
Nov 24 20:44:16 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:16 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.4 KiB/s wr, 48 op/s
Nov 24 20:44:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:16.352+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:16 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:44:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/74951687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:44:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:44:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/74951687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:44:16 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:16.464+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:17 compute-0 ceph-mon[75677]: pgmap v1877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.4 KiB/s wr, 48 op/s
Nov 24 20:44:17 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/74951687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:44:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/74951687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:44:17 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:17.392+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:17 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:17 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:17.464+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:18 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:18 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:18 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 40 op/s
Nov 24 20:44:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:18.373+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:18 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:18 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:18.447+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:19 compute-0 ceph-mon[75677]: pgmap v1878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 40 op/s
Nov 24 20:44:19 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:19 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:19.376+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:19 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:19 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:19.436+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:20 compute-0 ceph-mon[75677]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'default.rgw.log' : 19 ])
Nov 24 20:44:20 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 39 op/s
Nov 24 20:44:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:20.396+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:20 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:20.477+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:20 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:21.423+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:21 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:21.498+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:21 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:21 compute-0 ceph-mon[75677]: pgmap v1879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 39 op/s
Nov 24 20:44:21 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:21 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:21 compute-0 podman[294903]: 2025-11-24 20:44:21.609569146 +0000 UTC m=+0.077699059 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:44:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 20:44:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:22.416+0000 7f1a67169640 -1 osd.1 181 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:22 compute-0 ceph-osd[89640]: osd.1 181 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:22 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:22 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:22 compute-0 ceph-mon[75677]: pgmap v1880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 20:44:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:22.535+0000 7f2ca3ee7640 -1 osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:22 compute-0 ceph-osd[88624]: osd.0 181 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 24 20:44:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 24 20:44:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 24 20:44:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:23.409+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:23.487+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:23 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:23 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:23 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:23 compute-0 ceph-mon[75677]: osdmap e182: 3 total, 3 up, 3 in
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 24 20:44:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:24.426+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:44:24
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root']
Nov 24 20:44:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:44:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:24.528+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:24 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:24 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:24 compute-0 ceph-mon[75677]: pgmap v1882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 24 20:44:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:25.434+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:25.514+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:25 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:25 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:25 compute-0 podman[294926]: 2025-11-24 20:44:25.896354207 +0000 UTC m=+0.126035683 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:44:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Nov 24 20:44:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:26.406+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:26.479+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:26 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:26 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:26 compute-0 ceph-mon[75677]: pgmap v1883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Nov 24 20:44:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:27.387+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:27.515+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3187 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:27 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:27 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Nov 24 20:44:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:28.432+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:28.520+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:28 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:28 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3187 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:28 compute-0 ceph-mon[75677]: pgmap v1884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Nov 24 20:44:28 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:28 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:29.405+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:29.570+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:29 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:29 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 20:44:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:30.435+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:30.525+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:30 compute-0 ceph-mon[75677]: pgmap v1885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 24 20:44:30 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:30 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:31.444+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:31.522+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:31 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:31 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:32.460+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:32.512+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:32 compute-0 ceph-mon[75677]: pgmap v1886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:32 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:32 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:33.471+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:33.516+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:33 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:33 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:33 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:34.487+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:34.538+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:34 compute-0 ceph-mon[75677]: pgmap v1887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:34 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:34 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:44:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:44:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:35.456+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:35.572+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:35 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:35 compute-0 ceph-mon[75677]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 13 ])
Nov 24 20:44:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:36.435+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:36.591+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:36 compute-0 ceph-mon[75677]: pgmap v1888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:36 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:36 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:37 compute-0 sshd-session[294952]: Invalid user jacob from 182.93.7.194 port 54022
Nov 24 20:44:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:37.395+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:37 compute-0 sshd-session[294952]: Received disconnect from 182.93.7.194 port 54022:11: Bye Bye [preauth]
Nov 24 20:44:37 compute-0 sshd-session[294952]: Disconnected from invalid user jacob 182.93.7.194 port 54022 [preauth]
Nov 24 20:44:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:37.583+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:38 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:38 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:38.379+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:38.560+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:39 compute-0 ceph-mon[75677]: pgmap v1889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:39 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:39 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:39.354+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:39.540+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:40 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:40 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:40.402+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:40.520+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:44:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:44:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:44:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:44:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:44:40 compute-0 podman[294954]: 2025-11-24 20:44:40.828023912 +0000 UTC m=+0.064841735 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:44:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 3197 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:41 compute-0 ceph-mon[75677]: pgmap v1890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:41 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:41 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.050922) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017081051019, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2391, "num_deletes": 256, "total_data_size": 2992694, "memory_usage": 3043824, "flush_reason": "Manual Compaction"}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017081076228, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2922261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51727, "largest_seqno": 54117, "table_properties": {"data_size": 2911863, "index_size": 6124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 28265, "raw_average_key_size": 22, "raw_value_size": 2888235, "raw_average_value_size": 2288, "num_data_blocks": 264, "num_entries": 1262, "num_filter_entries": 1262, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764016923, "oldest_key_time": 1764016923, "file_creation_time": 1764017081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 25364 microseconds, and 13169 cpu microseconds.
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.076300) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2922261 bytes OK
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.076330) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.077474) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.077496) EVENT_LOG_v1 {"time_micros": 1764017081077488, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.077520) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2981881, prev total WAL file size 2981881, number of live WAL files 2.
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.079048) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2853KB)], [110(8968KB)]
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017081079123, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 12105754, "oldest_snapshot_seqno": -1}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 12643 keys, 10637909 bytes, temperature: kUnknown
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017081165760, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 10637909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10567363, "index_size": 37858, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31621, "raw_key_size": 344509, "raw_average_key_size": 27, "raw_value_size": 10349601, "raw_average_value_size": 818, "num_data_blocks": 1410, "num_entries": 12643, "num_filter_entries": 12643, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.166024) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10637909 bytes
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.168020) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.6 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 13167, records dropped: 524 output_compression: NoCompression
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.168038) EVENT_LOG_v1 {"time_micros": 1764017081168029, "job": 66, "event": "compaction_finished", "compaction_time_micros": 86714, "compaction_time_cpu_micros": 51374, "output_level": 6, "num_output_files": 1, "total_output_size": 10637909, "num_input_records": 13167, "num_output_records": 12643, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017081168884, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017081172442, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.078903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.172507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.172512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.172513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.172522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:44:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:44:41.172524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:44:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:41.361+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:41.508+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:42 compute-0 ceph-mon[75677]: Health check update: 14 slow ops, oldest one blocked for 3197 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:42 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:42 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:42.322+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:42.524+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:42 compute-0 sudo[294973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:42 compute-0 sudo[294973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:42 compute-0 sudo[294973]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:42 compute-0 sudo[294998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:44:42 compute-0 sudo[294998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:42 compute-0 sudo[294998]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:42 compute-0 sudo[295023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:42 compute-0 sudo[295023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:42 compute-0 sudo[295023]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:42 compute-0 sudo[295048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:44:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:42 compute-0 sudo[295048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:43 compute-0 ceph-mon[75677]: pgmap v1891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:43 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:43 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:43.315+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:43 compute-0 sudo[295048]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:44:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:44:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:44:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:44:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:44:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:44:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev df7eefb1-3f39-450b-9d0d-7b1e83c83c53 does not exist
Nov 24 20:44:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c25769ae-0c87-449a-913a-0a3a7f04c594 does not exist
Nov 24 20:44:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a38b4dba-2ae7-465d-a544-ad198301d6f9 does not exist
Nov 24 20:44:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:44:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:44:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:44:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:44:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:44:43 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:44:43 compute-0 sudo[295104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:43.481+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:43 compute-0 sudo[295104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:43 compute-0 sudo[295104]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:43 compute-0 sudo[295129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:44:43 compute-0 sudo[295129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:43 compute-0 sudo[295129]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:43 compute-0 sudo[295154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:43 compute-0 sudo[295154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:43 compute-0 sudo[295154]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:43 compute-0 sudo[295179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:44:43 compute-0 sudo[295179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:44 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:44:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:44:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:44:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:44:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:44:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:44:44 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.230408833 +0000 UTC m=+0.121869881 container create 54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.146895689 +0000 UTC m=+0.038356777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:44:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:44.267+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:44 compute-0 systemd[1]: Started libpod-conmon-54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0.scope.
Nov 24 20:44:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.457499007 +0000 UTC m=+0.348960025 container init 54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.465909302 +0000 UTC m=+0.357370350 container start 54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.470183707 +0000 UTC m=+0.361644735 container attach 54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:44:44 compute-0 relaxed_nightingale[295260]: 167 167
Nov 24 20:44:44 compute-0 systemd[1]: libpod-54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0.scope: Deactivated successfully.
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.475479969 +0000 UTC m=+0.366941017 container died 54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:44:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:44.501+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-950f167ae075879d484a83bc71cfb71fcec7af041be1c4b5e929272ba6d8a3fb-merged.mount: Deactivated successfully.
Nov 24 20:44:44 compute-0 podman[295243]: 2025-11-24 20:44:44.523974456 +0000 UTC m=+0.415435464 container remove 54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:44:44 compute-0 systemd[1]: libpod-conmon-54bce2bd41f609a3a0f93a13c5d381e92bb1c26254749f50b843d7b1f1e606f0.scope: Deactivated successfully.
Nov 24 20:44:44 compute-0 podman[295284]: 2025-11-24 20:44:44.68748531 +0000 UTC m=+0.050464261 container create ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:44:44 compute-0 systemd[1]: Started libpod-conmon-ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c.scope.
Nov 24 20:44:44 compute-0 podman[295284]: 2025-11-24 20:44:44.66614476 +0000 UTC m=+0.029123741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:44:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69435815876ac80ea2cd1e1b2d8bc8f681fe41721a377d6bc1fa1a9a338ebe1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69435815876ac80ea2cd1e1b2d8bc8f681fe41721a377d6bc1fa1a9a338ebe1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69435815876ac80ea2cd1e1b2d8bc8f681fe41721a377d6bc1fa1a9a338ebe1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69435815876ac80ea2cd1e1b2d8bc8f681fe41721a377d6bc1fa1a9a338ebe1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69435815876ac80ea2cd1e1b2d8bc8f681fe41721a377d6bc1fa1a9a338ebe1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:44 compute-0 podman[295284]: 2025-11-24 20:44:44.808866648 +0000 UTC m=+0.171845649 container init ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:44:44 compute-0 podman[295284]: 2025-11-24 20:44:44.822219405 +0000 UTC m=+0.185198376 container start ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 20:44:44 compute-0 podman[295284]: 2025-11-24 20:44:44.827190668 +0000 UTC m=+0.190169699 container attach ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:44:45 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:45 compute-0 ceph-mon[75677]: pgmap v1892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:45 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:45.289+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:45.530+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:46 compute-0 vigorous_solomon[295300]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:44:46 compute-0 vigorous_solomon[295300]: --> relative data size: 1.0
Nov 24 20:44:46 compute-0 vigorous_solomon[295300]: --> All data devices are unavailable
Nov 24 20:44:46 compute-0 systemd[1]: libpod-ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c.scope: Deactivated successfully.
Nov 24 20:44:46 compute-0 podman[295284]: 2025-11-24 20:44:46.069248086 +0000 UTC m=+1.432227057 container died ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_solomon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:44:46 compute-0 systemd[1]: libpod-ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c.scope: Consumed 1.214s CPU time.
Nov 24 20:44:46 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:46 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-69435815876ac80ea2cd1e1b2d8bc8f681fe41721a377d6bc1fa1a9a338ebe1b-merged.mount: Deactivated successfully.
Nov 24 20:44:46 compute-0 podman[295284]: 2025-11-24 20:44:46.137208594 +0000 UTC m=+1.500187525 container remove ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 20:44:46 compute-0 systemd[1]: libpod-conmon-ed8d260605cb3f0a51c60546b6a1a45921aa236cb5d16df23268331200ee666c.scope: Deactivated successfully.
Nov 24 20:44:46 compute-0 sudo[295179]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:46.280+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:46 compute-0 sudo[295340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:46 compute-0 sudo[295340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:46 compute-0 sudo[295340]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:46 compute-0 sudo[295365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:44:46 compute-0 sudo[295365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:46 compute-0 sudo[295365]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:46 compute-0 sudo[295390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:46 compute-0 sudo[295390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:46 compute-0 sudo[295390]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:46 compute-0 sudo[295415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:44:46 compute-0 sudo[295415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:46.528+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:46 compute-0 podman[295479]: 2025-11-24 20:44:46.956760149 +0000 UTC m=+0.053803171 container create 33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:44:46 compute-0 systemd[1]: Started libpod-conmon-33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237.scope.
Nov 24 20:44:47 compute-0 podman[295479]: 2025-11-24 20:44:46.930039994 +0000 UTC m=+0.027083076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:44:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:44:47 compute-0 podman[295479]: 2025-11-24 20:44:47.062929888 +0000 UTC m=+0.159972960 container init 33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:44:47 compute-0 podman[295479]: 2025-11-24 20:44:47.07454693 +0000 UTC m=+0.171589962 container start 33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_faraday, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:44:47 compute-0 podman[295479]: 2025-11-24 20:44:47.078626629 +0000 UTC m=+0.175669711 container attach 33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:44:47 compute-0 quizzical_faraday[295496]: 167 167
Nov 24 20:44:47 compute-0 systemd[1]: libpod-33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237.scope: Deactivated successfully.
Nov 24 20:44:47 compute-0 podman[295479]: 2025-11-24 20:44:47.082967225 +0000 UTC m=+0.180010247 container died 33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_faraday, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:44:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 3207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:47 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:47 compute-0 ceph-mon[75677]: pgmap v1893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:47 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e8f1a25aafa3387878205e0d08f8b8c8ef9a23735b14a78cebfd328204ef27c-merged.mount: Deactivated successfully.
Nov 24 20:44:47 compute-0 podman[295479]: 2025-11-24 20:44:47.138545571 +0000 UTC m=+0.235588603 container remove 33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:44:47 compute-0 systemd[1]: libpod-conmon-33df4bb50b0604275822b5f9f55b04a0cac683c784234572a5259b2f0ed50237.scope: Deactivated successfully.
Nov 24 20:44:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:47.277+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:47 compute-0 podman[295520]: 2025-11-24 20:44:47.377953776 +0000 UTC m=+0.067637580 container create 6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:44:47 compute-0 systemd[1]: Started libpod-conmon-6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce.scope.
Nov 24 20:44:47 compute-0 podman[295520]: 2025-11-24 20:44:47.350889992 +0000 UTC m=+0.040573876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:44:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c86cc156d143f93a0d88ce14987879981067eee34d0655f81a8b5d217be962/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c86cc156d143f93a0d88ce14987879981067eee34d0655f81a8b5d217be962/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c86cc156d143f93a0d88ce14987879981067eee34d0655f81a8b5d217be962/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c86cc156d143f93a0d88ce14987879981067eee34d0655f81a8b5d217be962/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:47 compute-0 podman[295520]: 2025-11-24 20:44:47.491827513 +0000 UTC m=+0.181511407 container init 6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 20:44:47 compute-0 podman[295520]: 2025-11-24 20:44:47.511672184 +0000 UTC m=+0.201356008 container start 6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:44:47 compute-0 podman[295520]: 2025-11-24 20:44:47.516766829 +0000 UTC m=+0.206450663 container attach 6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:44:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:47.525+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:48 compute-0 ceph-mon[75677]: Health check update: 14 slow ops, oldest one blocked for 3207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:48 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:48 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:48 compute-0 silly_sammet[295536]: {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:     "0": [
Nov 24 20:44:48 compute-0 silly_sammet[295536]:         {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "devices": [
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "/dev/loop3"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             ],
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_name": "ceph_lv0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_size": "21470642176",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "name": "ceph_lv0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "tags": {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cluster_name": "ceph",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.crush_device_class": "",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.encrypted": "0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osd_id": "0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.type": "block",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.vdo": "0"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             },
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "type": "block",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "vg_name": "ceph_vg0"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:         }
Nov 24 20:44:48 compute-0 silly_sammet[295536]:     ],
Nov 24 20:44:48 compute-0 silly_sammet[295536]:     "1": [
Nov 24 20:44:48 compute-0 silly_sammet[295536]:         {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "devices": [
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "/dev/loop4"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             ],
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_name": "ceph_lv1",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_size": "21470642176",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "name": "ceph_lv1",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "tags": {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cluster_name": "ceph",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.crush_device_class": "",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.encrypted": "0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osd_id": "1",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.type": "block",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.vdo": "0"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             },
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "type": "block",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "vg_name": "ceph_vg1"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:         }
Nov 24 20:44:48 compute-0 silly_sammet[295536]:     ],
Nov 24 20:44:48 compute-0 silly_sammet[295536]:     "2": [
Nov 24 20:44:48 compute-0 silly_sammet[295536]:         {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "devices": [
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "/dev/loop5"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             ],
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_name": "ceph_lv2",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_size": "21470642176",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "name": "ceph_lv2",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "tags": {
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.cluster_name": "ceph",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.crush_device_class": "",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.encrypted": "0",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osd_id": "2",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.type": "block",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:                 "ceph.vdo": "0"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             },
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "type": "block",
Nov 24 20:44:48 compute-0 silly_sammet[295536]:             "vg_name": "ceph_vg2"
Nov 24 20:44:48 compute-0 silly_sammet[295536]:         }
Nov 24 20:44:48 compute-0 silly_sammet[295536]:     ]
Nov 24 20:44:48 compute-0 silly_sammet[295536]: }
Nov 24 20:44:48 compute-0 systemd[1]: libpod-6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce.scope: Deactivated successfully.
Nov 24 20:44:48 compute-0 conmon[295536]: conmon 6613228f31167b1e0ca6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce.scope/container/memory.events
Nov 24 20:44:48 compute-0 podman[295520]: 2025-11-24 20:44:48.306561498 +0000 UTC m=+0.996245292 container died 6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:44:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:48.321+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-58c86cc156d143f93a0d88ce14987879981067eee34d0655f81a8b5d217be962-merged.mount: Deactivated successfully.
Nov 24 20:44:48 compute-0 podman[295520]: 2025-11-24 20:44:48.367506699 +0000 UTC m=+1.057190493 container remove 6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:44:48 compute-0 systemd[1]: libpod-conmon-6613228f31167b1e0ca6155153f8a8d3d9c037c7cce07c5103f944a4de52edce.scope: Deactivated successfully.
Nov 24 20:44:48 compute-0 sudo[295415]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:48.508+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:48 compute-0 sudo[295555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:48 compute-0 sudo[295555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:48 compute-0 sudo[295555]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:48 compute-0 sudo[295580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:44:48 compute-0 sudo[295580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:48 compute-0 sudo[295580]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:48 compute-0 sudo[295605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:48 compute-0 sudo[295605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:48 compute-0 sudo[295605]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:48 compute-0 sudo[295630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:44:48 compute-0 sudo[295630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:49 compute-0 ceph-mon[75677]: pgmap v1894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:49 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:49 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.239184428 +0000 UTC m=+0.056161484 container create 405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:44:49 compute-0 systemd[1]: Started libpod-conmon-405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879.scope.
Nov 24 20:44:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:49.307+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.219711407 +0000 UTC m=+0.036688453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:44:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.338640439 +0000 UTC m=+0.155617505 container init 405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.350826255 +0000 UTC m=+0.167803311 container start 405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.354174404 +0000 UTC m=+0.171151450 container attach 405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:44:49 compute-0 unruffled_chatterjee[295713]: 167 167
Nov 24 20:44:49 compute-0 systemd[1]: libpod-405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879.scope: Deactivated successfully.
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.359345213 +0000 UTC m=+0.176322269 container died 405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f45726d27913a464eb1957cac9d476febc3154db90f55e846e055dd0f27a00e7-merged.mount: Deactivated successfully.
Nov 24 20:44:49 compute-0 podman[295697]: 2025-11-24 20:44:49.40897041 +0000 UTC m=+0.225947456 container remove 405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:44:49 compute-0 systemd[1]: libpod-conmon-405df29f77cfc38b5e44f5a5b822b52fc70148773f824a026133e1e7c48e3879.scope: Deactivated successfully.
Nov 24 20:44:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:49.509+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:49 compute-0 podman[295737]: 2025-11-24 20:44:49.607157083 +0000 UTC m=+0.055713112 container create 594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 20:44:49 compute-0 systemd[1]: Started libpod-conmon-594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a.scope.
Nov 24 20:44:49 compute-0 podman[295737]: 2025-11-24 20:44:49.579711978 +0000 UTC m=+0.028268037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:44:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3cb9ba7f6ff389777f5cf4a50ff928bde370275bcceb126424a2680ed69e7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3cb9ba7f6ff389777f5cf4a50ff928bde370275bcceb126424a2680ed69e7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3cb9ba7f6ff389777f5cf4a50ff928bde370275bcceb126424a2680ed69e7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3cb9ba7f6ff389777f5cf4a50ff928bde370275bcceb126424a2680ed69e7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:44:49 compute-0 podman[295737]: 2025-11-24 20:44:49.720429272 +0000 UTC m=+0.168985311 container init 594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:44:49 compute-0 podman[295737]: 2025-11-24 20:44:49.732983689 +0000 UTC m=+0.181539718 container start 594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:44:49 compute-0 podman[295737]: 2025-11-24 20:44:49.737489959 +0000 UTC m=+0.186045978 container attach 594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:44:50 compute-0 ceph-mon[75677]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'default.rgw.log' : 10 ])
Nov 24 20:44:50 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:50.272+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:50.486+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]: {
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "osd_id": 2,
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "type": "bluestore"
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:     },
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "osd_id": 1,
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "type": "bluestore"
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:     },
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "osd_id": 0,
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:         "type": "bluestore"
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]:     }
Nov 24 20:44:50 compute-0 suspicious_montalcini[295753]: }
Nov 24 20:44:50 compute-0 systemd[1]: libpod-594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a.scope: Deactivated successfully.
Nov 24 20:44:50 compute-0 systemd[1]: libpod-594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a.scope: Consumed 1.164s CPU time.
Nov 24 20:44:50 compute-0 podman[295786]: 2025-11-24 20:44:50.940561884 +0000 UTC m=+0.030835896 container died 594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_montalcini, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 20:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3cb9ba7f6ff389777f5cf4a50ff928bde370275bcceb126424a2680ed69e7b-merged.mount: Deactivated successfully.
Nov 24 20:44:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:51 compute-0 ceph-mon[75677]: pgmap v1895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:51 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:51 compute-0 podman[295786]: 2025-11-24 20:44:51.276733047 +0000 UTC m=+0.367007019 container remove 594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:44:51 compute-0 systemd[1]: libpod-conmon-594a1bf3f4284db6c69c63e805c2a18500cce6361802aaff2ec57b7398e3a56a.scope: Deactivated successfully.
Nov 24 20:44:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:51.304+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:51 compute-0 sudo[295630]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:44:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:44:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:44:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:44:51 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b40e2821-271c-4e2d-9c56-250432a0ff2c does not exist
Nov 24 20:44:51 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3da8cf6e-f4d3-47ba-98e4-b428552c3ddb does not exist
Nov 24 20:44:51 compute-0 sudo[295802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:44:51 compute-0 sudo[295802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:51 compute-0 sudo[295802]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:51.535+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:51 compute-0 sudo[295827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:44:51 compute-0 sudo[295827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:44:51 compute-0 sudo[295827]: pam_unix(sudo:session): session closed for user root
Nov 24 20:44:51 compute-0 podman[295852]: 2025-11-24 20:44:51.840452118 +0000 UTC m=+0.065454843 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:44:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:52.304+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:44:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:44:52 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 24 slow ops, oldest one blocked for 3212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:52.539+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:53.256+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:53 compute-0 ceph-mon[75677]: pgmap v1896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:53 compute-0 ceph-mon[75677]: Health check update: 24 slow ops, oldest one blocked for 3212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:53 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:53.502+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:54.218+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:54 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:44:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:44:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:54.508+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:55.228+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:55 compute-0 ceph-mon[75677]: pgmap v1897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:55 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:55.545+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:56.225+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:56 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:56.588+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:56 compute-0 podman[295874]: 2025-11-24 20:44:56.936855866 +0000 UTC m=+0.165757385 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:44:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:57.211+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:57 compute-0 ceph-mon[75677]: pgmap v1898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:57 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:57.568+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 24 slow ops, oldest one blocked for 3217 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:44:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:58.197+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:58 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:58 compute-0 ceph-mon[75677]: Health check update: 24 slow ops, oldest one blocked for 3217 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:44:58 compute-0 ceph-mon[75677]: pgmap v1899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:44:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:58.533+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:44:59.202+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:44:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:44:59 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:44:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:44:59.549+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:44:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:00.243+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:00 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:00 compute-0 ceph-mon[75677]: pgmap v1900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:00.559+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:01.237+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:01 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:01 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:01.600+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:02.284+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:02 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:02 compute-0 ceph-mon[75677]: pgmap v1901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:02.551+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 24 slow ops, oldest one blocked for 3222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:02 compute-0 sshd-session[295901]: Invalid user support from 78.128.112.74 port 45906
Nov 24 20:45:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:03.294+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:03 compute-0 sshd-session[295901]: Connection closed by invalid user support 78.128.112.74 port 45906 [preauth]
Nov 24 20:45:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:03.510+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:03 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:03 compute-0 ceph-mon[75677]: Health check update: 24 slow ops, oldest one blocked for 3222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:04.309+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:04.542+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:04 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:04 compute-0 ceph-mon[75677]: pgmap v1902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:05.354+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:05.524+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:05 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:05 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:06.387+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:06.511+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:06 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:06 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:45:06 compute-0 ceph-mon[75677]: pgmap v1903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:07.433+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:07.514+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:07 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 24 slow ops, oldest one blocked for 3227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:08.440+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:08.475+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:08 compute-0 ceph-mon[75677]: Health check update: 24 slow ops, oldest one blocked for 3227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:08 compute-0 ceph-mon[75677]: pgmap v1904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:09.401 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:45:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:09.402 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:45:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:09.402 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:45:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:09.484+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:09.508+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:09 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:10.439+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:10.473+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:10 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:10 compute-0 ceph-mon[75677]: pgmap v1905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:11.406+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:11.521+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:11 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:11 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:11 compute-0 podman[295903]: 2025-11-24 20:45:11.845117208 +0000 UTC m=+0.074678908 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
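
The podman line above is a periodic container health probe: per the config_data, /var/lib/openstack/healthchecks/ovn_metadata_agent is mounted at /openstack and the probe runs /openstack/healthcheck; health_status=healthy with health_failing_streak=0 means it has not failed recently. The same state can be read back programmatically; a sketch, assuming podman is on PATH and using the container name from the log:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    # Fields mirror the health_status / health_failing_streak pair that
    # podman writes into the journal line above.
    print(health["Status"], health["FailingStreak"])
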
Nov 24 20:45:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:12.377+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:12.550+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:12 compute-0 ceph-mon[75677]: pgmap v1906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
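
The SLOW_OPS count climbed from 24 to 34 while "oldest one blocked" advanced 3227 s -> 3232 s in lockstep with the wall clock, i.e. the oldest op is not completing, only aging. Dating the onset of the stall from the numbers in this line:

    from datetime import datetime, timedelta

    logged = datetime(2025, 11, 24, 20, 45, 12)
    print(logged - timedelta(seconds=3232))
    # 2025-11-24 19:51:20 -- roughly when the oldest op got stuck
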
Nov 24 20:45:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:13.234 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:45:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:13.235 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:45:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:13.251 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:45:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:13.253 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
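
The two SB_Global updates bump nb_cfg 19 -> 20 -> 21, and the agent acknowledges each by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row (the DbSetCommand transactions a few seconds below) after a randomized delay, so a fleet of agents does not hit the southbound DB at once. A minimal sketch of that delayed-ack pattern, assuming an ovsdbapp-style idl handle (names here are illustrative):

    import random
    import threading

    def ack_nb_cfg(sb_idl, chassis_uuid, nb_cfg, max_delay=10):
        # Each agent picks a different 0..max_delay pause, which is what
        # 'Delaying updating chassis table for N seconds' reports above.
        delay = random.randint(0, max_delay)

        def _write():
            sb_idl.db_set(
                "Chassis_Private", chassis_uuid,
                ("external_ids",
                 {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
            ).execute(check_error=True)

        threading.Timer(delay, _write).start()
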
Nov 24 20:45:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:13.368+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:13.558+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:13 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:13 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:14.376+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:14.522+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:14 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:14 compute-0 ceph-mon[75677]: pgmap v1907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:15.381+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:15.483+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:16.419+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:45:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2014424445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:45:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:45:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2014424445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
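
These audit entries show an OpenStack service (entity client.openstack, most likely Cinder's periodic capacity poll) issuing 'df' and 'osd pool get-quota' against the mon. The same mon_commands can be reproduced with the python-rados bindings; a sketch, where the conffile path and the client.openstack keyring being readable are assumptions about this host:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, _err = cluster.mon_command(json.dumps(cmd), b"")
            # Each call lands in the mon's audit channel exactly like the
            # dispatch lines above.
            print(cmd["prefix"], ret, out.decode()[:80])
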
Nov 24 20:45:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:16.483+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:16 compute-0 ceph-mon[75677]: pgmap v1908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:16 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2014424445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:45:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2014424445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:45:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:17.405+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:17.525+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3237 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
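
_set_new_cache_sizes is the mon's periodic cache autotuner splitting its memory budget between the incremental-osdmap, full-osdmap and RocksDB (kv) caches; the three allocations should account for nearly all of cache_size. Checking the numbers from the line above:

    sizes = {"inc_alloc": 343932928, "full_alloc": 348127232,
             "kv_alloc": 318767104}
    cache_size = 1020054731
    for name, val in sizes.items():
        print(f"{name:10s} {val / 2**20:7.1f} MiB")
    print(f"allocated {sum(sizes.values()) / cache_size:.1%} of cache_size")
    # -> ~99.1%, the remainder is headroom
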
Nov 24 20:45:17 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:18 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:18.256 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:45:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:18.412+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:18.556+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:18 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3237 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:18 compute-0 ceph-mon[75677]: pgmap v1909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:19.417+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:19.577+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:19 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:20 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:20.237 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
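
Note the ack regresses here: the row set to '21' at 20:45:18 is overwritten with '20' at 20:45:20, because the two delayed writes scheduled at 20:45:13 (5 s for nb_cfg 21, 7 s for nb_cfg 20) fired on independent timers, exactly as in the sketch above. Presumably harmless, since the next nb_cfg bump overwrites the value either way, but it means the sb-cfg counter in Chassis_Private cannot be read as strictly monotonic.
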
Nov 24 20:45:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:20.444+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:20.626+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:20 compute-0 ceph-mon[75677]: pgmap v1910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:20 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:20 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:21.461+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:21.642+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:22.474+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:22.631+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:22 compute-0 podman[295922]: 2025-11-24 20:45:22.850106219 +0000 UTC m=+0.081325377 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 20:45:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:22 compute-0 ceph-mon[75677]: pgmap v1911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:22 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:23.455+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:23.626+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:23 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:23 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:23 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:24.411+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
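
The mgr volumes module periodically sweeps its pool of cached CephFS client connections and drops any that have sat idle; the empty list above means nothing qualified. The sweep amounts to roughly the following (a simplification; names are illustrative, not the module's internals):

    import time

    def reap_idle(connections, max_idle_s=60.0):
        # 'scanning for idle connections..' / 'cleaning up connections: [...]'
        now = time.monotonic()
        idle = [c for c in connections if now - c.last_used > max_idle_s]
        print("cleaning up connections:", idle)
        for conn in idle:
            conn.disconnect()
        return idle
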
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:45:24
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 24 20:45:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
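
The balancer ran an upmap optimization pass over all eleven pools and prepared 0 of a possible 10 changes: the PG distribution is already within the max misplaced ratio of 0.05, so the plan is empty. Current balancer state can be queried directly; a sketch via the ceph CLI:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status["active"], status["mode"])   # e.g. True upmap
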
Nov 24 20:45:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:24.623+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:24 compute-0 ceph-mon[75677]: pgmap v1912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:24 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:25.459+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:25.583+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:26 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:26.424+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:26.540+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:27 compute-0 ceph-mon[75677]: pgmap v1913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:27 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:27.447+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:27.543+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:27 compute-0 podman[295943]: 2025-11-24 20:45:27.945181148 +0000 UTC m=+0.168463240 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 20:45:28 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:28 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:28.479+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:28.526+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:28 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:28.659 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:4a:7b 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-96ebd1ab-10be-43ae-b0f3-7d4229283d7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96ebd1ab-10be-43ae-b0f3-7d4229283d7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50bc73f546fa46c88ffe7a39112c7628', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cf5811d-b1c7-42d7-82e6-a59679f622dd, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25b655f6-4a18-4ff0-975e-acdaec1151c1) old=Port_Binding(mac=['fa:16:3e:0a:4a:7b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-96ebd1ab-10be-43ae-b0f3-7d4229283d7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96ebd1ab-10be-43ae-b0f3-7d4229283d7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50bc73f546fa46c88ffe7a39112c7628', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:45:28 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:28.660 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25b655f6-4a18-4ff0-975e-acdaec1151c1 in datapath 96ebd1ab-10be-43ae-b0f3-7d4229283d7e updated
Nov 24 20:45:28 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:28.662 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96ebd1ab-10be-43ae-b0f3-7d4229283d7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:45:28 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:28.663 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7e9bc6-3c69-489a-b16e-e695cf832581]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
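
The Port_Binding update adds 10.100.0.18 to the metadata port's addresses (revision 2 -> 3), prompting the agent to re-evaluate whether network 96ebd1ab-10be-43ae-b0f3-7d4229283d7e still needs a metadata namespace on this chassis; with no VIF ports bound here, it tears the namespace down. The decision reduces to roughly the following (a simplification of what _get_provision_params checks, not Neutron's actual code):

    def needs_metadata_namespace(datapath_ports, chassis_name):
        # Keep the per-network namespace only while at least one normal
        # VIF port on the datapath is bound to this chassis.
        return any(
            port.chassis and port.chassis[0].name == chassis_name
            for port in datapath_ports
            if port.type == ""  # '' marks a regular VIF in Port_Binding
        )
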
Nov 24 20:45:29 compute-0 ceph-mon[75677]: pgmap v1914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:29 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:29 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:29.431+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:29.483+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:30 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:30 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:30.465+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:30.470+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:31 compute-0 ceph-mon[75677]: pgmap v1915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:31 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:31 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:31.467+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:31.494+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:32 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:32 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:32.486+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:32.539+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3247 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:33 compute-0 ceph-mon[75677]: pgmap v1916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:33 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:33 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:33 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3247 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:33.500+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:33.588+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:34 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:34 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:34.549+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:34.614+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:45:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:45:35 compute-0 ceph-mon[75677]: pgmap v1917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:35 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:35 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:35.555+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:35.591+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:36 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:36 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:36.603+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:36.604+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:37 compute-0 ceph-mon[75677]: pgmap v1918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:37 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:37.568+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:37.625+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3257 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:38 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3257 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:38.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:38.590+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:39 compute-0 ceph-mon[75677]: pgmap v1919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:39 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:39.542+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:39.551+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:40 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:40 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:40.589+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:40.592+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:45:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:45:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:45:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:45:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:45:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:41.560+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:41.596+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:41 compute-0 ceph-mon[75677]: pgmap v1920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:41 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:42.527+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:42.635+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:42 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:42 compute-0 ceph-mon[75677]: pgmap v1921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:42 compute-0 podman[295970]: 2025-11-24 20:45:42.821872963 +0000 UTC m=+0.054885693 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:45:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:43.575+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:43.671+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:43 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:44.609+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:44.696+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:45 compute-0 ceph-mon[75677]: pgmap v1922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:45 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:45 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:45.581+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:45.712+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:46.626+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:46.674+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:47 compute-0 ceph-mon[75677]: pgmap v1923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:47 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:47 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:47.590+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:47.703+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3266 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:48 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3266 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:48.594+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:48.726+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:49 compute-0 ceph-mon[75677]: pgmap v1924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:49.590+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:49.715+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:50 compute-0 ceph-mon[75677]: pgmap v1925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:50.574+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:50.711+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:51.561+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:51.677+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:51 compute-0 sudo[295991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:51 compute-0 sudo[295991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:51 compute-0 sudo[295991]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:51 compute-0 sudo[296016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:45:51 compute-0 sudo[296016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:51 compute-0 sudo[296016]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:51 compute-0 sudo[296041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:51 compute-0 sudo[296041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:51 compute-0 sudo[296041]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:52 compute-0 sudo[296066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:45:52 compute-0 sudo[296066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:52.527+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:52 compute-0 sudo[296066]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:52.634+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:45:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c899134d-35e9-47aa-8c53-ec60dceb97b5 does not exist
Nov 24 20:45:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9aa7a90c-1151-405f-8e7b-6b03507cdc53 does not exist
Nov 24 20:45:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e1ea286d-ab3d-4251-bf1a-d295ac6522f8 does not exist
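The three progress-module warnings mean mgr/cephadm tried to mark events complete that the progress module no longer tracks (typically harmless after a mgr failover). A quick cross-check sketch against the module's current event list, via the standard "ceph progress json" query; the events/id/message fields follow that command's JSON layout:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "progress", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for ev in json.loads(out).get("events", []):
        print(ev.get("id"), ev.get("message"))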
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
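Every handle_command/audit pair above is the mgr/cephadm module driving the mon command interface: generate a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keys, persist its osd_remove_queue under a config-key, and list destroyed OSDs in the tree. A minimal sketch of issuing one of these through python3-rados, assuming a readable ceph.conf and keyring; Rados.mon_command is the real binding, taking a JSON command string and an input buffer and returning (ret, outbuf, outs):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(outbuf.decode())  # the minimal conf, as the mgr receives it
    finally:
        cluster.shutdown()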
Nov 24 20:45:52 compute-0 sudo[296122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:52 compute-0 sudo[296122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:52 compute-0 sudo[296122]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3271 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:52 compute-0 ceph-mon[75677]: pgmap v1926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:45:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:45:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:45:52 compute-0 sudo[296147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:45:52 compute-0 sudo[296147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:52 compute-0 sudo[296147]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:52 compute-0 sudo[296173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:52 compute-0 sudo[296173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:52 compute-0 sudo[296173]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:53 compute-0 podman[296171]: 2025-11-24 20:45:53.007221243 +0000 UTC m=+0.100002181 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
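The multipathd line is podman's periodic healthcheck event; its config_data shows the check is a bind-mounted /openstack/healthcheck script, and health_failing_streak is 0. A sketch of reading the same state back from podman with an inspect Go template; note the field path varies by podman version (older releases spell it .State.Healthcheck.Status):

    import subprocess

    def health_status(name):
        # .State.Health.Status on recent podman; .State.Healthcheck.Status
        # on older releases.
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    print(health_status("multipathd"))  # expected here: "healthy"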
Nov 24 20:45:53 compute-0 sudo[296217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:45:53 compute-0 sudo[296217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:53.507+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.513503324 +0000 UTC m=+0.061162303 container create a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamport, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 20:45:53 compute-0 systemd[1]: Started libpod-conmon-a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5.scope.
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.490144553 +0000 UTC m=+0.037803552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:45:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.627095221 +0000 UTC m=+0.174754250 container init a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.641219293 +0000 UTC m=+0.188878272 container start a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.645154119 +0000 UTC m=+0.192813178 container attach a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:45:53 compute-0 fervent_lamport[296300]: 167 167
Nov 24 20:45:53 compute-0 systemd[1]: libpod-a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5.scope: Deactivated successfully.
Nov 24 20:45:53 compute-0 conmon[296300]: conmon a28af5c782dccdd67654 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5.scope/container/memory.events
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.652940309 +0000 UTC m=+0.200599298 container died a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamport, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:45:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:53.670+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-15ead7b01f2a7c247e03d5bdd8029c06c515218d38d5bcc33b7d2a1b1d10376f-merged.mount: Deactivated successfully.
Nov 24 20:45:53 compute-0 podman[296284]: 2025-11-24 20:45:53.707676907 +0000 UTC m=+0.255335896 container remove a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:45:53 compute-0 systemd[1]: libpod-conmon-a28af5c782dccdd676545d618c5af68733280cb9896e814f794a9cd5d87289d5.scope: Deactivated successfully.
Nov 24 20:45:53 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3271 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:45:53 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:53 compute-0 podman[296324]: 2025-11-24 20:45:53.963422653 +0000 UTC m=+0.068187823 container create f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:45:54 compute-0 systemd[1]: Started libpod-conmon-f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd.scope.
Nov 24 20:45:54 compute-0 podman[296324]: 2025-11-24 20:45:53.93668309 +0000 UTC m=+0.041448300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:45:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7b34350e2e7f82e33995e317596e081e4b098c9fe8bb59ae71bf1d69e3c340/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7b34350e2e7f82e33995e317596e081e4b098c9fe8bb59ae71bf1d69e3c340/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7b34350e2e7f82e33995e317596e081e4b098c9fe8bb59ae71bf1d69e3c340/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7b34350e2e7f82e33995e317596e081e4b098c9fe8bb59ae71bf1d69e3c340/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7b34350e2e7f82e33995e317596e081e4b098c9fe8bb59ae71bf1d69e3c340/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
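The xfs notes are informational: the backing filesystem was created without the bigtime feature, so inode timestamps cap at 2038-01-19 (0x7fffffff), and the kernel prints this once per bind-mount remounted inside the container's mount namespace. A sketch for checking the flag on a given mount, assuming xfsprogs is installed; recent xfs_info prints bigtime=0/1 in its meta-data section:

    import subprocess

    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    # bigtime=1 means timestamps extend past 2038; bigtime=0 matches the
    # kernel notes above.
    print([line for line in info.splitlines() if "bigtime" in line])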
Nov 24 20:45:54 compute-0 podman[296324]: 2025-11-24 20:45:54.077883934 +0000 UTC m=+0.182649134 container init f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 20:45:54 compute-0 podman[296324]: 2025-11-24 20:45:54.092074057 +0000 UTC m=+0.196839217 container start f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:45:54 compute-0 podman[296324]: 2025-11-24 20:45:54.097008459 +0000 UTC m=+0.201773679 container attach f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:45:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:45:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:54.491+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:54.692+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:54 compute-0 ceph-mon[75677]: pgmap v1927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:54 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:55 compute-0 sad_meninsky[296340]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:45:55 compute-0 sad_meninsky[296340]: --> relative data size: 1.0
Nov 24 20:45:55 compute-0 sad_meninsky[296340]: --> All data devices are unavailable
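"0 physical, 3 LVM" followed by "All data devices are unavailable" means ceph-volume filtered out all three logical volumes: they already carry prepared BlueStore OSDs (the lvm list output below shows ceph.osd_id tags 0-2), so the batch exits without creating anything and cephadm falls back to inventory listing. A report-only dry run shows the same rejection without touching disks; --report and --format json are standard ceph-volume lvm batch flags, and the LV paths are the ones cephadm passed above:

    import json
    import subprocess

    lvs = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]
    out = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
         "--report", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2))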
Nov 24 20:45:55 compute-0 systemd[1]: libpod-f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd.scope: Deactivated successfully.
Nov 24 20:45:55 compute-0 conmon[296340]: conmon f2e63b35f16ea18abdfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd.scope/container/memory.events
Nov 24 20:45:55 compute-0 podman[296324]: 2025-11-24 20:45:55.111864284 +0000 UTC m=+1.216629444 container died f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f7b34350e2e7f82e33995e317596e081e4b098c9fe8bb59ae71bf1d69e3c340-merged.mount: Deactivated successfully.
Nov 24 20:45:55 compute-0 podman[296324]: 2025-11-24 20:45:55.170852797 +0000 UTC m=+1.275617927 container remove f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:45:55 compute-0 systemd[1]: libpod-conmon-f2e63b35f16ea18abdfc8576ad91c49ec0a456e0ab30051165197993f3273dcd.scope: Deactivated successfully.
Nov 24 20:45:55 compute-0 sudo[296217]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:55 compute-0 sudo[296383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:55 compute-0 sudo[296383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:55 compute-0 sudo[296383]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:55 compute-0 sudo[296408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:45:55 compute-0 sudo[296408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:55 compute-0 sudo[296408]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:55 compute-0 sudo[296433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:55 compute-0 sudo[296433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:55 compute-0 sudo[296433]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:55 compute-0 sudo[296458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:45:55 compute-0 sudo[296458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:55.504+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:55.695+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:55 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:55 compute-0 podman[296522]: 2025-11-24 20:45:55.910040677 +0000 UTC m=+0.044689568 container create 3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:45:55 compute-0 systemd[1]: Started libpod-conmon-3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067.scope.
Nov 24 20:45:55 compute-0 podman[296522]: 2025-11-24 20:45:55.890643763 +0000 UTC m=+0.025292684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:45:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:45:56 compute-0 podman[296522]: 2025-11-24 20:45:56.009642425 +0000 UTC m=+0.144291296 container init 3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:45:56 compute-0 podman[296522]: 2025-11-24 20:45:56.022119443 +0000 UTC m=+0.156768334 container start 3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:45:56 compute-0 podman[296522]: 2025-11-24 20:45:56.026158712 +0000 UTC m=+0.160807583 container attach 3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:45:56 compute-0 nifty_allen[296539]: 167 167
Nov 24 20:45:56 compute-0 systemd[1]: libpod-3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067.scope: Deactivated successfully.
Nov 24 20:45:56 compute-0 podman[296522]: 2025-11-24 20:45:56.029338607 +0000 UTC m=+0.163987488 container died 3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f148dc75fca650e3b249a53863c6139863dbed1224944819110aa985f777fa-merged.mount: Deactivated successfully.
Nov 24 20:45:56 compute-0 podman[296522]: 2025-11-24 20:45:56.08201221 +0000 UTC m=+0.216661091 container remove 3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:45:56 compute-0 systemd[1]: libpod-conmon-3d3def9a58f0c36786d03808f3677a67231ceee9c8baef707b8c46f1d6728067.scope: Deactivated successfully.
Nov 24 20:45:56 compute-0 podman[296564]: 2025-11-24 20:45:56.296642796 +0000 UTC m=+0.069868128 container create 824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:45:56 compute-0 systemd[1]: Started libpod-conmon-824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901.scope.
Nov 24 20:45:56 compute-0 podman[296564]: 2025-11-24 20:45:56.269005739 +0000 UTC m=+0.042231111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:45:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb8f368da8f28450ef73d92ebae58ff5f59a6530d03db8e1794b73a6cdebc00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb8f368da8f28450ef73d92ebae58ff5f59a6530d03db8e1794b73a6cdebc00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb8f368da8f28450ef73d92ebae58ff5f59a6530d03db8e1794b73a6cdebc00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb8f368da8f28450ef73d92ebae58ff5f59a6530d03db8e1794b73a6cdebc00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:56 compute-0 podman[296564]: 2025-11-24 20:45:56.407710584 +0000 UTC m=+0.180935926 container init 824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 20:45:56 compute-0 podman[296564]: 2025-11-24 20:45:56.418838385 +0000 UTC m=+0.192063697 container start 824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:45:56 compute-0 podman[296564]: 2025-11-24 20:45:56.421779584 +0000 UTC m=+0.195004896 container attach 824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sammet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:45:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:56.487+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:56.685+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:56 compute-0 ceph-mon[75677]: pgmap v1928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:56 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:57 compute-0 strange_sammet[296580]: {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:     "0": [
Nov 24 20:45:57 compute-0 strange_sammet[296580]:         {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "devices": [
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "/dev/loop3"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             ],
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_name": "ceph_lv0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_size": "21470642176",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "name": "ceph_lv0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "tags": {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cluster_name": "ceph",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.crush_device_class": "",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.encrypted": "0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osd_id": "0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.type": "block",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.vdo": "0"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             },
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "type": "block",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "vg_name": "ceph_vg0"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:         }
Nov 24 20:45:57 compute-0 strange_sammet[296580]:     ],
Nov 24 20:45:57 compute-0 strange_sammet[296580]:     "1": [
Nov 24 20:45:57 compute-0 strange_sammet[296580]:         {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "devices": [
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "/dev/loop4"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             ],
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_name": "ceph_lv1",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_size": "21470642176",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "name": "ceph_lv1",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "tags": {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cluster_name": "ceph",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.crush_device_class": "",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.encrypted": "0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osd_id": "1",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.type": "block",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.vdo": "0"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             },
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "type": "block",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "vg_name": "ceph_vg1"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:         }
Nov 24 20:45:57 compute-0 strange_sammet[296580]:     ],
Nov 24 20:45:57 compute-0 strange_sammet[296580]:     "2": [
Nov 24 20:45:57 compute-0 strange_sammet[296580]:         {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "devices": [
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "/dev/loop5"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             ],
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_name": "ceph_lv2",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_size": "21470642176",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "name": "ceph_lv2",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "tags": {
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.cluster_name": "ceph",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.crush_device_class": "",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.encrypted": "0",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osd_id": "2",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.type": "block",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:                 "ceph.vdo": "0"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             },
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "type": "block",
Nov 24 20:45:57 compute-0 strange_sammet[296580]:             "vg_name": "ceph_vg2"
Nov 24 20:45:57 compute-0 strange_sammet[296580]:         }
Nov 24 20:45:57 compute-0 strange_sammet[296580]:     ]
Nov 24 20:45:57 compute-0 strange_sammet[296580]: }
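The JSON just printed is ceph-volume's LVM inventory: a map of osd_id to LV records whose ceph.* tags carry the cluster fsid, osd_fsid, and drive-group affinity that cephadm reconciles against its service spec. A small parsing sketch over the same document:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    inventory = json.loads(out)
    for osd_id, records in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for rec in records:
            tags = rec.get("tags", {})
            print(osd_id, rec.get("lv_path"), tags.get("ceph.osd_fsid"))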
Nov 24 20:45:57 compute-0 systemd[1]: libpod-824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901.scope: Deactivated successfully.
Nov 24 20:45:57 compute-0 podman[296564]: 2025-11-24 20:45:57.258876608 +0000 UTC m=+1.032101940 container died 824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb8f368da8f28450ef73d92ebae58ff5f59a6530d03db8e1794b73a6cdebc00-merged.mount: Deactivated successfully.
Nov 24 20:45:57 compute-0 podman[296564]: 2025-11-24 20:45:57.329167166 +0000 UTC m=+1.102392458 container remove 824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:45:57 compute-0 systemd[1]: libpod-conmon-824ccc51714babcc550ecf76d4a3238415e00af1a4206b5e252f6836ea0a4901.scope: Deactivated successfully.
Nov 24 20:45:57 compute-0 sudo[296458]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:57.459+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:57 compute-0 sudo[296601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:57 compute-0 sudo[296601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:57 compute-0 sudo[296601]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:57 compute-0 sudo[296626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:45:57 compute-0 sudo[296626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:57 compute-0 sudo[296626]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:57 compute-0 sudo[296651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:57 compute-0 sudo[296651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:57 compute-0 sudo[296651]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:57.685+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:57 compute-0 sudo[296676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:45:57 compute-0 sudo[296676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:45:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:58 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:58.070 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:78:1b 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50bc73f546fa46c88ffe7a39112c7628', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e68706-31fc-4bb9-8a10-c81142be50c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ce1d1207-9151-43a4-9d95-2406a5675969) old=Port_Binding(mac=['fa:16:3e:f5:78:1b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50bc73f546fa46c88ffe7a39112c7628', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:45:58 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:58.072 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ce1d1207-9151-43a4-9d95-2406a5675969 in datapath 0697baac-9c28-4f10-869f-f931c37f5b3e updated
Nov 24 20:45:58 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:58.074 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0697baac-9c28-4f10-869f-f931c37f5b3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:45:58 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:45:58.076 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[291f16c7-ed1a-46a7-aadf-b861a9da87cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.201372517 +0000 UTC m=+0.068801468 container create deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:45:58 compute-0 systemd[1]: Started libpod-conmon-deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61.scope.
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.175193331 +0000 UTC m=+0.042622312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:45:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.292916219 +0000 UTC m=+0.160345220 container init deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.308205223 +0000 UTC m=+0.175634164 container start deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.313348481 +0000 UTC m=+0.180777432 container attach deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:45:58 compute-0 objective_goldberg[296759]: 167 167
Nov 24 20:45:58 compute-0 systemd[1]: libpod-deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61.scope: Deactivated successfully.
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.318447649 +0000 UTC m=+0.185876590 container died deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8fe12f5e8b1c9080993fbb80b359d07ce9bb98d1c0bb12b381b7727f7ca2ed-merged.mount: Deactivated successfully.
Nov 24 20:45:58 compute-0 podman[296742]: 2025-11-24 20:45:58.371397529 +0000 UTC m=+0.238826470 container remove deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:45:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:58 compute-0 podman[296756]: 2025-11-24 20:45:58.393953718 +0000 UTC m=+0.142282744 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:45:58 compute-0 systemd[1]: libpod-conmon-deb803067053ec66e6558ff2b5b79b45f88b5ad79ea53b8a940c02767a63cd61.scope: Deactivated successfully.
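
The deb803067053... lifecycle above (image pull, container create, init, start, attach, then died, remove, and the libpod/conmon scopes deactivating) is the one-shot container pattern cephadm uses for host probes. A hedged sketch of the same pattern via subprocess and `podman run --rm`; the image digest and the 895-second timeout are taken from the surrounding log, while the helper name and argument list are illustrative:

#!/usr/bin/env python3
# One-shot container pattern visible above: run a command in a throwaway
# container and capture its stdout. --rm removes the container on exit,
# matching the "container died" / "container remove" pairs in the log.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def run_oneshot(args, timeout=895):
    cmd = ["podman", "run", "--rm", IMAGE] + list(args)
    out = subprocess.run(cmd, capture_output=True, text=True,
                         timeout=timeout, check=True)
    return out.stdout

# e.g. run_oneshot(["ceph-volume", "raw", "list", "--format", "json"])
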
Nov 24 20:45:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:58.466+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:58 compute-0 podman[296809]: 2025-11-24 20:45:58.623081505 +0000 UTC m=+0.062137689 container create 92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:45:58 compute-0 systemd[1]: Started libpod-conmon-92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c.scope.
Nov 24 20:45:58 compute-0 podman[296809]: 2025-11-24 20:45:58.595680375 +0000 UTC m=+0.034736649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:45:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:58.689+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daf4788733236d6086d6533b313bfa4eb2a4331b7e247c2c77a35aaf70c38a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daf4788733236d6086d6533b313bfa4eb2a4331b7e247c2c77a35aaf70c38a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daf4788733236d6086d6533b313bfa4eb2a4331b7e247c2c77a35aaf70c38a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daf4788733236d6086d6533b313bfa4eb2a4331b7e247c2c77a35aaf70c38a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:45:58 compute-0 podman[296809]: 2025-11-24 20:45:58.725157891 +0000 UTC m=+0.164214115 container init 92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:45:58 compute-0 podman[296809]: 2025-11-24 20:45:58.738304826 +0000 UTC m=+0.177361030 container start 92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:45:58 compute-0 podman[296809]: 2025-11-24 20:45:58.742623433 +0000 UTC m=+0.181679717 container attach 92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_liskov, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 20:45:58 compute-0 ceph-mon[75677]: pgmap v1929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:45:58 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:45:59.464+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:45:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:45:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:45:59.664+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:45:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:45:59 compute-0 eager_liskov[296825]: {
Nov 24 20:45:59 compute-0 eager_liskov[296825]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "osd_id": 2,
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "type": "bluestore"
Nov 24 20:45:59 compute-0 eager_liskov[296825]:     },
Nov 24 20:45:59 compute-0 eager_liskov[296825]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "osd_id": 1,
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "type": "bluestore"
Nov 24 20:45:59 compute-0 eager_liskov[296825]:     },
Nov 24 20:45:59 compute-0 eager_liskov[296825]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "osd_id": 0,
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:45:59 compute-0 eager_liskov[296825]:         "type": "bluestore"
Nov 24 20:45:59 compute-0 eager_liskov[296825]:     }
Nov 24 20:45:59 compute-0 eager_liskov[296825]: }
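
The eager_liskov output above is a complete `ceph-volume raw list --format json` report: a map keyed by OSD UUID whose entries carry ceph_fsid, device, osd_id and type, exactly as printed. A minimal sketch that re-indexes it by OSD id (helper name illustrative):

#!/usr/bin/env python3
# Index a `ceph-volume raw list --format json` report, like the one
# printed above, by integer OSD id.
import json
import sys

def by_osd_id(report):
    return {entry["osd_id"]: entry for entry in report.values()}

if __name__ == "__main__":
    for osd_id, entry in sorted(by_osd_id(json.load(sys.stdin)).items()):
        print(f"osd.{osd_id}: {entry['device']} ({entry['type']}) "
              f"fsid={entry['ceph_fsid']}")
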
Nov 24 20:45:59 compute-0 systemd[1]: libpod-92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c.scope: Deactivated successfully.
Nov 24 20:45:59 compute-0 podman[296809]: 2025-11-24 20:45:59.748244627 +0000 UTC m=+1.187300801 container died 92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:45:59 compute-0 systemd[1]: libpod-92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c.scope: Consumed 1.017s CPU time.
Nov 24 20:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-63daf4788733236d6086d6533b313bfa4eb2a4331b7e247c2c77a35aaf70c38a-merged.mount: Deactivated successfully.
Nov 24 20:45:59 compute-0 podman[296809]: 2025-11-24 20:45:59.827304142 +0000 UTC m=+1.266360326 container remove 92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_liskov, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:45:59 compute-0 systemd[1]: libpod-conmon-92e492424c588159891bc9e7ac6ca2a1395c1379d75dcde6029a71fc24422d4c.scope: Deactivated successfully.
Nov 24 20:45:59 compute-0 sudo[296676]: pam_unix(sudo:session): session closed for user root
Nov 24 20:45:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:45:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:45:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:45:59 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:45:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 93fdf85b-d4c0-4804-860d-83fd71b4be43 does not exist
Nov 24 20:45:59 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1eea71e9-06d3-40aa-ada2-adeda6c56a6c does not exist
Nov 24 20:45:59 compute-0 sudo[296872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:45:59 compute-0 sudo[296872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:45:59 compute-0 sudo[296872]: pam_unix(sudo:session): session closed for user root
Nov 24 20:46:00 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:46:00 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:46:00 compute-0 sudo[296897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:46:00 compute-0 sudo[296897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:46:00 compute-0 sudo[296897]: pam_unix(sudo:session): session closed for user root
Nov 24 20:46:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:00.480+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:00.704+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:01 compute-0 ceph-mon[75677]: pgmap v1930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:01 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:01 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:01.516+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:01.706+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:02 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:02.533+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:02.696+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3276 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
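
The SLOW_OPS line above is the mon's aggregated health check for the per-OSD slow-op reports repeating throughout this window. A sketch of surfacing it from the CLI, assuming the usual `ceph health detail --format json` layout with a top-level "checks" map keyed by check name; treat the exact field names as an assumption:

#!/usr/bin/env python3
# Surface the SLOW_OPS health check (seen above as a cluster [WRN] line)
# via the ceph CLI. Assumed JSON layout: {"checks": {"SLOW_OPS":
# {"summary": {"message": ...}, ...}, ...}}.
import json
import subprocess

def slow_ops_message():
    raw = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    check = json.loads(raw).get("checks", {}).get("SLOW_OPS")
    return check["summary"]["message"] if check else None

if __name__ == "__main__":
    print(slow_ops_message() or "no SLOW_OPS check active")
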
Nov 24 20:46:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.833161) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017162833215, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1291, "num_deletes": 258, "total_data_size": 1444570, "memory_usage": 1476296, "flush_reason": "Manual Compaction"}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017162845166, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1410983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54118, "largest_seqno": 55408, "table_properties": {"data_size": 1405181, "index_size": 2814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16055, "raw_average_key_size": 21, "raw_value_size": 1391958, "raw_average_value_size": 1843, "num_data_blocks": 123, "num_entries": 755, "num_filter_entries": 755, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017082, "oldest_key_time": 1764017082, "file_creation_time": 1764017162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12078 microseconds, and 7508 cpu microseconds.
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.845235) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1410983 bytes OK
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.845264) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.846530) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.846552) EVENT_LOG_v1 {"time_micros": 1764017162846545, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.846578) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1438275, prev total WAL file size 1438275, number of live WAL files 2.
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.847581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353134' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1377KB)], [113(10MB)]
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017162847721, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 12048892, "oldest_snapshot_seqno": -1}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 12870 keys, 11847023 bytes, temperature: kUnknown
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017162925990, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 11847023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11773991, "index_size": 39768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32197, "raw_key_size": 351352, "raw_average_key_size": 27, "raw_value_size": 11551041, "raw_average_value_size": 897, "num_data_blocks": 1485, "num_entries": 12870, "num_filter_entries": 12870, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.926357) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11847023 bytes
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.927688) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.7 rd, 151.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(16.9) write-amplify(8.4) OK, records in: 13398, records dropped: 528 output_compression: NoCompression
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.927725) EVENT_LOG_v1 {"time_micros": 1764017162927709, "job": 68, "event": "compaction_finished", "compaction_time_micros": 78381, "compaction_time_cpu_micros": 32880, "output_level": 6, "num_output_files": 1, "total_output_size": 11847023, "num_input_records": 13398, "num_output_records": 12870, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017162928315, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017162930451, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.847412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.931348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.931363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.931367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.931370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:46:02 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:46:02.931385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
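
The rocksdb block above interleaves human-readable messages with machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished, table_file_deletion). A small sketch that extracts and summarizes those payloads from journal text on stdin; the marker string is exactly as logged, the summary fields are illustrative:

#!/usr/bin/env python3
# Extract EVENT_LOG_v1 JSON payloads from ceph-mon rocksdb journal lines
# (as in the block above) and summarize flush/compaction activity.
import json
import sys

MARKER = "EVENT_LOG_v1 "

def events(lines):
    for line in lines:
        _, sep, payload = line.partition(MARKER)
        if sep:  # only lines that actually carry a JSON payload
            yield json.loads(payload.strip())

if __name__ == "__main__":
    for ev in events(sys.stdin):
        if ev.get("event") in ("flush_finished", "compaction_finished"):
            print(ev.get("job"), ev.get("event"),
                  ev.get("total_output_size") or ev.get("lsm_state"))
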
Nov 24 20:46:03 compute-0 ceph-mon[75677]: pgmap v1931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:03 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:03 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3276 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:03.508+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:03.650+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:04 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:04.472+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:04.622+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:05 compute-0 ceph-mon[75677]: pgmap v1932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:05 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:05 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:05.441+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:05.619+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:06 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:06 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:06.409+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:06.615+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:07 compute-0 ceph-mon[75677]: pgmap v1933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:07 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:07 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:07.081 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:43:08 10.100.0.2 2001:db8::f816:3eff:fecd:4308'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fecd:4308/64', 'neutron:device_id': 'ovnmeta-6cf17c80-4fe3-4993-9598-f580837ae4ac', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cf17c80-4fe3-4993-9598-f580837ae4ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36f2c6c0-678d-4aca-a6d3-66b608c45840, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9ab03299-1831-4a21-af47-ce83fcc7ae50) old=Port_Binding(mac=['fa:16:3e:cd:43:08 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6cf17c80-4fe3-4993-9598-f580837ae4ac', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6cf17c80-4fe3-4993-9598-f580837ae4ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:46:07 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:07.083 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9ab03299-1831-4a21-af47-ce83fcc7ae50 in datapath 6cf17c80-4fe3-4993-9598-f580837ae4ac updated
Nov 24 20:46:07 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:07.084 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6cf17c80-4fe3-4993-9598-f580837ae4ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:46:07 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:07.085 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac38fd7-9cd0-4298-9ea7-71f1c58a99cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:46:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:07.425+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:07.662+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3287 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:08 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3287 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:08.393+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:08.630+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:08.890 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:78:1b 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50bc73f546fa46c88ffe7a39112c7628', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e68706-31fc-4bb9-8a10-c81142be50c7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ce1d1207-9151-43a4-9d95-2406a5675969) old=Port_Binding(mac=['fa:16:3e:f5:78:1b 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0697baac-9c28-4f10-869f-f931c37f5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50bc73f546fa46c88ffe7a39112c7628', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:46:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:08.892 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ce1d1207-9151-43a4-9d95-2406a5675969 in datapath 0697baac-9c28-4f10-869f-f931c37f5b3e updated
Nov 24 20:46:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:08.895 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0697baac-9c28-4f10-869f-f931c37f5b3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:46:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:08.896 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[ccea294d-503a-4451-bb8c-3d582b8204f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:46:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:09 compute-0 ceph-mon[75677]: pgmap v1934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:09 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:09.402 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:46:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:09.403 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:46:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:09.403 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:46:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:09.443+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:09.659+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:10 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:10.411+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:10.686+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:11 compute-0 ceph-mon[75677]: pgmap v1935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:11 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:11 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:11.456+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:11.683+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:12.411+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:12.658+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3291 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:13 compute-0 ceph-mon[75677]: pgmap v1936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:13 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:13.427+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:13.565 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:46:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:13.566 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:46:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:13.627+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:13 compute-0 podman[296922]: 2025-11-24 20:46:13.855538839 +0000 UTC m=+0.089397294 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 20:46:14 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3291 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:14 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:14.417+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:14.582+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:15 compute-0 ceph-mon[75677]: pgmap v1937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:15.418+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:15.601+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:16 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:16.403+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:46:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2314451869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:46:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:46:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2314451869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:46:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:16.598+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:17 compute-0 ceph-mon[75677]: pgmap v1938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2314451869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:46:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2314451869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:46:17 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:17.393+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:17.628+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:18.381+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:18.672+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:19 compute-0 ceph-mon[75677]: pgmap v1939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:19 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:19.421+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:19.677+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:20 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:20 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:20.383+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:20 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:46:20.568 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:46:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:20.717+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:21 compute-0 ceph-mon[75677]: pgmap v1940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:21.395+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:21.744+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:22.418+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:22.698+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3297 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:23 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:23 compute-0 ceph-mon[75677]: pgmap v1941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:23 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:23 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:23 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3297 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:23.420+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:23.739+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:23 compute-0 podman[296941]: 2025-11-24 20:46:23.850766693 +0000 UTC m=+0.072110857 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 24 20:46:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:24.430+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:46:24
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', '.mgr', '.rgw.root', 'images', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 24 20:46:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:46:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:24.754+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:25 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:25 compute-0 ceph-mon[75677]: pgmap v1942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:25.390+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:25.774+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:26 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:26.401+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:26.734+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:27 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:27 compute-0 ceph-mon[75677]: pgmap v1943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:27.394+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:27.720+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3306 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:28 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:28 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:28 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:28 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3306 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:28.352+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:28.707+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:28 compute-0 podman[296961]: 2025-11-24 20:46:28.899911625 +0000 UTC m=+0.121748499 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 24 20:46:29 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:29 compute-0 ceph-mon[75677]: pgmap v1944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:29 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:29.337+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:29.678+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:30 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:30 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:30.304+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:30.697+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:31 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:31 compute-0 ceph-mon[75677]: pgmap v1945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:31 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:31.334+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:31.711+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:32 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:32 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:32.348+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:32.686+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:33 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3311 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:33 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:33 compute-0 ceph-mon[75677]: pgmap v1946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:33 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:33.363+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:33.688+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:34 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3311 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:34 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:34 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:34.330+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:34.643+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:46:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:46:35 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:35 compute-0 ceph-mon[75677]: pgmap v1947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:35 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:35.341+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:35.620+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:36 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:36 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:36.352+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:36.572+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:37 compute-0 ceph-mon[75677]: pgmap v1948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:37 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:37.342+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:37.525+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:38.356+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:38.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:39.373+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:39 compute-0 ceph-mon[75677]: pgmap v1949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:39 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:39.497+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:40.348+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:40.458+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:46:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:46:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:46:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:46:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:46:40 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:40 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:40 compute-0 ceph-mon[75677]: pgmap v1950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:41.303+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:41.443+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:46:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 55K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1907 writes, 9905 keys, 1907 commit groups, 1.0 writes per commit group, ingest: 10.51 MB, 0.02 MB/s
                                           Interval WAL: 1906 writes, 1906 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     62.5      0.90              0.25        34    0.026       0      0       0.0       0.0
                                             L6      1/0   11.30 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   5.0    105.0     90.5      3.08              1.16        33    0.093    298K    18K       0.0       0.0
                                            Sum      1/0   11.30 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   6.0     81.3     84.2      3.97              1.40        67    0.059    298K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4    103.0    106.6      0.86              0.36        16    0.054     99K   4161       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    105.0     90.5      3.08              1.16        33    0.093    298K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     62.6      0.89              0.25        33    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.055, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.33 GB write, 0.09 MB/s write, 0.32 GB read, 0.09 MB/s read, 4.0 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 304.00 MB usage: 32.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000529 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2185,30.61 MB,10.0681%) FilterBlock(68,830.36 KB,0.266743%) IndexBlock(68,1.10 MB,0.362607%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 20:46:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:41 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:42.290+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:42.395+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:43 compute-0 ceph-mon[75677]: pgmap v1951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:43.338+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:43.437+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:44 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:44 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:44.350+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:44.469+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:44 compute-0 podman[296987]: 2025-11-24 20:46:44.881661721 +0000 UTC m=+0.102910170 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 20:46:45 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:45 compute-0 ceph-mon[75677]: pgmap v1952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:45 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:45.324+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:45.474+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:46.289+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:46.468+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:47.250+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:47 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:47 compute-0 ceph-mon[75677]: pgmap v1953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:47 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:47.485+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3327 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:48.259+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:48 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3327 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:48.509+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:49.258+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:49.478+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:49 compute-0 ceph-mon[75677]: pgmap v1954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:49 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:50.224+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:50.505+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:50 compute-0 ceph-mon[75677]: pgmap v1955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:51.224+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:51.547+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:52.236+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:52 compute-0 ceph-mon[75677]: pgmap v1956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:52.515+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:53.202+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:53.487+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:53 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:53 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:54.191+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:46:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:46:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:54.501+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:54 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:54 compute-0 ceph-mon[75677]: pgmap v1957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:54 compute-0 podman[297007]: 2025-11-24 20:46:54.844955734 +0000 UTC m=+0.074377379 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:46:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:55.170+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:55.483+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:55 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:56.194+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:56.512+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:56 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:56 compute-0 ceph-mon[75677]: pgmap v1958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:57.149+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:57.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3337 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:46:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:58.124+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:58.494+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:58 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3337 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:46:58 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:58 compute-0 ceph-mon[75677]: pgmap v1959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:46:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:46:59.119+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:46:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:46:59.541+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:46:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:46:59 compute-0 podman[297029]: 2025-11-24 20:46:59.872661373 +0000 UTC m=+0.096731333 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 20:46:59 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:46:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:00.139+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:00 compute-0 sudo[297055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:00 compute-0 sudo[297055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:00 compute-0 sudo[297055]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:00 compute-0 sudo[297080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:47:00 compute-0 sudo[297080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:00 compute-0 sudo[297080]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:00 compute-0 sudo[297105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:00 compute-0 sudo[297105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:00 compute-0 sudo[297105]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:00 compute-0 sudo[297130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:47:00 compute-0 sudo[297130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:00.494+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:00 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:00 compute-0 ceph-mon[75677]: pgmap v1960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:01 compute-0 sudo[297130]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:47:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d0a7b597-d6e4-4416-8d18-6097dfa62f7c does not exist
Nov 24 20:47:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 420c009e-03ee-4609-90fe-06fb2854efa3 does not exist
Nov 24 20:47:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 874f1d46-df64-4903-b6b9-ed1c6f08c80c does not exist
Nov 24 20:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:47:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:47:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:47:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:01.182+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:01 compute-0 sudo[297186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:01 compute-0 sudo[297186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:01 compute-0 sudo[297186]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:01 compute-0 sudo[297211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:47:01 compute-0 sudo[297211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:01 compute-0 sudo[297211]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:01 compute-0 sudo[297236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:01 compute-0 sudo[297236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:01 compute-0 sudo[297236]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:01 compute-0 sudo[297261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:47:01 compute-0 sudo[297261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:01.504+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:01 compute-0 podman[297325]: 2025-11-24 20:47:01.726963833 +0000 UTC m=+0.029012384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:47:01 compute-0 podman[297325]: 2025-11-24 20:47:01.871874687 +0000 UTC m=+0.173923178 container create de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:47:01 compute-0 systemd[1]: Started libpod-conmon-de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7.scope.
Nov 24 20:47:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:47:02 compute-0 podman[297325]: 2025-11-24 20:47:02.047013145 +0000 UTC m=+0.349061706 container init de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:47:02 compute-0 podman[297325]: 2025-11-24 20:47:02.055828183 +0000 UTC m=+0.357876694 container start de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:47:02 compute-0 heuristic_colden[297342]: 167 167
Nov 24 20:47:02 compute-0 systemd[1]: libpod-de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7.scope: Deactivated successfully.
Nov 24 20:47:02 compute-0 podman[297325]: 2025-11-24 20:47:02.127716025 +0000 UTC m=+0.429764516 container attach de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:47:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:47:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:47:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:47:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:47:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:47:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:47:02 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:02 compute-0 podman[297325]: 2025-11-24 20:47:02.12977782 +0000 UTC m=+0.431826331 container died de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:47:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:02.133+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dd81c75493fab493089e4fd47d9b5c9fa1cbeafcd02fbe46a5a5b5ed278b2fb-merged.mount: Deactivated successfully.
Nov 24 20:47:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:02.518+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:02 compute-0 podman[297325]: 2025-11-24 20:47:02.80897785 +0000 UTC m=+1.111026361 container remove de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_colden, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:47:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:02 compute-0 systemd[1]: libpod-conmon-de4fb2009e57576460dd4235919606890572fe2630251b6ebc9c847b4290eda7.scope: Deactivated successfully.
Nov 24 20:47:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:03.095+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:03 compute-0 podman[297366]: 2025-11-24 20:47:03.031763106 +0000 UTC m=+0.044822631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:47:03 compute-0 podman[297366]: 2025-11-24 20:47:03.158054886 +0000 UTC m=+0.171114381 container create 9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:47:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:03 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:03 compute-0 ceph-mon[75677]: pgmap v1961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:03 compute-0 systemd[1]: Started libpod-conmon-9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6.scope.
Nov 24 20:47:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a8ecf66952c133dc259e8e927ee7d71aea37b5c73c87309b9d463de2000797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a8ecf66952c133dc259e8e927ee7d71aea37b5c73c87309b9d463de2000797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a8ecf66952c133dc259e8e927ee7d71aea37b5c73c87309b9d463de2000797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a8ecf66952c133dc259e8e927ee7d71aea37b5c73c87309b9d463de2000797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a8ecf66952c133dc259e8e927ee7d71aea37b5c73c87309b9d463de2000797/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:03 compute-0 podman[297366]: 2025-11-24 20:47:03.511286925 +0000 UTC m=+0.524346440 container init 9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:47:03 compute-0 podman[297366]: 2025-11-24 20:47:03.518118029 +0000 UTC m=+0.531177564 container start 9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:47:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:03.551+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:03 compute-0 podman[297366]: 2025-11-24 20:47:03.603944047 +0000 UTC m=+0.617003582 container attach 9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:47:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:04.098+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:04 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:04 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:04.578+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:04 compute-0 silly_cohen[297383]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:47:04 compute-0 silly_cohen[297383]: --> relative data size: 1.0
Nov 24 20:47:04 compute-0 silly_cohen[297383]: --> All data devices are unavailable
Nov 24 20:47:04 compute-0 systemd[1]: libpod-9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6.scope: Deactivated successfully.
Nov 24 20:47:04 compute-0 systemd[1]: libpod-9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6.scope: Consumed 1.340s CPU time.
Nov 24 20:47:04 compute-0 podman[297412]: 2025-11-24 20:47:04.955921774 +0000 UTC m=+0.031873403 container died 9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cohen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:47:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:05.056+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a8ecf66952c133dc259e8e927ee7d71aea37b5c73c87309b9d463de2000797-merged.mount: Deactivated successfully.
Nov 24 20:47:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:05.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:05 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:05 compute-0 ceph-mon[75677]: pgmap v1962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:05 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:05 compute-0 podman[297412]: 2025-11-24 20:47:05.791308481 +0000 UTC m=+0.867260090 container remove 9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:47:05 compute-0 systemd[1]: libpod-conmon-9b46a476ada9e4e429f591cdb331951770ac632eda5ee4f61d30bc64908bacf6.scope: Deactivated successfully.
Nov 24 20:47:05 compute-0 sudo[297261]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:05 compute-0 sudo[297427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:05 compute-0 sudo[297427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:05 compute-0 sudo[297427]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:06.024+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:06 compute-0 sudo[297452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:47:06 compute-0 sudo[297452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:06 compute-0 sudo[297452]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:06 compute-0 sudo[297477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:06 compute-0 sudo[297477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:06 compute-0 sudo[297477]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:06 compute-0 sudo[297502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:47:06 compute-0 sudo[297502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:06.521+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:06 compute-0 podman[297567]: 2025-11-24 20:47:06.677275344 +0000 UTC m=+0.072627672 container create bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:47:06 compute-0 podman[297567]: 2025-11-24 20:47:06.643100332 +0000 UTC m=+0.038452680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:47:06 compute-0 systemd[1]: Started libpod-conmon-bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258.scope.
Nov 24 20:47:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:47:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:07 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:07 compute-0 ceph-mon[75677]: pgmap v1963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:07 compute-0 podman[297567]: 2025-11-24 20:47:07.040826641 +0000 UTC m=+0.436178979 container init bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:47:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:07.044+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:07 compute-0 podman[297567]: 2025-11-24 20:47:07.051848739 +0000 UTC m=+0.447201047 container start bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 20:47:07 compute-0 youthful_hypatia[297584]: 167 167
Nov 24 20:47:07 compute-0 systemd[1]: libpod-bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258.scope: Deactivated successfully.
Nov 24 20:47:07 compute-0 podman[297567]: 2025-11-24 20:47:07.141671894 +0000 UTC m=+0.537024232 container attach bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:47:07 compute-0 podman[297567]: 2025-11-24 20:47:07.142347282 +0000 UTC m=+0.537699610 container died bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-91178d97c19d1c4d3a1d3299018ec5bd816a996b5d563b7237d732956a554eeb-merged.mount: Deactivated successfully.
Nov 24 20:47:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:07.538+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:07 compute-0 podman[297567]: 2025-11-24 20:47:07.752367365 +0000 UTC m=+1.147719693 container remove bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:47:07 compute-0 systemd[1]: libpod-conmon-bf2d706db72180e9a55359770cefe2fa2871f8faaa263bcdaa2b8c5203c52258.scope: Deactivated successfully.
Nov 24 20:47:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:08.011+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:08 compute-0 podman[297610]: 2025-11-24 20:47:07.999351934 +0000 UTC m=+0.041334788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:47:08 compute-0 podman[297610]: 2025-11-24 20:47:08.10625175 +0000 UTC m=+0.148234544 container create fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:47:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:08 compute-0 systemd[1]: Started libpod-conmon-fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206.scope.
Nov 24 20:47:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21404e2a759250b549f4fc5f864724070893fc1fd93681e17fa3c445ddde47c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21404e2a759250b549f4fc5f864724070893fc1fd93681e17fa3c445ddde47c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21404e2a759250b549f4fc5f864724070893fc1fd93681e17fa3c445ddde47c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e21404e2a759250b549f4fc5f864724070893fc1fd93681e17fa3c445ddde47c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:08 compute-0 podman[297610]: 2025-11-24 20:47:08.468862732 +0000 UTC m=+0.510845516 container init fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:47:08 compute-0 podman[297610]: 2025-11-24 20:47:08.477251729 +0000 UTC m=+0.519234483 container start fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:47:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:08.532+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:08 compute-0 podman[297610]: 2025-11-24 20:47:08.546477157 +0000 UTC m=+0.588459931 container attach fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:47:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:08.999+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]: {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:     "0": [
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:         {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "devices": [
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "/dev/loop3"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             ],
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_name": "ceph_lv0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_size": "21470642176",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "name": "ceph_lv0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "tags": {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cluster_name": "ceph",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.crush_device_class": "",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.encrypted": "0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osd_id": "0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.type": "block",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.vdo": "0"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             },
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "type": "block",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "vg_name": "ceph_vg0"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:         }
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:     ],
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:     "1": [
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:         {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "devices": [
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "/dev/loop4"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             ],
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_name": "ceph_lv1",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_size": "21470642176",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "name": "ceph_lv1",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "tags": {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cluster_name": "ceph",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.crush_device_class": "",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.encrypted": "0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osd_id": "1",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.type": "block",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.vdo": "0"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             },
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "type": "block",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "vg_name": "ceph_vg1"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:         }
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:     ],
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:     "2": [
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:         {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "devices": [
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "/dev/loop5"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             ],
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_name": "ceph_lv2",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_size": "21470642176",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "name": "ceph_lv2",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "tags": {
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.cluster_name": "ceph",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.crush_device_class": "",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.encrypted": "0",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osd_id": "2",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.type": "block",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:                 "ceph.vdo": "0"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             },
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "type": "block",
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:             "vg_name": "ceph_vg2"
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:         }
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]:     ]
Nov 24 20:47:09 compute-0 sweet_mestorf[297627]: }
Nov 24 20:47:09 compute-0 systemd[1]: libpod-fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206.scope: Deactivated successfully.
Nov 24 20:47:09 compute-0 podman[297610]: 2025-11-24 20:47:09.274965158 +0000 UTC m=+1.316947962 container died fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:47:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:09 compute-0 ceph-mon[75677]: pgmap v1964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:09 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:09.403 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:47:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:09.404 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:47:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:09.404 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:47:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:09.563+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e21404e2a759250b549f4fc5f864724070893fc1fd93681e17fa3c445ddde47c-merged.mount: Deactivated successfully.
Nov 24 20:47:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:09.992+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:10 compute-0 podman[297610]: 2025-11-24 20:47:10.303950054 +0000 UTC m=+2.345932808 container remove fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:47:10 compute-0 systemd[1]: libpod-conmon-fd74b5b232f0d9944efc51eb6cb23d5835a0ef21f791fa84de12f304de314206.scope: Deactivated successfully.
Nov 24 20:47:10 compute-0 sudo[297502]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:10 compute-0 sudo[297650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:10 compute-0 sudo[297650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:10 compute-0 sudo[297650]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:10.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:10 compute-0 sudo[297675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:47:10 compute-0 sudo[297675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:10 compute-0 sudo[297675]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:10 compute-0 sudo[297700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:10 compute-0 sudo[297700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:10 compute-0 sudo[297700]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:10 compute-0 sudo[297725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:47:10 compute-0 sudo[297725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:10 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:11.009+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:11 compute-0 podman[297788]: 2025-11-24 20:47:11.263632817 +0000 UTC m=+0.039665822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:47:11 compute-0 podman[297788]: 2025-11-24 20:47:11.377380689 +0000 UTC m=+0.153413644 container create bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:47:11 compute-0 systemd[1]: Started libpod-conmon-bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584.scope.
Nov 24 20:47:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:47:11 compute-0 podman[297788]: 2025-11-24 20:47:11.550258467 +0000 UTC m=+0.326291472 container init bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:11.564+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:11 compute-0 podman[297788]: 2025-11-24 20:47:11.565506569 +0000 UTC m=+0.341539524 container start bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:47:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:11 compute-0 gallant_lederberg[297805]: 167 167
Nov 24 20:47:11 compute-0 systemd[1]: libpod-bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584.scope: Deactivated successfully.
Nov 24 20:47:11 compute-0 podman[297788]: 2025-11-24 20:47:11.707989046 +0000 UTC m=+0.484022061 container attach bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 20:47:11 compute-0 podman[297788]: 2025-11-24 20:47:11.70887251 +0000 UTC m=+0.484905465 container died bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:47:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:11.970+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1da13c4691f4c813e9d5952027cc07e629d393ef3613f82c900ae1f1092351d-merged.mount: Deactivated successfully.
Nov 24 20:47:12 compute-0 ceph-mon[75677]: pgmap v1965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:12 compute-0 podman[297788]: 2025-11-24 20:47:12.125731536 +0000 UTC m=+0.901764491 container remove bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:47:12 compute-0 systemd[1]: libpod-conmon-bbba730ce3956035e78aa2aa9f672db88f1fd7fb5c9750519987d3b777816584.scope: Deactivated successfully.
Nov 24 20:47:12 compute-0 podman[297829]: 2025-11-24 20:47:12.350741532 +0000 UTC m=+0.030609128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:47:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:12 compute-0 podman[297829]: 2025-11-24 20:47:12.527230887 +0000 UTC m=+0.207098483 container create 3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:47:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:12.577+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:12 compute-0 systemd[1]: Started libpod-conmon-3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1.scope.
Nov 24 20:47:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a225252c386514a952494111d3864d293a63ec3822a3a108f5161c56eeba375/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a225252c386514a952494111d3864d293a63ec3822a3a108f5161c56eeba375/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a225252c386514a952494111d3864d293a63ec3822a3a108f5161c56eeba375/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a225252c386514a952494111d3864d293a63ec3822a3a108f5161c56eeba375/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:47:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3347 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:12 compute-0 podman[297829]: 2025-11-24 20:47:12.876501839 +0000 UTC m=+0.556369485 container init 3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:47:12 compute-0 podman[297829]: 2025-11-24 20:47:12.890764884 +0000 UTC m=+0.570632480 container start 3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclean, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:47:12 compute-0 podman[297829]: 2025-11-24 20:47:12.943242701 +0000 UTC m=+0.623110297 container attach 3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:47:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:12.968+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:13 compute-0 ceph-mon[75677]: pgmap v1966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:13 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:13 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3347 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:13.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:13.961+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]: {
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "osd_id": 2,
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "type": "bluestore"
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:     },
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "osd_id": 1,
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "type": "bluestore"
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:     },
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "osd_id": 0,
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:         "type": "bluestore"
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]:     }
Nov 24 20:47:14 compute-0 inspiring_mclean[297846]: }
Nov 24 20:47:14 compute-0 systemd[1]: libpod-3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1.scope: Deactivated successfully.
Nov 24 20:47:14 compute-0 systemd[1]: libpod-3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1.scope: Consumed 1.184s CPU time.
Nov 24 20:47:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:14 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:14 compute-0 podman[297879]: 2025-11-24 20:47:14.127148799 +0000 UTC m=+0.039034325 container died 3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclean, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a225252c386514a952494111d3864d293a63ec3822a3a108f5161c56eeba375-merged.mount: Deactivated successfully.
Nov 24 20:47:14 compute-0 podman[297879]: 2025-11-24 20:47:14.185260978 +0000 UTC m=+0.097146494 container remove 3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclean, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:47:14 compute-0 systemd[1]: libpod-conmon-3712406c54171d19e5f0e6cff54ab2d3483a84a8b17e7b719354d969ac1713c1.scope: Deactivated successfully.
Nov 24 20:47:14 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:14.205 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:47:14 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:14.207 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:47:14 compute-0 sudo[297725]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:47:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:47:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:47:14 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:47:14 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 996ff0ad-bab8-46e0-9e13-436740ed57c0 does not exist
Nov 24 20:47:14 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 334f77ce-fc6b-4cce-93ef-fc49fa3b38a7 does not exist
Nov 24 20:47:14 compute-0 sudo[297894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:47:14 compute-0 sudo[297894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:14 compute-0 sudo[297894]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:14 compute-0 sudo[297919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:47:14 compute-0 sudo[297919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:47:14 compute-0 sudo[297919]: pam_unix(sudo:session): session closed for user root
Nov 24 20:47:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:14.503+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:15.001+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:47:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:47:15 compute-0 ceph-mon[75677]: pgmap v1967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:15.531+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:15 compute-0 podman[297944]: 2025-11-24 20:47:15.87445446 +0000 UTC m=+0.102391255 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:47:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:15.998+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:16 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:47:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3012306924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:47:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:47:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3012306924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:47:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:16.542+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:16.966+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:17 compute-0 ceph-mon[75677]: pgmap v1968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3012306924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:47:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3012306924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:47:17 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:17.572+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:17 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 24 20:47:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:17.913771) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:47:17 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 24 20:47:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017237913811, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1183, "num_deletes": 251, "total_data_size": 1316681, "memory_usage": 1343504, "flush_reason": "Manual Compaction"}
Nov 24 20:47:17 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 24 20:47:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:18.000+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017238072823, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 1297163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55409, "largest_seqno": 56591, "table_properties": {"data_size": 1291641, "index_size": 2662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13983, "raw_average_key_size": 19, "raw_value_size": 1279369, "raw_average_value_size": 1825, "num_data_blocks": 117, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017163, "oldest_key_time": 1764017163, "file_creation_time": 1764017237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 159118 microseconds, and 6641 cpu microseconds.
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.072885) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 1297163 bytes OK
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.072911) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.201761) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.201833) EVENT_LOG_v1 {"time_micros": 1764017238201820, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.201861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1310861, prev total WAL file size 1310861, number of live WAL files 2.
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.202898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(1266KB)], [116(11MB)]
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017238202972, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 13144186, "oldest_snapshot_seqno": -1}
Nov 24 20:47:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 13057 keys, 12410550 bytes, temperature: kUnknown
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017238508062, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 12410550, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12335734, "index_size": 41040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 358135, "raw_average_key_size": 27, "raw_value_size": 12108659, "raw_average_value_size": 927, "num_data_blocks": 1525, "num_entries": 13057, "num_filter_entries": 13057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017238, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.508426) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 12410550 bytes
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.523566) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.1 rd, 40.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.3 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(19.7) write-amplify(9.6) OK, records in: 13571, records dropped: 514 output_compression: NoCompression
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.523623) EVENT_LOG_v1 {"time_micros": 1764017238523608, "job": 70, "event": "compaction_finished", "compaction_time_micros": 305223, "compaction_time_cpu_micros": 34432, "output_level": 6, "num_output_files": 1, "total_output_size": 12410550, "num_input_records": 13571, "num_output_records": 13057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017238524431, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017238528506, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.202720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.528634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.528639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.528642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.528644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:18 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:18.528647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:18.602+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:18 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:18 compute-0 ceph-mon[75677]: pgmap v1969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:19.015+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:19 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:19.210 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:47:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:19.607+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:19 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:20.013+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:20.591+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:20.999+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:21.546+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:22.040+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:22 compute-0 ceph-mon[75677]: pgmap v1970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:22 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:22.585+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:23.040+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:23 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:23 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:23.539+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:24.003+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:24.509+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:47:24
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Nov 24 20:47:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:47:24 compute-0 ceph-mon[75677]: pgmap v1971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:24 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:24 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:25.029+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:25.543+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:25 compute-0 ceph-mon[75677]: pgmap v1972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:25 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:25 compute-0 podman[297963]: 2025-11-24 20:47:25.868917492 +0000 UTC m=+0.089878757 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:47:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:26.010+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:26.538+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:26 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.679420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017246679461, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 359, "num_deletes": 251, "total_data_size": 174354, "memory_usage": 182408, "flush_reason": "Manual Compaction"}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017246708272, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 172160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56592, "largest_seqno": 56950, "table_properties": {"data_size": 169942, "index_size": 318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5867, "raw_average_key_size": 19, "raw_value_size": 165537, "raw_average_value_size": 537, "num_data_blocks": 14, "num_entries": 308, "num_filter_entries": 308, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017238, "oldest_key_time": 1764017238, "file_creation_time": 1764017246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 28920 microseconds, and 1475 cpu microseconds.
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.708338) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 172160 bytes OK
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.708357) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.725191) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.725218) EVENT_LOG_v1 {"time_micros": 1764017246725211, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.725238) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 171902, prev total WAL file size 171902, number of live WAL files 2.
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.725643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(168KB)], [119(11MB)]
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017246725675, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 12582710, "oldest_snapshot_seqno": -1}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 12856 keys, 11050429 bytes, temperature: kUnknown
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017246923943, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 11050429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10978368, "index_size": 38797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32197, "raw_key_size": 354783, "raw_average_key_size": 27, "raw_value_size": 10756189, "raw_average_value_size": 836, "num_data_blocks": 1422, "num_entries": 12856, "num_filter_entries": 12856, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.924317) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 11050429 bytes
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.942975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.4 rd, 55.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(137.3) write-amplify(64.2) OK, records in: 13365, records dropped: 509 output_compression: NoCompression
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.943016) EVENT_LOG_v1 {"time_micros": 1764017246942999, "job": 72, "event": "compaction_finished", "compaction_time_micros": 198375, "compaction_time_cpu_micros": 37202, "output_level": 6, "num_output_files": 1, "total_output_size": 11050429, "num_input_records": 13365, "num_output_records": 12856, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017246943256, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017246947338, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.725533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.947480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.947490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.947493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.947497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:47:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:47:26.947501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
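
The JOB 72 summary above reports write-amplify(64.2) and read-write-amplify(137.3); both follow directly from the logged byte counts, since the manual compaction rewrote an 11 MB L6 file to fold in a 168 KB L0 flush. A quick check:

    # Figures copied from the JOB 72 event lines above.
    l0_in    = 172_160       # input L0 table #121 (the fresh flush)
    total_in = 12_582_710    # input_data_size: L0 input plus L6 input (#119)
    out      = 11_050_429    # output table #122

    write_amp = out / l0_in                # bytes written per byte of new data
    rw_amp    = (total_in + out) / l0_in   # bytes read+written per byte of new data

    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
    # -> write-amplify 64.2, read-write-amplify 137.3
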
Nov 24 20:47:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:26.993+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:27.575+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:27 compute-0 ceph-mon[75677]: pgmap v1973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:27 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:27.993+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:28.580+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:28.999+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:29.581+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:29 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:29.612 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:39:6d 2001:db8:0:1:f816:3eff:fe8b:396d 2001:db8::f816:3eff:fe8b:396d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8b:396d/64 2001:db8::f816:3eff:fe8b:396d/64', 'neutron:device_id': 'ovnmeta-3e93c8e4-761e-434f-8e54-190cfee8e635', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e93c8e4-761e-434f-8e54-190cfee8e635', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b3d3d52-3995-49c7-a80b-b62183f4e1ea, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4beb226a-9b85-4605-bdf3-d91398c022ab) old=Port_Binding(mac=['fa:16:3e:8b:39:6d 2001:db8::f816:3eff:fe8b:396d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8b:396d/64', 'neutron:device_id': 'ovnmeta-3e93c8e4-761e-434f-8e54-190cfee8e635', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e93c8e4-761e-434f-8e54-190cfee8e635', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:47:29 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:29.613 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4beb226a-9b85-4605-bdf3-d91398c022ab in datapath 3e93c8e4-761e-434f-8e54-190cfee8e635 updated
Nov 24 20:47:29 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:29.615 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e93c8e4-761e-434f-8e54-190cfee8e635, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:47:29 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:47:29.618 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[73438cab-207b-4008-9102-abc4dcca428c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
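
The PortBindingUpdatedEvent line above dumps both the new Port_Binding row and the previous one after "old="; comparing them shows the update added 2001:db8:0:1:f816:3eff:fe8b:396d/64 to neutron:cidrs and bumped neutron:revision_number from 2 to 3, after which the agent found no VIF ports on the network and tore the namespace down. A sketch that extracts such field changes from the logged reprs (text matching only, not the ovsdbapp API; the filename is illustrative):

    import re

    def neutron_field(row_repr, key):
        # Pull one 'neutron:<key>' value out of a logged Port_Binding(...) repr.
        m = re.search(r"'neutron:{}': '([^']*)'".format(re.escape(key)), row_repr)
        return m.group(1) if m else None

    with open("ovn-metadata.log") as fh:   # hypothetical export of the agent log
        for line in fh:
            if "PortBindingUpdatedEvent" not in line or " old=" not in line:
                continue
            new_repr, old_repr = line.split(" old=", 1)
            for key in ("revision_number", "cidrs"):
                before, after = neutron_field(old_repr, key), neutron_field(new_repr, key)
                if before != after:
                    print(f"neutron:{key}: {before!r} -> {after!r}")
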
Nov 24 20:47:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:29.970+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:30.566+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:30 compute-0 podman[297984]: 2025-11-24 20:47:30.88247611 +0000 UTC m=+0.117397801 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 20:47:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:30.939+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:31.565+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:31.897+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:32.587+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:32.868+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:33.578+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:33 compute-0 ceph-mds[102499]: mds.beacon.cephfs.compute-0.jkqrlp missed beacon ack from the monitors
Nov 24 20:47:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:33.869+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:34.566+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:34.917+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:47:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
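
Each pg_autoscaler target above is usage_ratio * bias * a per-cluster PG budget; from the logged numbers that budget works out to exactly 300, consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster (the OSD count is an assumption, it is not printed in these lines). The result is then quantized to a power of two, as the "quantized to" values show. A worked check:

    # Ratios, biases and raw targets copied from the pg_autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.0008637525843263658, 1.0, 0.25912577529790976),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }

    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    for name, (ratio, bias, target) in pools.items():
        assert abs(ratio * bias * PG_BUDGET - target) < 1e-12, name
    print("pg target = usage_ratio * bias * 300 for every pool")
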
Nov 24 20:47:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:35.545+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:35.889+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:36.585+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3367 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
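
The health check totals are consistent with the per-OSD reports: 34 slow ops is the 14 on osd.0 plus the 20 on osd.1, and "blocked for 3367 sec" at 20:47:36 dates the oldest stuck op to about 19:51:29:

    from datetime import datetime, timedelta

    seen = datetime(2025, 11, 24, 20, 47, 36)
    print(seen - timedelta(seconds=3367))   # -> 2025-11-24 19:51:29
    print(14 + 20)                          # -> 34 ops across osd.0 and osd.1
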
Nov 24 20:47:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:36.880+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:37 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:37 compute-0 sshd-session[298010]: Invalid user local from 182.93.7.194 port 49542
Nov 24 20:47:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:37.568+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:37 compute-0 sshd-session[298010]: Received disconnect from 182.93.7.194 port 49542:11: Bye Bye [preauth]
Nov 24 20:47:37 compute-0 sshd-session[298010]: Disconnected from invalid user local 182.93.7.194 port 49542 [preauth]
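
Amid the Ceph noise, sshd logs a password-guessing probe: a connection from 182.93.7.194 tried the invalid user "local" and disconnected before authenticating. A sketch for tallying such attempts per source address, assuming the same exported-journal convention as above (the filename is illustrative):

    import re
    from collections import Counter

    INVALID_RE = re.compile(
        r"sshd[^:]*: Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+"
    )

    attempts = Counter()
    with open("secure.log") as fh:   # hypothetical export of this journal
        for line in fh:
            m = INVALID_RE.search(line)
            if m:
                user, ip = m.groups()
                attempts[(ip, user)] += 1

    for (ip, user), n in attempts.most_common():
        print(f"{ip} tried invalid user {user!r} x{n}")
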
Nov 24 20:47:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:37.875+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: pgmap v1974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: pgmap v1975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: pgmap v1976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: pgmap v1977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: pgmap v1978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3367 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:38.611+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:38.851+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:39.564+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:39 compute-0 ceph-mon[75677]: pgmap v1979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:39 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:39.842+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:40.611+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:47:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:47:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:47:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:47:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:47:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:40.804+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:41 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:41.647+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:41.782+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3377 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:42 compute-0 ceph-mon[75677]: pgmap v1980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:42 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:42 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:42 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3377 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:42.646+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:42.765+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:43.663+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:43.731+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:44.672+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:44.773+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:44 compute-0 ceph-mon[75677]: pgmap v1981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:44 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:45.643+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:45.758+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:46 compute-0 ceph-mon[75677]: pgmap v1982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:46.602+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:46.783+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3382 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:46 compute-0 podman[298012]: 2025-11-24 20:47:46.858891842 +0000 UTC m=+0.084374081 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 20:47:47 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:47 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3382 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:47.624+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:47.739+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:48 compute-0 ceph-mon[75677]: pgmap v1983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:48.665+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:48.699+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:49 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:49.662+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:49.703+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:50 compute-0 ceph-mon[75677]: pgmap v1984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:50.699+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:50.745+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:51.669+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:51.720+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3387 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:52 compute-0 ceph-mon[75677]: pgmap v1985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:52 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3387 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:52.624+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:52.763+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:53 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:53.579+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:53.745+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:54 compute-0 ceph-mon[75677]: pgmap v1986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:54 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:47:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:54.541+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:54.715+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:55 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:55.510+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:55.713+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:56 compute-0 ceph-mon[75677]: pgmap v1987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:56 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:56.527+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:56.680+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3391 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:47:56 compute-0 podman[298032]: 2025-11-24 20:47:56.853428558 +0000 UTC m=+0.070289159 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 20:47:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:57 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3391 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:47:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:57.573+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:57.674+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:58 compute-0 ceph-mon[75677]: pgmap v1988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:58 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:47:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:58.620+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:58.646+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:47:59 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:47:59.659+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:47:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:47:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:47:59.663+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:47:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:00 compute-0 ceph-mon[75677]: pgmap v1989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:00 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:00.662+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:00.689+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:01 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:01 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:01.641+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:01.685+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:01 compute-0 podman[298052]: 2025-11-24 20:48:01.87680759 +0000 UTC m=+0.106339213 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 20:48:02 compute-0 ceph-mon[75677]: pgmap v1990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:02 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:02 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:02.627+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:02.643+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:03 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:03.645+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:03.659+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:04 compute-0 ceph-mon[75677]: pgmap v1991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:04 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:04.627+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:04.659+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:05 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:05 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:05.582+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:05.649+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:06 compute-0 ceph-mon[75677]: pgmap v1992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:06 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:06 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:06.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:06.654+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:07 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:07 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:07.545+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:07.690+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:08 compute-0 ceph-mon[75677]: pgmap v1993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:08.512+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:08.719+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:09 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:09.403 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:48:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:09.404 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:48:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:09.404 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:48:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:09.503+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:09.688+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:10 compute-0 ceph-mon[75677]: pgmap v1994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:10 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:10.471+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:10.673+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:11 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:11 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:11.463+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:11.631+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3407 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
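_set_new_cache_sizes is routine monitor housekeeping, not part of the slow-ops problem: the mon periodically re-divides its cache budget between incremental osdmaps (inc_alloc), full osdmaps (full_alloc) and the RocksDB cache (kv_alloc); the three allocations here sum to just under the reported cache_size of roughly 0.95 GiB, and the figures never change across repeats. A sketch for inspecting the budget they are derived from, assuming the standard mon_memory_target autotuning:

    ceph config get mon mon_memory_target              # configured memory budget
    ceph config show mon.compute-0 mon_memory_target   # effective value on this monitor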
Nov 24 20:48:12 compute-0 ceph-mon[75677]: pgmap v1995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:12 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3407 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:12.499+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:12.589+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:13 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:13.534+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:13.563+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:14 compute-0 ceph-mon[75677]: pgmap v1996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:14 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:14.493+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:14 compute-0 sudo[298080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:14 compute-0 sudo[298080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:14.565+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:14 compute-0 sudo[298080]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:14 compute-0 sudo[298105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:48:14 compute-0 sudo[298105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:14 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:14.658 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:48:14 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:14.659 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:48:14 compute-0 sudo[298105]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:14 compute-0 sudo[298130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:14 compute-0 sudo[298130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:14 compute-0 sudo[298130]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:14 compute-0 sudo[298155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:48:14 compute-0 sudo[298155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:15 compute-0 sudo[298155]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:15.469+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:15 compute-0 sudo[298210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:15 compute-0 sudo[298210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:15 compute-0 sudo[298210]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:15.591+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:15 compute-0 sudo[298235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:48:15 compute-0 sudo[298235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:15 compute-0 sudo[298235]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:15 compute-0 sudo[298260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:15 compute-0 sudo[298260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:15 compute-0 sudo[298260]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:15 compute-0 sudo[298285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 24 20:48:15 compute-0 sudo[298285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:16 compute-0 sudo[298285]: pam_unix(sudo:session): session closed for user root
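The sudo bursts from ceph-admin are one pass of the cephadm orchestrator's SSH loop against this host: /bin/true as a connectivity-and-sudo probe, `which python3` to select an interpreter, then the content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ running gather-facts and list-networks to refresh the host inventory. The same probes can be run by hand; a sketch, assuming the cephadm command is installed on the host (jq only for readability):

    sudo cephadm gather-facts | jq .hostname   # hardware/OS facts as JSON
    sudo cephadm list-networks                 # subnet-to-interface map for this host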
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:48:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:16.445+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2637140313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2637140313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:16.544+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:16 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:16.661 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
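Together with the SbGlobalUpdateEvent at 20:48:14, this transaction is the OVN metadata agent's liveness handshake: northd bumped SB_Global.nb_cfg from 23 to 24, the agent deliberately waited 2 seconds (the "Delaying updating chassis table" line) so that all chassis do not write back at once, and now acks by writing neutron:ovn-metadata-sb-cfg=24 into its Chassis_Private row, which is how Neutron decides the agent is alive. A sketch for checking the handshake from the host, assuming local ovn-sbctl access to the southbound DB:

    ovn-sbctl get SB_Global . nb_cfg                              # current sequence number
    ovn-sbctl --columns=name,external_ids list Chassis_Private    # per-chassis acks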
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: pgmap v1997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:16 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:16 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2637140313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2637140313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3412 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d57855da-f143-4b5a-a8b7-b68d8120377e does not exist
Nov 24 20:48:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev dbb05209-49eb-4e99-a6b2-3216980cb6b6 does not exist
Nov 24 20:48:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 74d85684-4f17-4bb5-88ac-6e6ce90570e6 does not exist
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:48:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:48:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
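This burst of handle_command/audit lines is one cephadm "serve" cycle talking to the mon: persisting the refreshed host inventory under config-key (the mgr/cephadm/host.compute-0* keys), answering capacity queries from client.openstack at 192.168.122.10 (df, osd pool get-quota), and fetching keyrings plus a minimal conf for daemon deployment; in the audit channel, [INF] marks state-changing commands and [DBG] read-only dispatches. A sketch for browsing what cephadm has persisted, with the key name copied from the config-key set line above (jq only for readability):

    ceph config-key ls | grep mgr/cephadm/host                        # inventory keys per host
    ceph config-key get mgr/cephadm/host.compute-0.devices.0 | jq .   # cached device inventory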
Nov 24 20:48:16 compute-0 sudo[298329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:16 compute-0 sudo[298329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:16 compute-0 sudo[298329]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:16 compute-0 sudo[298360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:48:16 compute-0 sudo[298360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:16 compute-0 sudo[298360]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:17 compute-0 podman[298353]: 2025-11-24 20:48:17.013048158 +0000 UTC m=+0.087690529 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
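This podman event is the periodic health check for the ovn_metadata_agent container: health_status=healthy with a failing streak of 0, using the 'test': '/openstack/healthcheck' command declared in its config_data. The check can be triggered on demand; a sketch using the standard podman subcommand and the container name from the event:

    sudo podman healthcheck run ovn_metadata_agent && echo healthy   # exit status 0 means the check passed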
Nov 24 20:48:17 compute-0 sudo[298398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:17 compute-0 sudo[298398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:17 compute-0 sudo[298398]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:17 compute-0 sudo[298423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:48:17 compute-0 sudo[298423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
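This is the OSD reconciliation step of the same serve cycle: ceph-volume lvm batch over three pre-created logical volumes, with --no-auto to treat them as explicit data devices, --yes to skip the interactive prompt, and --no-systemd because cephadm, not ceph-volume, creates the systemd units; the CEPH_VOLUME_OSDSPEC_AFFINITY variable tags any resulting OSDs with the default_drive_group spec. A non-destructive preview of the same call, as a sketch (stock --report flag, fsid copied from the command line above):

    sudo cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
        lvm batch --report /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2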
Nov 24 20:48:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:17.456+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:17.582+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.509303568 +0000 UTC m=+0.044840402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.607400627 +0000 UTC m=+0.142937391 container create 51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 20:48:17 compute-0 systemd[1]: Started libpod-conmon-51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f.scope.
Nov 24 20:48:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.739797622 +0000 UTC m=+0.275334336 container init 51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.754948581 +0000 UTC m=+0.290485335 container start 51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.759893484 +0000 UTC m=+0.295430248 container attach 51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:48:17 compute-0 sharp_cray[298504]: 167 167
Nov 24 20:48:17 compute-0 systemd[1]: libpod-51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f.scope: Deactivated successfully.
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.764748886 +0000 UTC m=+0.300285650 container died 51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:48:17 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:17 compute-0 ceph-mon[75677]: pgmap v1998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:48:17 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3412 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:48:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d1bf86f511b102420eda6e1e7a2294d27dc2910a9a94e599d0964e411dcf958-merged.mount: Deactivated successfully.
Nov 24 20:48:17 compute-0 podman[298488]: 2025-11-24 20:48:17.809457113 +0000 UTC m=+0.344993847 container remove 51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 20:48:17 compute-0 systemd[1]: libpod-conmon-51ebb3d6ff7861186e18a93760e35cfff644340e21b1f8e87477fff8e7a45c9f.scope: Deactivated successfully.
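The short-lived sharp_cray container (create to remove in about 200 ms) is cephadm probing the ceph image before using it: the only output, "167 167", matches the uid and gid of the ceph user inside the image, which cephadm needs when chowning host directories. The exact probe command is not shown in the log; an equivalent one-off check, as a sketch:

    sudo podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph   # prints the owning uid/gid baked into the image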
Nov 24 20:48:17 compute-0 podman[298528]: 2025-11-24 20:48:17.998882797 +0000 UTC m=+0.050301668 container create d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_black, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:48:18 compute-0 systemd[1]: Started libpod-conmon-d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6.scope.
Nov 24 20:48:18 compute-0 podman[298528]: 2025-11-24 20:48:17.97303941 +0000 UTC m=+0.024458291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:48:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e29bdb1a059cfb604c5fb9fe04ce844d35f8167385e3e36f3b40dc7502dc68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e29bdb1a059cfb604c5fb9fe04ce844d35f8167385e3e36f3b40dc7502dc68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e29bdb1a059cfb604c5fb9fe04ce844d35f8167385e3e36f3b40dc7502dc68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e29bdb1a059cfb604c5fb9fe04ce844d35f8167385e3e36f3b40dc7502dc68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e29bdb1a059cfb604c5fb9fe04ce844d35f8167385e3e36f3b40dc7502dc68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
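These kernel notices repeat every time podman bind-mounts paths for a ceph container: the backing XFS filesystem was created without the bigtime feature, so its inode timestamps cap at year 2038. They are informational, not errors. A sketch for confirming the feature flag, assuming /var/lib/containers is the XFS mount in question and a reasonably recent xfsprogs:

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'   # 0 = y2038-limited, 1 = big timestamps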
Nov 24 20:48:18 compute-0 podman[298528]: 2025-11-24 20:48:18.098996011 +0000 UTC m=+0.150414902 container init d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:48:18 compute-0 podman[298528]: 2025-11-24 20:48:18.113124272 +0000 UTC m=+0.164543113 container start d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:48:18 compute-0 podman[298528]: 2025-11-24 20:48:18.116827653 +0000 UTC m=+0.168246544 container attach d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_black, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:48:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:18.443+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:18.570+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:19 compute-0 modest_black[298545]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:48:19 compute-0 modest_black[298545]: --> relative data size: 1.0
Nov 24 20:48:19 compute-0 modest_black[298545]: --> All data devices are unavailable
Nov 24 20:48:19 compute-0 systemd[1]: libpod-d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6.scope: Deactivated successfully.
Nov 24 20:48:19 compute-0 systemd[1]: libpod-d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6.scope: Consumed 1.058s CPU time.
Nov 24 20:48:19 compute-0 podman[298528]: 2025-11-24 20:48:19.219154348 +0000 UTC m=+1.270573249 container died d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_black, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-98e29bdb1a059cfb604c5fb9fe04ce844d35f8167385e3e36f3b40dc7502dc68-merged.mount: Deactivated successfully.
Nov 24 20:48:19 compute-0 podman[298528]: 2025-11-24 20:48:19.294065001 +0000 UTC m=+1.345483832 container remove d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:48:19 compute-0 systemd[1]: libpod-conmon-d1ccc613c4be2a20c527e786b4ab1c84b47d1710bc8906f331f87770fbbee1f6.scope: Deactivated successfully.
Nov 24 20:48:19 compute-0 sudo[298423]: pam_unix(sudo:session): session closed for user root
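That closes the batch run started at 20:48:17: inside modest_black, ceph-volume parsed the three LVM data devices and then reported "All data devices are unavailable", which here means the LVs are already consumed by the running OSDs, so there was nothing new to create; the orchestrator immediately follows up with the lvm list call below to reconcile its inventory. The same confirmation by hand, as a sketch (same fsid as above, jq only for readability):

    sudo cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- \
        lvm list --format json | jq 'keys'   # OSD ids already present on these LVs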
Nov 24 20:48:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:19.425+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:19 compute-0 sudo[298586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:19 compute-0 sudo[298586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:19 compute-0 sudo[298586]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:19 compute-0 sudo[298611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:48:19 compute-0 sudo[298611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:19 compute-0 sudo[298611]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:19.588+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:19 compute-0 sudo[298636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:19 compute-0 sudo[298636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:19 compute-0 sudo[298636]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:19 compute-0 sudo[298661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:48:19 compute-0 sudo[298661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:19 compute-0 ceph-mon[75677]: pgmap v1999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:19 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.079960572 +0000 UTC m=+0.053088285 container create 2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_turing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 20:48:20 compute-0 systemd[1]: Started libpod-conmon-2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1.scope.
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.055151151 +0000 UTC m=+0.028278914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:48:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.176053887 +0000 UTC m=+0.149181620 container init 2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_turing, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.185541863 +0000 UTC m=+0.158669566 container start 2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.189405717 +0000 UTC m=+0.162533430 container attach 2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:48:20 compute-0 competent_turing[298745]: 167 167
Nov 24 20:48:20 compute-0 systemd[1]: libpod-2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1.scope: Deactivated successfully.
Nov 24 20:48:20 compute-0 conmon[298745]: conmon 2c3bdf87564c80a9d582 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1.scope/container/memory.events
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.19469572 +0000 UTC m=+0.167823433 container died 2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_turing, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 20:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-468a06f75d089b43b38c114ac35c4bde4bef01781475b26889fa0a8f455a7628-merged.mount: Deactivated successfully.
Nov 24 20:48:20 compute-0 podman[298729]: 2025-11-24 20:48:20.237736683 +0000 UTC m=+0.210864426 container remove 2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:48:20 compute-0 systemd[1]: libpod-conmon-2c3bdf87564c80a9d5821e22e03ee4182529b505259824ec25521ad8723650e1.scope: Deactivated successfully.
Nov 24 20:48:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:20.392+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:20 compute-0 podman[298769]: 2025-11-24 20:48:20.495144493 +0000 UTC m=+0.073493556 container create f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:48:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:20 compute-0 systemd[1]: Started libpod-conmon-f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944.scope.
Nov 24 20:48:20 compute-0 podman[298769]: 2025-11-24 20:48:20.467277971 +0000 UTC m=+0.045627074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:48:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77989e4e32431f2900ecafa4005044d9ed63238c9976c258f361cc4e1ba3c67d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77989e4e32431f2900ecafa4005044d9ed63238c9976c258f361cc4e1ba3c67d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77989e4e32431f2900ecafa4005044d9ed63238c9976c258f361cc4e1ba3c67d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77989e4e32431f2900ecafa4005044d9ed63238c9976c258f361cc4e1ba3c67d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:20 compute-0 podman[298769]: 2025-11-24 20:48:20.605302598 +0000 UTC m=+0.183651671 container init f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:48:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:20.616+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:20 compute-0 podman[298769]: 2025-11-24 20:48:20.625667207 +0000 UTC m=+0.204016230 container start f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:48:20 compute-0 podman[298769]: 2025-11-24 20:48:20.630730514 +0000 UTC m=+0.209079587 container attach f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hertz, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:48:20 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:20 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:21.420+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:21 compute-0 funny_hertz[298785]: {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:     "0": [
Nov 24 20:48:21 compute-0 funny_hertz[298785]:         {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "devices": [
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "/dev/loop3"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             ],
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_name": "ceph_lv0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_size": "21470642176",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "name": "ceph_lv0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "tags": {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cluster_name": "ceph",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.crush_device_class": "",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.encrypted": "0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osd_id": "0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.type": "block",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.vdo": "0"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             },
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "type": "block",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "vg_name": "ceph_vg0"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:         }
Nov 24 20:48:21 compute-0 funny_hertz[298785]:     ],
Nov 24 20:48:21 compute-0 funny_hertz[298785]:     "1": [
Nov 24 20:48:21 compute-0 funny_hertz[298785]:         {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "devices": [
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "/dev/loop4"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             ],
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_name": "ceph_lv1",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_size": "21470642176",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "name": "ceph_lv1",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "tags": {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cluster_name": "ceph",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.crush_device_class": "",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.encrypted": "0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osd_id": "1",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.type": "block",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.vdo": "0"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             },
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "type": "block",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "vg_name": "ceph_vg1"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:         }
Nov 24 20:48:21 compute-0 funny_hertz[298785]:     ],
Nov 24 20:48:21 compute-0 funny_hertz[298785]:     "2": [
Nov 24 20:48:21 compute-0 funny_hertz[298785]:         {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "devices": [
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "/dev/loop5"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             ],
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_name": "ceph_lv2",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_size": "21470642176",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "name": "ceph_lv2",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "tags": {
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.cluster_name": "ceph",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.crush_device_class": "",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.encrypted": "0",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osd_id": "2",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.type": "block",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:                 "ceph.vdo": "0"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             },
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "type": "block",
Nov 24 20:48:21 compute-0 funny_hertz[298785]:             "vg_name": "ceph_vg2"
Nov 24 20:48:21 compute-0 funny_hertz[298785]:         }
Nov 24 20:48:21 compute-0 funny_hertz[298785]:     ]
Nov 24 20:48:21 compute-0 funny_hertz[298785]: }
Nov 24 20:48:21 compute-0 systemd[1]: libpod-f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944.scope: Deactivated successfully.
Nov 24 20:48:21 compute-0 podman[298769]: 2025-11-24 20:48:21.565325651 +0000 UTC m=+1.143674774 container died f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:48:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:21.647+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-77989e4e32431f2900ecafa4005044d9ed63238c9976c258f361cc4e1ba3c67d-merged.mount: Deactivated successfully.
Nov 24 20:48:21 compute-0 podman[298769]: 2025-11-24 20:48:21.785880176 +0000 UTC m=+1.364229189 container remove f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:48:21 compute-0 systemd[1]: libpod-conmon-f00bfb672b114c008b3615ae21e57a5ada13fadbb45a3e583a794ef3a7dd2944.scope: Deactivated successfully.
Nov 24 20:48:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:21 compute-0 ceph-mon[75677]: pgmap v2000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:21 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:21 compute-0 sudo[298661]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:21 compute-0 sudo[298806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:21 compute-0 sudo[298806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:21 compute-0 sudo[298806]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:21 compute-0 sudo[298831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:48:21 compute-0 sudo[298831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:21 compute-0 sudo[298831]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:22 compute-0 sudo[298856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:22 compute-0 sudo[298856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:22 compute-0 sudo[298856]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:22 compute-0 sudo[298881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:48:22 compute-0 sudo[298881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:22.390+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.518968351 +0000 UTC m=+0.044776090 container create 753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:48:22 compute-0 systemd[1]: Started libpod-conmon-753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e.scope.
Nov 24 20:48:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.497977505 +0000 UTC m=+0.023785284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.596810493 +0000 UTC m=+0.122618252 container init 753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.604183392 +0000 UTC m=+0.129991131 container start 753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.607760859 +0000 UTC m=+0.133568598 container attach 753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wright, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:48:22 compute-0 beautiful_wright[298963]: 167 167
Nov 24 20:48:22 compute-0 systemd[1]: libpod-753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e.scope: Deactivated successfully.
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.611058558 +0000 UTC m=+0.136866287 container died 753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-611db2a0c3f6bc7e5b0fa08c582e3e3832cc1dad72b3c1caf38e9aca5eba2d59-merged.mount: Deactivated successfully.
Nov 24 20:48:22 compute-0 podman[298947]: 2025-11-24 20:48:22.658088858 +0000 UTC m=+0.183896597 container remove 753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wright, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:48:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:22.657+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:22 compute-0 systemd[1]: libpod-conmon-753ef8912adc5314c02d013b6489ad5a478c8db35eba559434e0cb94ae75056e.scope: Deactivated successfully.
Nov 24 20:48:22 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:22 compute-0 podman[298986]: 2025-11-24 20:48:22.835113448 +0000 UTC m=+0.041327597 container create bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:48:22 compute-0 systemd[1]: Started libpod-conmon-bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8.scope.
Nov 24 20:48:22 compute-0 podman[298986]: 2025-11-24 20:48:22.81738704 +0000 UTC m=+0.023601209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:48:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06dabb1b9138c81183545fe36936753d0b585910dfc3d61f935983161898896f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06dabb1b9138c81183545fe36936753d0b585910dfc3d61f935983161898896f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06dabb1b9138c81183545fe36936753d0b585910dfc3d61f935983161898896f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06dabb1b9138c81183545fe36936753d0b585910dfc3d61f935983161898896f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:48:22 compute-0 podman[298986]: 2025-11-24 20:48:22.942763854 +0000 UTC m=+0.148978113 container init bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wright, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:48:22 compute-0 podman[298986]: 2025-11-24 20:48:22.96070999 +0000 UTC m=+0.166924179 container start bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:48:22 compute-0 podman[298986]: 2025-11-24 20:48:22.965321853 +0000 UTC m=+0.171536042 container attach bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:48:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 8270 writes, 32K keys, 8270 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8270 writes, 1961 syncs, 4.22 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 901 writes, 2566 keys, 901 commit groups, 1.0 writes per commit group, ingest: 1.59 MB, 0.00 MB/s
                                           Interval WAL: 901 writes, 378 syncs, 2.38 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:48:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:23.401+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:23.622+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:23 compute-0 ceph-mon[75677]: pgmap v2001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:23 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:23 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:23 compute-0 boring_wright[299003]: {
Nov 24 20:48:23 compute-0 boring_wright[299003]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "osd_id": 2,
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "type": "bluestore"
Nov 24 20:48:23 compute-0 boring_wright[299003]:     },
Nov 24 20:48:23 compute-0 boring_wright[299003]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "osd_id": 1,
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "type": "bluestore"
Nov 24 20:48:23 compute-0 boring_wright[299003]:     },
Nov 24 20:48:23 compute-0 boring_wright[299003]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "osd_id": 0,
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:48:23 compute-0 boring_wright[299003]:         "type": "bluestore"
Nov 24 20:48:23 compute-0 boring_wright[299003]:     }
Nov 24 20:48:23 compute-0 boring_wright[299003]: }
Nov 24 20:48:23 compute-0 systemd[1]: libpod-bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8.scope: Deactivated successfully.
Nov 24 20:48:23 compute-0 podman[298986]: 2025-11-24 20:48:23.97729158 +0000 UTC m=+1.183505789 container died bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:48:23 compute-0 systemd[1]: libpod-bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8.scope: Consumed 1.018s CPU time.
Nov 24 20:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-06dabb1b9138c81183545fe36936753d0b585910dfc3d61f935983161898896f-merged.mount: Deactivated successfully.
Nov 24 20:48:24 compute-0 podman[298986]: 2025-11-24 20:48:24.120474436 +0000 UTC m=+1.326688595 container remove bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wright, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 20:48:24 compute-0 systemd[1]: libpod-conmon-bd3b7b280b5779926bab5dbdc630e3b2b16ea8e830f80dd3b6f3234e61092bb8.scope: Deactivated successfully.
Nov 24 20:48:24 compute-0 sudo[298881]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:48:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f7bd0ce3-57ca-4e34-879c-60d3e5ec0e7c does not exist
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d7dae268-c609-4766-a55b-12850008f149 does not exist
Nov 24 20:48:24 compute-0 sudo[299050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:48:24 compute-0 sudo[299050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:24 compute-0 sudo[299050]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:24.375+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:24 compute-0 sudo[299075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:48:24 compute-0 sudo[299075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:48:24 compute-0 sudo[299075]: pam_unix(sudo:session): session closed for user root
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:48:24
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Nov 24 20:48:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:48:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:24.599+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:48:25 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:25.422+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:25.567+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:26 compute-0 ceph-mon[75677]: pgmap v2002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:26 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:26.406+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:26.603+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3422 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.814334) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017306814393, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 936, "num_deletes": 251, "total_data_size": 993876, "memory_usage": 1011232, "flush_reason": "Manual Compaction"}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017306833311, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 701444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56951, "largest_seqno": 57886, "table_properties": {"data_size": 697396, "index_size": 1508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12874, "raw_average_key_size": 22, "raw_value_size": 687907, "raw_average_value_size": 1194, "num_data_blocks": 65, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017247, "oldest_key_time": 1764017247, "file_creation_time": 1764017306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 19056 microseconds, and 4834 cpu microseconds.
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.833386) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 701444 bytes OK
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.833411) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.838395) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.838424) EVENT_LOG_v1 {"time_micros": 1764017306838414, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.838446) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 988993, prev total WAL file size 988993, number of live WAL files 2.
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.839292) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353031' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(685KB)], [122(10MB)]
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017306839354, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11751873, "oldest_snapshot_seqno": -1}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 12938 keys, 8657019 bytes, temperature: kUnknown
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017306904041, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8657019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8588823, "index_size": 34740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 357624, "raw_average_key_size": 27, "raw_value_size": 8369229, "raw_average_value_size": 646, "num_data_blocks": 1253, "num_entries": 12938, "num_filter_entries": 12938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.904293) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8657019 bytes
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.905440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 133.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.5 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(29.1) write-amplify(12.3) OK, records in: 13432, records dropped: 494 output_compression: NoCompression
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.905458) EVENT_LOG_v1 {"time_micros": 1764017306905449, "job": 74, "event": "compaction_finished", "compaction_time_micros": 64735, "compaction_time_cpu_micros": 25265, "output_level": 6, "num_output_files": 1, "total_output_size": 8657019, "num_input_records": 13432, "num_output_records": 12938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017306905679, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017306907533, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.839196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.907560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.907564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.907566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.907567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:48:26 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:48:26.907568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:48:27 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:27 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3422 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:27.327 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:84:fd 2001:db8:0:1:f816:3eff:fe5b:84fd 2001:db8::f816:3eff:fe5b:84fd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe5b:84fd/64 2001:db8::f816:3eff:fe5b:84fd/64', 'neutron:device_id': 'ovnmeta-3535cbbc-fb1c-40c7-9883-72b8639130dc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3535cbbc-fb1c-40c7-9883-72b8639130dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c686094-1208-41c4-b154-369b069cb453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8574a7dd-61db-42a5-a740-cfe06add4926) old=Port_Binding(mac=['fa:16:3e:5b:84:fd 2001:db8::f816:3eff:fe5b:84fd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5b:84fd/64', 'neutron:device_id': 'ovnmeta-3535cbbc-fb1c-40c7-9883-72b8639130dc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3535cbbc-fb1c-40c7-9883-72b8639130dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:48:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:27.329 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8574a7dd-61db-42a5-a740-cfe06add4926 in datapath 3535cbbc-fb1c-40c7-9883-72b8639130dc updated
Nov 24 20:48:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:27.331 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3535cbbc-fb1c-40c7-9883-72b8639130dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:48:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:48:27.332 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae1c0b7-9cdc-4c13-918b-4ade35ce5f42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:48:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:27.411+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:27.555+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:27 compute-0 podman[299100]: 2025-11-24 20:48:27.884751139 +0000 UTC m=+0.095417097 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:48:28 compute-0 ceph-mon[75677]: pgmap v2003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:28 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:28 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:28.362+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:28.600+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:48:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 9377 writes, 36K keys, 9377 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9377 writes, 2341 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1039 writes, 2890 keys, 1039 commit groups, 1.0 writes per commit group, ingest: 1.52 MB, 0.00 MB/s
                                           Interval WAL: 1039 writes, 444 syncs, 2.34 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:48:29 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:29 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:29.316+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:29.643+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:30.271+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:30 compute-0 ceph-mon[75677]: pgmap v2004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:30 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:30 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:30.617+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:31.228+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:31 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:31 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:31.569+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3427 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:32.262+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:32 compute-0 ceph-mon[75677]: pgmap v2005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:32 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:32 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:32 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3427 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:32.546+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:32 compute-0 podman[299122]: 2025-11-24 20:48:32.887357421 +0000 UTC m=+0.124260085 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:48:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:33.244+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:33 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:33 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:33.503+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:34.273+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:34 compute-0 ceph-mon[75677]: pgmap v2006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:34 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:34 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:48:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 7894 writes, 31K keys, 7894 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7894 writes, 1805 syncs, 4.37 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1063 writes, 3149 keys, 1063 commit groups, 1.0 writes per commit group, ingest: 1.58 MB, 0.00 MB/s
                                           Interval WAL: 1063 writes, 441 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:48:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:34.515+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:48:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:48:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:35.280+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:35 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:35 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:35.555+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:36.267+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:36 compute-0 ceph-mon[75677]: pgmap v2007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:36 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:36 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:36.535+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:37.248+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:37 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:37 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:37.493+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:38.241+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:38 compute-0 ceph-mon[75677]: pgmap v2008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:38.467+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:39.271+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:39 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:39.430+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 20:48:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:40.277+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:40 compute-0 ceph-mon[75677]: pgmap v2009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:40 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:40 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:40.457+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:48:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:48:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:48:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:48:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:48:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:41.244+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:41 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:41.464+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
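[annotation] This mon line is the cluster-level SLOW_OPS health check; the per-OSD get_health_metrics lines above are the daemons' own view of the same stuck ops (14 on osd.0, 20 on osd.1). A minimal sketch, assuming the journal text is saved to a local file (name illustrative), for checking whether the blocked age keeps climbing — in this excerpt it goes 3442 -> 3447 -> 3457 -> 3462 sec while the count stays at 34, i.e. the same ops remain stuck rather than new ones accumulating:

    # Hypothetical helper: extract (op_count, blocked_seconds) from each
    # "Health check update: N slow ops, oldest one blocked for S sec" line.
    import re

    PATTERN = re.compile(
        r"Health check update: (\d+) slow ops, oldest one blocked for (\d+) sec"
    )

    def slow_op_ages(path):
        """Yield (op_count, blocked_seconds) for each SLOW_OPS update."""
        with open(path) as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    yield int(m.group(1)), int(m.group(2))

    if __name__ == "__main__":
        for count, sec in slow_op_ages("compute-0.log"):
            # A monotonically rising age with a flat count means the same
            # ops stay blocked, as seen in this section of the journal.
            print(f"{count} slow ops, oldest blocked {sec}s")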
Nov 24 20:48:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:42.285+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:42 compute-0 ceph-mon[75677]: pgmap v2010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:42 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:42 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:42.481+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:43.316+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:43.474+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:44.330+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:44 compute-0 ceph-mon[75677]: pgmap v2011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:44 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:44.478+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:45.351+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:45.457+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:45 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:45 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:46.311+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:46.449+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:46 compute-0 ceph-mon[75677]: pgmap v2012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:47.344+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:47.439+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3447 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:47 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:47 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:47 compute-0 podman[299150]: 2025-11-24 20:48:47.833464101 +0000 UTC m=+0.067343769 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
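[annotation] podman emits one of these container health_status events per healthcheck run; the key/value fields visible in the parenthesized blob (name=, health_status=, health_failing_streak=) are the useful signal, the rest is container config echoed back. A minimal sketch, under the same saved-journal assumption, for summarizing them:

    # Hypothetical helper: summarize podman "container health_status" events
    # like the ovn_metadata_agent line above. The field names come straight
    # from the log text; the file name is illustrative.
    import re

    EVENT = re.compile(
        r"container health_status .*?name=([^,]+).*?"
        r"health_status=([^,]+).*?health_failing_streak=(\d+)"
    )

    def health_events(path):
        """Yield (container_name, status, failing_streak) per event line."""
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    name, status, streak = m.groups()
                    yield name, status, int(streak)

    if __name__ == "__main__":
        for name, status, streak in health_events("compute-0.log"):
            print(f"{name}: {status} (failing streak {streak})")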
Nov 24 20:48:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:48.391+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:48.394+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:48 compute-0 ceph-mon[75677]: pgmap v2013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:48 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3447 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:49.377+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:49.417+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:49 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:50.379+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:50.425+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:50 compute-0 ceph-mon[75677]: pgmap v2014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:51.376+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:51.422+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:51 compute-0 ceph-mon[75677]: pgmap v2015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:52.336+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:52.457+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:53.298+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:53.474+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:53 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:53 compute-0 ceph-mon[75677]: pgmap v2016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:54.339+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:48:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:54.444+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:54 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:55.369+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:55.403+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:55 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:55 compute-0 ceph-mon[75677]: pgmap v2017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:56.322+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:56.355+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:56 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:48:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:57.319+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:57.325+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:57 compute-0 ceph-mon[75677]: pgmap v2018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:57 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:48:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:58.299+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:58.368+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:58 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:58 compute-0 podman[299172]: 2025-11-24 20:48:58.862568453 +0000 UTC m=+0.089925079 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:48:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:48:59.281+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:48:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:48:59.381+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:48:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:48:59 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:48:59 compute-0 ceph-mon[75677]: pgmap v2019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:48:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:00.331+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:00.396+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:00 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:01.293+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:01.403+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:01 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:01 compute-0 ceph-mon[75677]: pgmap v2020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:01 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:01 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:01 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:02.251+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:02.416+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:02 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:03.257+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:03.437+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:03 compute-0 podman[299190]: 2025-11-24 20:49:03.946128551 +0000 UTC m=+0.170763642 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 20:49:03 compute-0 ceph-mon[75677]: pgmap v2021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:03 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:04.262+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:04.456+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:04 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:05.260+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:05.492+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:06 compute-0 ceph-mon[75677]: pgmap v2022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:06 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:06 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:06.255+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:06.467+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3467 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:07 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:07.240+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:07.497+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:08 compute-0 ceph-mon[75677]: pgmap v2023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:08 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3467 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:08.271+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:08.528+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:09 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:09.314+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:09.404 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:49:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:09.405 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:49:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:09.405 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:49:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:09.520+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:10 compute-0 ceph-mon[75677]: pgmap v2024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:10 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:10.286+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:10.556+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:11 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:11 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:11.280+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:11.520+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:12 compute-0 ceph-mon[75677]: pgmap v2025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:12.250+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:12.480+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:13 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:13.202+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:13.512+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:14 compute-0 ceph-mon[75677]: pgmap v2026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:14 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:14.247+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:14.488+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:15.230+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:15.449+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:15 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:15.516 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:49:15 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:15.518 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:49:15 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:15.519 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:49:16 compute-0 ceph-mon[75677]: pgmap v2027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:16 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:16.231+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:49:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:16.474+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/975732920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:49:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:49:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/975732920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:49:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3472 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:17 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/975732920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:49:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/975732920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:49:17 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3472 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:17.207+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:17.495+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:18 compute-0 ceph-mon[75677]: pgmap v2028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:18.194+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:18.542+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:18 compute-0 podman[299216]: 2025-11-24 20:49:18.841656611 +0000 UTC m=+0.072096617 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:49:19 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:19.238+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:19.515+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:20 compute-0 ceph-mon[75677]: pgmap v2029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:20 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:20 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:20.211+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:20.539+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:21.248+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:21.781+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 34 slow ops, oldest one blocked for 3482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:22.297+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:22 compute-0 ceph-mon[75677]: pgmap v2030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:22 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:22 compute-0 ceph-mon[75677]: Health check update: 34 slow ops, oldest one blocked for 3482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:22 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:49:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:22.821+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:23.341+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:23.778+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:23 compute-0 ceph-mon[75677]: pgmap v2031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:23 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:24.357+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:49:24 compute-0 sudo[299238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:49:24
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:49:24 compute-0 sudo[299238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:49:24 compute-0 sudo[299238]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 20:49:24 compute-0 sudo[299263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:49:24 compute-0 sudo[299263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:24 compute-0 sudo[299263]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:24 compute-0 sudo[299288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:24 compute-0 sudo[299288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:24 compute-0 sudo[299288]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:24.743+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:24 compute-0 sudo[299313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:49:24 compute-0 sudo[299313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:24 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:25.355+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:25 compute-0 sudo[299313]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:25 compute-0 sudo[299371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:25 compute-0 sudo[299371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:25 compute-0 sudo[299371]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:25 compute-0 sudo[299396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:49:25 compute-0 sudo[299396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:25 compute-0 sudo[299396]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:25 compute-0 sudo[299421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:25 compute-0 sudo[299421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:25 compute-0 sudo[299421]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:25.752+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:25 compute-0 ceph-mon[75677]: pgmap v2032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 20:49:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:25 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:25 compute-0 sudo[299446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- inventory --format=json-pretty --filter-for-batch
Nov 24 20:49:25 compute-0 sudo[299446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.292663797 +0000 UTC m=+0.076024104 container create 35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:49:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:26.311+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:26 compute-0 systemd[1]: Started libpod-conmon-35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452.scope.
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.260326494 +0000 UTC m=+0.043686642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.392095843 +0000 UTC m=+0.175455970 container init 35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.400310533 +0000 UTC m=+0.183670580 container start 35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.403683365 +0000 UTC m=+0.187043492 container attach 35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:49:26 compute-0 suspicious_newton[299528]: 167 167
Nov 24 20:49:26 compute-0 systemd[1]: libpod-35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452.scope: Deactivated successfully.
Nov 24 20:49:26 compute-0 conmon[299528]: conmon 35c07ff41105b26c82cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452.scope/container/memory.events
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.409485602 +0000 UTC m=+0.192845649 container died 35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9063d5d74ba071cd9d6ac43744959424111fae7ed7c1ef3800d76acacebb0b54-merged.mount: Deactivated successfully.
Nov 24 20:49:26 compute-0 podman[299511]: 2025-11-24 20:49:26.451450035 +0000 UTC m=+0.234810082 container remove 35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:49:26 compute-0 systemd[1]: libpod-conmon-35c07ff41105b26c82cdf66867ef9d2a1701e6431c4f0f9b174080eea4448452.scope: Deactivated successfully.
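
The short-lived suspicious_newton container above exists only to print "167 167", the ceph uid and gid inside the image, which cephadm reads before writing to the host's /var/lib/ceph directories. A minimal sketch of the same probe, assuming (a guess from the output; the log does not show the command) that the probed path is /var/lib/ceph inside the image:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run stat inside a throwaway container and read back "167 167".
    # The probed path is an assumption; only the "167 167" output is
    # confirmed by the capture above.
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True)
    uid, gid = out.split()
    print(f"ceph uid={uid} gid={gid}")  # expected: 167 167
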
Nov 24 20:49:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:26 compute-0 podman[299553]: 2025-11-24 20:49:26.66866616 +0000 UTC m=+0.054606885 container create 6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:49:26 compute-0 systemd[1]: Started libpod-conmon-6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247.scope.
Nov 24 20:49:26 compute-0 podman[299553]: 2025-11-24 20:49:26.645854284 +0000 UTC m=+0.031795149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23e0124b39e2e4526002f833a4019e94833ec8ee938f98d82f54e980e4525c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23e0124b39e2e4526002f833a4019e94833ec8ee938f98d82f54e980e4525c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23e0124b39e2e4526002f833a4019e94833ec8ee938f98d82f54e980e4525c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23e0124b39e2e4526002f833a4019e94833ec8ee938f98d82f54e980e4525c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:26.763+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:26 compute-0 podman[299553]: 2025-11-24 20:49:26.779150164 +0000 UTC m=+0.165090859 container init 6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:49:26 compute-0 podman[299553]: 2025-11-24 20:49:26.787664644 +0000 UTC m=+0.173605339 container start 6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:49:26 compute-0 podman[299553]: 2025-11-24 20:49:26.794512868 +0000 UTC m=+0.180453553 container attach 6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:49:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 3487 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:26 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:27.354+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:27.791+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:27 compute-0 ceph-mon[75677]: pgmap v2033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:27 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 3487 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:27 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
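
The SLOW_OPS health check keeps being re-raised while cephadm works: 29 slow ops, oldest blocked for roughly 3487 s, across osd.0 and osd.1. A minimal sketch of pulling the same summary out of band with the ceph CLI, assuming client.admin access on this node (script structure is illustrative, not part of the captured run):

    import json
    import subprocess

    # Ask the cluster for health detail and print the SLOW_OPS summary,
    # mirroring the "Health check update" lines in this log.
    raw = subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"])
    health = json.loads(raw)
    check = health.get("checks", {}).get("SLOW_OPS")
    if check:
        # e.g. "29 slow ops, oldest one blocked for 3487 sec, ..."
        print(check["summary"]["message"])
    else:
        print("no SLOW_OPS check active")
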
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]: [
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:     {
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "available": false,
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "ceph_device": false,
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "lsm_data": {},
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "lvs": [],
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "path": "/dev/sr0",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "rejected_reasons": [
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "Insufficient space (<5GB)",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "Has a FileSystem"
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         ],
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         "sys_api": {
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "actuators": null,
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "device_nodes": "sr0",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "devname": "sr0",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "human_readable_size": "482.00 KB",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "id_bus": "ata",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "model": "QEMU DVD-ROM",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "nr_requests": "2",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "parent": "/dev/sr0",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "partitions": {},
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "path": "/dev/sr0",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "removable": "1",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "rev": "2.5+",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "ro": "0",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "rotational": "1",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "sas_address": "",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "sas_device_handle": "",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "scheduler_mode": "mq-deadline",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "sectors": 0,
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "sectorsize": "2048",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "size": 493568.0,
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "support_discard": "2048",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "type": "disk",
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:             "vendor": "QEMU"
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:         }
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]:     }
Nov 24 20:49:28 compute-0 festive_dubinsky[299570]: ]
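
The JSON array above is the result of the ceph-volume inventory run launched at 20:49:25 with --filter-for-batch: the only device reported is /dev/sr0, and it is rejected for insufficient space and an existing filesystem. A short sketch of summarizing such a dump, assuming it has been saved to inventory.json (an illustrative file name):

    import json

    # Summarize a saved `ceph-volume inventory --format json` dump:
    # one line per device with availability and rejection reasons.
    with open("inventory.json") as fh:
        devices = json.load(fh)  # top level is a JSON array of devices

    for dev in devices:
        state = "available" if dev.get("available") else "rejected"
        reasons = "; ".join(dev.get("rejected_reasons", [])) or "-"
        print(f"{dev['path']}: {state} ({reasons})")

    # Against the capture above this prints:
    # /dev/sr0: rejected (Insufficient space (<5GB); Has a FileSystem)
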
Nov 24 20:49:28 compute-0 systemd[1]: libpod-6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247.scope: Deactivated successfully.
Nov 24 20:49:28 compute-0 systemd[1]: libpod-6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247.scope: Consumed 1.496s CPU time.
Nov 24 20:49:28 compute-0 podman[299553]: 2025-11-24 20:49:28.252265482 +0000 UTC m=+1.638206207 container died 6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 20:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f23e0124b39e2e4526002f833a4019e94833ec8ee938f98d82f54e980e4525c7-merged.mount: Deactivated successfully.
Nov 24 20:49:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:28.351+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:28 compute-0 podman[299553]: 2025-11-24 20:49:28.356076168 +0000 UTC m=+1.742016893 container remove 6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:49:28 compute-0 systemd[1]: libpod-conmon-6a1d17ee72a91864be5fe07221081f827a4419138c0f138b18d39cd4f31d0247.scope: Deactivated successfully.
Nov 24 20:49:28 compute-0 sudo[299446]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:28 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d1447dac-1d39-4627-baef-58c43aba075b does not exist
Nov 24 20:49:28 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 38b1a2a1-af30-46df-ab2f-345659c64b8d does not exist
Nov 24 20:49:28 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9a3fab67-a947-46bc-8bba-63614000525f does not exist
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:49:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:49:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:49:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:28 compute-0 sudo[301412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:28 compute-0 sudo[301412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:28 compute-0 sudo[301412]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:28 compute-0 sudo[301437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:49:28 compute-0 sudo[301437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:28 compute-0 sudo[301437]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:28 compute-0 sudo[301462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:28 compute-0 sudo[301462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:28 compute-0 sudo[301462]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:28.806+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:28 compute-0 sudo[301487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:49:28 compute-0 sudo[301487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.319862299 +0000 UTC m=+0.065887326 container create d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 20:49:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:29.345+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:29 compute-0 systemd[1]: Started libpod-conmon-d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba.scope.
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.292237152 +0000 UTC m=+0.038262259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.415272481 +0000 UTC m=+0.161297578 container init d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:49:29 compute-0 podman[301565]: 2025-11-24 20:49:29.425367817 +0000 UTC m=+0.071068547 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.426828937 +0000 UTC m=+0.172853984 container start d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.430900278 +0000 UTC m=+0.176925325 container attach d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:49:29 compute-0 eager_dewdney[301573]: 167 167
Nov 24 20:49:29 compute-0 systemd[1]: libpod-d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba.scope: Deactivated successfully.
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.432750259 +0000 UTC m=+0.178775266 container died d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d3efa9ff23e4d7335e2717f6a858e50f42f11c814a4aa4f6a30aa2b356463d1-merged.mount: Deactivated successfully.
Nov 24 20:49:29 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:49:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:49:29 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:29 compute-0 podman[301551]: 2025-11-24 20:49:29.478679547 +0000 UTC m=+0.224704564 container remove d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:49:29 compute-0 systemd[1]: libpod-conmon-d77ad557817ba47585a8dba0362f97a22125d5b6ee6bd051c8767d8ff4fa90ba.scope: Deactivated successfully.
Nov 24 20:49:29 compute-0 podman[301611]: 2025-11-24 20:49:29.735292304 +0000 UTC m=+0.066273926 container create 25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:49:29 compute-0 systemd[1]: Started libpod-conmon-25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232.scope.
Nov 24 20:49:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:29.793+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:29 compute-0 podman[301611]: 2025-11-24 20:49:29.706109205 +0000 UTC m=+0.037090897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8ae7cd8105018bea520851d35bd2b7e602f5b3d57ce554f3e70eb690f56139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8ae7cd8105018bea520851d35bd2b7e602f5b3d57ce554f3e70eb690f56139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8ae7cd8105018bea520851d35bd2b7e602f5b3d57ce554f3e70eb690f56139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8ae7cd8105018bea520851d35bd2b7e602f5b3d57ce554f3e70eb690f56139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8ae7cd8105018bea520851d35bd2b7e602f5b3d57ce554f3e70eb690f56139/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:29 compute-0 podman[301611]: 2025-11-24 20:49:29.86226027 +0000 UTC m=+0.193241892 container init 25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:49:29 compute-0 podman[301611]: 2025-11-24 20:49:29.881669312 +0000 UTC m=+0.212650934 container start 25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:49:29 compute-0 podman[301611]: 2025-11-24 20:49:29.889489666 +0000 UTC m=+0.220471278 container attach 25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:49:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:30.297+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:30 compute-0 ceph-mon[75677]: pgmap v2034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:30 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:30 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:30.790+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:30 compute-0 agitated_bell[301627]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:49:30 compute-0 agitated_bell[301627]: --> relative data size: 1.0
Nov 24 20:49:30 compute-0 agitated_bell[301627]: --> All data devices are unavailable
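
agitated_bell is the `lvm batch --no-auto` call from the sudo line at 20:49:28: it sees the three logical volumes passed on the command line ("0 physical, 3 LVM") but deems them all unavailable, so no OSDs are prepared. One way to inspect that decision without side effects is a batch dry run; the sketch below re-issues the same call through the cephadm wrapper with the standard ceph-volume --report flag (fsid and LV paths are copied from the capture; that this exact build accepts --report is an assumption):

    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    LVS = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]

    # Dry-run the same batch call: with --report, ceph-volume prints
    # what it would do (or why it would do nothing) without touching
    # the logical volumes.
    subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--no-auto", *LVS, "--report"],
        check=True)
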
Nov 24 20:49:31 compute-0 systemd[1]: libpod-25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232.scope: Deactivated successfully.
Nov 24 20:49:31 compute-0 podman[301611]: 2025-11-24 20:49:31.009374601 +0000 UTC m=+1.340356203 container died 25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 24 20:49:31 compute-0 systemd[1]: libpod-25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232.scope: Consumed 1.076s CPU time.
Nov 24 20:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc8ae7cd8105018bea520851d35bd2b7e602f5b3d57ce554f3e70eb690f56139-merged.mount: Deactivated successfully.
Nov 24 20:49:31 compute-0 podman[301611]: 2025-11-24 20:49:31.062132776 +0000 UTC m=+1.393114358 container remove 25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:49:31 compute-0 systemd[1]: libpod-conmon-25a8beb84de1cbf243bb05a7532ee1fe8a016cfffc0a565ecda088533735d232.scope: Deactivated successfully.
Nov 24 20:49:31 compute-0 sudo[301487]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:31 compute-0 sudo[301669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:31 compute-0 sudo[301669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:31 compute-0 sudo[301669]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:31.251+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:31 compute-0 sudo[301694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:49:31 compute-0 sudo[301694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:31 compute-0 sudo[301694]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:31 compute-0 sudo[301719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:31 compute-0 sudo[301719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:31 compute-0 sudo[301719]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:31 compute-0 sudo[301744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:49:31 compute-0 sudo[301744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
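
This final cephadm call asks ceph-volume which logical volumes are already tagged as OSDs (`lvm list --format json`); its output falls outside this excerpt. A sketch of summarizing that JSON once captured, assuming the usual lvm-list layout of an object keyed by OSD id (lvm_list.json is an illustrative file name):

    import json

    # Map OSD ids to their backing LVs from a saved
    # `ceph-volume lvm list --format json` dump.
    with open("lvm_list.json") as fh:
        osds = json.load(fh)  # assumed layout: {"0": [lv, ...], "1": [...]}

    for osd_id in sorted(osds, key=int):
        for lv in osds[osd_id]:
            print(f"osd.{osd_id}: {lv.get('lv_path')} ({lv.get('type')})")
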
Nov 24 20:49:31 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:31 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:31.753+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 3492 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:31 compute-0 podman[301811]: 2025-11-24 20:49:31.903404602 +0000 UTC m=+0.062614245 container create ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:49:31 compute-0 systemd[1]: Started libpod-conmon-ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534.scope.
Nov 24 20:49:31 compute-0 podman[301811]: 2025-11-24 20:49:31.873725079 +0000 UTC m=+0.032934772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:32 compute-0 podman[301811]: 2025-11-24 20:49:32.006822734 +0000 UTC m=+0.166032417 container init ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_solomon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:49:32 compute-0 podman[301811]: 2025-11-24 20:49:32.018823522 +0000 UTC m=+0.178033165 container start ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 20:49:32 compute-0 podman[301811]: 2025-11-24 20:49:32.024069906 +0000 UTC m=+0.183279589 container attach ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_solomon, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 20:49:32 compute-0 vibrant_solomon[301827]: 167 167
Nov 24 20:49:32 compute-0 systemd[1]: libpod-ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534.scope: Deactivated successfully.
Nov 24 20:49:32 compute-0 podman[301811]: 2025-11-24 20:49:32.031218652 +0000 UTC m=+0.190428285 container died ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 24 20:49:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f20ac8d9b69ea35996cd9d0d0586868ffcc4164cf245d5863939f60e9942bd6c-merged.mount: Deactivated successfully.
Nov 24 20:49:32 compute-0 podman[301811]: 2025-11-24 20:49:32.085424146 +0000 UTC m=+0.244633779 container remove ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:49:32 compute-0 systemd[1]: libpod-conmon-ac82369a341dc8932383b07d03cb60fac1bf1e7a47734b5f98c3b55aa029d534.scope: Deactivated successfully.
Nov 24 20:49:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:32.265+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:32 compute-0 podman[301851]: 2025-11-24 20:49:32.30104444 +0000 UTC m=+0.051881662 container create 4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 20:49:32 compute-0 systemd[1]: Started libpod-conmon-4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a.scope.
Nov 24 20:49:32 compute-0 podman[301851]: 2025-11-24 20:49:32.280490808 +0000 UTC m=+0.031328050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9b88dc443d72a2b67c0f1b0b2e023e498e937d80560c0a1c707d66c200e772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9b88dc443d72a2b67c0f1b0b2e023e498e937d80560c0a1c707d66c200e772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9b88dc443d72a2b67c0f1b0b2e023e498e937d80560c0a1c707d66c200e772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9b88dc443d72a2b67c0f1b0b2e023e498e937d80560c0a1c707d66c200e772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:32 compute-0 podman[301851]: 2025-11-24 20:49:32.400472622 +0000 UTC m=+0.151309944 container init 4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:49:32 compute-0 podman[301851]: 2025-11-24 20:49:32.413945712 +0000 UTC m=+0.164782964 container start 4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khayyam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:49:32 compute-0 podman[301851]: 2025-11-24 20:49:32.418162668 +0000 UTC m=+0.168999920 container attach 4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 20:49:32 compute-0 ceph-mon[75677]: pgmap v2035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:32 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:32 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:32 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 3492 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:32.769+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]: {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:     "0": [
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:         {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "devices": [
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "/dev/loop3"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             ],
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_name": "ceph_lv0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_size": "21470642176",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "name": "ceph_lv0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "tags": {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cluster_name": "ceph",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.crush_device_class": "",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.encrypted": "0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osd_id": "0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.type": "block",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.vdo": "0"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             },
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "type": "block",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "vg_name": "ceph_vg0"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:         }
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:     ],
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:     "1": [
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:         {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "devices": [
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "/dev/loop4"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             ],
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_name": "ceph_lv1",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_size": "21470642176",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "name": "ceph_lv1",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "tags": {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cluster_name": "ceph",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.crush_device_class": "",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.encrypted": "0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osd_id": "1",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.type": "block",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.vdo": "0"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             },
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "type": "block",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "vg_name": "ceph_vg1"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:         }
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:     ],
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:     "2": [
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:         {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "devices": [
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "/dev/loop5"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             ],
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_name": "ceph_lv2",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_size": "21470642176",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "name": "ceph_lv2",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "tags": {
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.cluster_name": "ceph",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.crush_device_class": "",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.encrypted": "0",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osd_id": "2",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.type": "block",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:                 "ceph.vdo": "0"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             },
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "type": "block",
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:             "vg_name": "ceph_vg2"
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:         }
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]:     ]
Nov 24 20:49:33 compute-0 gallant_khayyam[301868]: }
Nov 24 20:49:33 compute-0 systemd[1]: libpod-4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a.scope: Deactivated successfully.
Nov 24 20:49:33 compute-0 podman[301851]: 2025-11-24 20:49:33.197377554 +0000 UTC m=+0.948214826 container died 4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 20:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b9b88dc443d72a2b67c0f1b0b2e023e498e937d80560c0a1c707d66c200e772-merged.mount: Deactivated successfully.
Nov 24 20:49:33 compute-0 podman[301851]: 2025-11-24 20:49:33.274720461 +0000 UTC m=+1.025557713 container remove 4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:49:33 compute-0 systemd[1]: libpod-conmon-4cfc330a6e49a47c279befd5626aab7928e051ed6120a8bd509a7ec2faa8686a.scope: Deactivated successfully.
Nov 24 20:49:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:33.294+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:33 compute-0 sudo[301744]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:33 compute-0 sudo[301887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:33 compute-0 sudo[301887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:33 compute-0 sudo[301887]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:33 compute-0 sudo[301912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:49:33 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:33 compute-0 ceph-mon[75677]: pgmap v2036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:33 compute-0 sudo[301912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:33 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:33 compute-0 sudo[301912]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:33 compute-0 sudo[301937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:33 compute-0 sudo[301937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:33 compute-0 sudo[301937]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:33 compute-0 sudo[301962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:49:33 compute-0 sudo[301962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:33.806+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.134864644 +0000 UTC m=+0.044788457 container create b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:49:34 compute-0 systemd[1]: Started libpod-conmon-b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f.scope.
Nov 24 20:49:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.113987603 +0000 UTC m=+0.023911446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.238573004 +0000 UTC m=+0.148496857 container init b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.251686863 +0000 UTC m=+0.161610686 container start b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:49:34 compute-0 festive_payne[302045]: 167 167
Nov 24 20:49:34 compute-0 systemd[1]: libpod-b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f.scope: Deactivated successfully.
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.27273324 +0000 UTC m=+0.182657093 container attach b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.27384863 +0000 UTC m=+0.183772433 container died b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:49:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:34.278+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e3fc34958783d611fbdf556a2b1a554dcc7c7e672da07372582cc28ab4a86a1-merged.mount: Deactivated successfully.
Nov 24 20:49:34 compute-0 podman[302028]: 2025-11-24 20:49:34.516436883 +0000 UTC m=+0.426360726 container remove b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:49:34 compute-0 podman[302042]: 2025-11-24 20:49:34.526402106 +0000 UTC m=+0.337092481 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 20:49:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:34 compute-0 systemd[1]: libpod-conmon-b37f480e638a3bae4438c8db72f677383147fae19e7e4b97c9e448419e1a6c0f.scope: Deactivated successfully.
Nov 24 20:49:34 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:34 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:34 compute-0 podman[302096]: 2025-11-24 20:49:34.752752584 +0000 UTC m=+0.061495005 container create e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lewin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:49:34 compute-0 systemd[1]: Started libpod-conmon-e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13.scope.
Nov 24 20:49:34 compute-0 podman[302096]: 2025-11-24 20:49:34.72448894 +0000 UTC m=+0.033231411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:49:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b842dc5cc02832af1d14a319aced165de3f67fded1ce8fd556994d5bcec8cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b842dc5cc02832af1d14a319aced165de3f67fded1ce8fd556994d5bcec8cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b842dc5cc02832af1d14a319aced165de3f67fded1ce8fd556994d5bcec8cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b842dc5cc02832af1d14a319aced165de3f67fded1ce8fd556994d5bcec8cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:49:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:34.849+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:34 compute-0 podman[302096]: 2025-11-24 20:49:34.855638541 +0000 UTC m=+0.164380952 container init e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:49:34 compute-0 podman[302096]: 2025-11-24 20:49:34.874376084 +0000 UTC m=+0.183118485 container start e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lewin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:49:34 compute-0 podman[302096]: 2025-11-24 20:49:34.877527241 +0000 UTC m=+0.186269642 container attach e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:49:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:49:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:35.286+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:35 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:35 compute-0 ceph-mon[75677]: pgmap v2037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 20:49:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:35.823+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:35 compute-0 gifted_lewin[302112]: {
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "osd_id": 2,
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "type": "bluestore"
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:     },
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "osd_id": 1,
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "type": "bluestore"
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:     },
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "osd_id": 0,
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:         "type": "bluestore"
Nov 24 20:49:35 compute-0 gifted_lewin[302112]:     }
Nov 24 20:49:35 compute-0 gifted_lewin[302112]: }
Nov 24 20:49:35 compute-0 systemd[1]: libpod-e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13.scope: Deactivated successfully.
Nov 24 20:49:35 compute-0 podman[302096]: 2025-11-24 20:49:35.994851205 +0000 UTC m=+1.303593616 container died e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:49:35 compute-0 systemd[1]: libpod-e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13.scope: Consumed 1.098s CPU time.
Nov 24 20:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b842dc5cc02832af1d14a319aced165de3f67fded1ce8fd556994d5bcec8cc-merged.mount: Deactivated successfully.
Nov 24 20:49:36 compute-0 podman[302096]: 2025-11-24 20:49:36.051410395 +0000 UTC m=+1.360152786 container remove e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 20:49:36 compute-0 systemd[1]: libpod-conmon-e5cbb62b3e899a98af5a516e58b9be693bfb873e98c5e4d3946c0d8f15bdec13.scope: Deactivated successfully.
Nov 24 20:49:36 compute-0 sudo[301962]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:49:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:49:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev dfb8b77f-17c9-4ffd-b248-4b0e4f22c7f4 does not exist
Nov 24 20:49:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d88c2586-e290-4520-9fd9-e2cce831afb5 does not exist
Nov 24 20:49:36 compute-0 sudo[302158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:49:36 compute-0 sudo[302158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:36 compute-0 sudo[302158]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:36 compute-0 sudo[302183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:49:36 compute-0 sudo[302183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:49:36 compute-0 sudo[302183]: pam_unix(sudo:session): session closed for user root
Nov 24 20:49:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:36.314+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 24 20:49:36 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:36 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:49:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:36.843+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 3497 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:37.344+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:37 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:37 compute-0 ceph-mon[75677]: pgmap v2038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 24 20:49:37 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 3497 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:37.872+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:38.319+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:38 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:38.893+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:39.284+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:39 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:39 compute-0 ceph-mon[75677]: pgmap v2039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:39.905+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:40.238+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:40 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:40 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:49:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:49:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:49:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:49:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:49:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:40.878+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:41.240+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:41 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:41 compute-0 ceph-mon[75677]: pgmap v2040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:41.893+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:42.289+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:42 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:42.930+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:43.247+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:43 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:43 compute-0 ceph-mon[75677]: pgmap v2041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:43.953+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:44.262+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:44 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:44 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:44.906 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:73:13 10.100.0.2 2001:db8::f816:3eff:fe36:7313'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe36:7313/64', 'neutron:device_id': 'ovnmeta-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ee405eb-e6f9-4216-b811-af2c9fa887bb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=210ba9db-42fc-42a0-a100-94a4bb3fef24) old=Port_Binding(mac=['fa:16:3e:36:73:13 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:49:44 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:44.908 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 210ba9db-42fc-42a0-a100-94a4bb3fef24 in datapath 84e42453-4477-4c26-a600-e91a74adbc41 updated
Nov 24 20:49:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:44.909+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:44 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:44.911 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84e42453-4477-4c26-a600-e91a74adbc41, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:49:44 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:44.912 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[50daebc3-e35a-4402-8ee9-378f42347d80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:49:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:45.248+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:45 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:45 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:45 compute-0 ceph-mon[75677]: pgmap v2042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:45.945+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:46.298+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 3507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:46 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:46.937+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:47.277+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:47 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:47 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:47 compute-0 ceph-mon[75677]: pgmap v2043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:47 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 3507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:47.967+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:48.271+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:48 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:48.996+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:49.308+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:49 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:49 compute-0 ceph-mon[75677]: pgmap v2044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:49 compute-0 podman[302208]: 2025-11-24 20:49:49.853662223 +0000 UTC m=+0.077845863 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 20:49:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:49.959+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:50.330+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:50 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:50.937+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:51.292+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:51 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:51 compute-0 ceph-mon[75677]: pgmap v2045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 3512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:51.980+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:52.264+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:52.546 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:73:13 10.100.0.2 2001:db8:0:1:f816:3eff:fe36:7313 2001:db8::f816:3eff:fe36:7313'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe36:7313/64 2001:db8::f816:3eff:fe36:7313/64', 'neutron:device_id': 'ovnmeta-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ee405eb-e6f9-4216-b811-af2c9fa887bb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=210ba9db-42fc-42a0-a100-94a4bb3fef24) old=Port_Binding(mac=['fa:16:3e:36:73:13 10.100.0.2 2001:db8::f816:3eff:fe36:7313'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe36:7313/64', 'neutron:device_id': 'ovnmeta-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84e42453-4477-4c26-a600-e91a74adbc41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:49:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:52.548 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 210ba9db-42fc-42a0-a100-94a4bb3fef24 in datapath 84e42453-4477-4c26-a600-e91a74adbc41 updated
Nov 24 20:49:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:52.550 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84e42453-4477-4c26-a600-e91a74adbc41, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:49:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:52 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:49:52.550 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[5197633a-55bd-4c8d-bee0-1f7381408b3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:49:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:52 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:52 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 3512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:49:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:53.017+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:53.228+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:53 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:53 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'default.rgw.log' : 15 ])
Nov 24 20:49:53 compute-0 ceph-mon[75677]: pgmap v2046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:54.016+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:54.241+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:49:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:54 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:54.997+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:55.219+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:55 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:55 compute-0 ceph-mon[75677]: pgmap v2047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:56.003+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:56.215+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:56 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:49:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:57.003+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:57.183+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:57 compute-0 ceph-mon[75677]: pgmap v2048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:58.025+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:58.171+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:49:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:49:59.046+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:49:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:59 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:49:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:49:59.145+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:49:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:49:59 compute-0 podman[302227]: 2025-11-24 20:49:59.881551668 +0000 UTC m=+0.104450690 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:50:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:00.011+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:00.118+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:00 compute-0 ceph-mon[75677]: pgmap v2049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:00 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:01.009+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:01.123+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3522 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:01 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:01.963+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:02.125+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:02 compute-0 ceph-mon[75677]: pgmap v2050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:02 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:02 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3522 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:02.918+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:03.095+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:03 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:03.880+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:04.075+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:04 compute-0 ceph-mon[75677]: pgmap v2051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:04 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:04.891+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:05 compute-0 podman[302250]: 2025-11-24 20:50:05.02216825 +0000 UTC m=+0.245497573 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:50:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:05.029+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:05 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:05.915+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:06.003+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:06 compute-0 ceph-mon[75677]: pgmap v2052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:06 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3527 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:06.951+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:06.999+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:07 compute-0 ceph-mon[75677]: pgmap v2053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:07 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3527 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:07.975+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:08.014+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:08 compute-0 sshd-session[302276]: Connection closed by authenticating user nobody 185.156.73.233 port 26496 [preauth]
Nov 24 20:50:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:08.975+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:09.058+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:09.405 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:50:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:09.405 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:50:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:09.406 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:50:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:09 compute-0 ceph-mon[75677]: pgmap v2054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:09.947+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:10.066+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:10.935+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:11.083+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:11 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:11 compute-0 ceph-mon[75677]: pgmap v2055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3532 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:11.908+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:12.061+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:12 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3532 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:12.893+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:13.044+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:13 compute-0 ceph-mon[75677]: pgmap v2056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:13.917+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:14.004+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:14.948+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:15.035+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:15.916+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:16.044+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:16 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:16.071 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:50:16 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:16.073 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:50:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:16 compute-0 ceph-mon[75677]: pgmap v2057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:50:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1871560243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:50:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:50:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1871560243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:50:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:16.965+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:17.001+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:17.075 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:50:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3537 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1871560243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:50:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1871560243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:50:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:17.932+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:17.963+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:18 compute-0 ceph-mon[75677]: pgmap v2058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:18 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3537 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:18.921+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:18.956+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.388495) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017419388540, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1781, "num_deletes": 374, "total_data_size": 2010943, "memory_usage": 2051048, "flush_reason": "Manual Compaction"}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017419610543, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1966888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57887, "largest_seqno": 59667, "table_properties": {"data_size": 1959175, "index_size": 4019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 23739, "raw_average_key_size": 22, "raw_value_size": 1940760, "raw_average_value_size": 1858, "num_data_blocks": 176, "num_entries": 1044, "num_filter_entries": 1044, "num_deletions": 374, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017307, "oldest_key_time": 1764017307, "file_creation_time": 1764017419, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 222209 microseconds, and 9204 cpu microseconds.
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.610695) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1966888 bytes OK
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.610726) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.645571) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.645701) EVENT_LOG_v1 {"time_micros": 1764017419645687, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.645733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 2002206, prev total WAL file size 2002206, number of live WAL files 2.
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.647069) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1920KB)], [125(8454KB)]
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017419647115, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10623907, "oldest_snapshot_seqno": -1}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 13222 keys, 9079460 bytes, temperature: kUnknown
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017419749238, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9079460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9009058, "index_size": 36241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33093, "raw_key_size": 364802, "raw_average_key_size": 27, "raw_value_size": 8784063, "raw_average_value_size": 664, "num_data_blocks": 1311, "num_entries": 13222, "num_filter_entries": 13222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017419, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.749574) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9079460 bytes
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.755416) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 8.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 13982, records dropped: 760 output_compression: NoCompression
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.755449) EVENT_LOG_v1 {"time_micros": 1764017419755434, "job": 76, "event": "compaction_finished", "compaction_time_micros": 102229, "compaction_time_cpu_micros": 37563, "output_level": 6, "num_output_files": 1, "total_output_size": 9079460, "num_input_records": 13982, "num_output_records": 13222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017419756447, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017419758969, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.647002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.759259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.759270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.759273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.759276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:19 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:19.759279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:19.910+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:19.953+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:20 compute-0 ceph-mon[75677]: pgmap v2059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:20 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:20 compute-0 podman[302280]: 2025-11-24 20:50:20.861889679 +0000 UTC m=+0.085002809 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:50:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:20.893+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:20.938+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:21 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:21.937+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:21.963+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:22 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:22.194 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:c5:32 10.100.0.2 2001:db8::f816:3eff:fe56:c532'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe56:c532/64', 'neutron:device_id': 'ovnmeta-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf745b4f-590e-4fad-9d6f-f278c49a0cbc, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d0ee6a56-e8e3-49af-9ebb-017d0cd8123d) old=Port_Binding(mac=['fa:16:3e:56:c5:32 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:50:22 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:22.196 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d0ee6a56-e8e3-49af-9ebb-017d0cd8123d in datapath ffb0ece0-28c7-4a57-bb50-bf7879ba563f updated
Nov 24 20:50:22 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:22.198 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ffb0ece0-28c7-4a57-bb50-bf7879ba563f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:50:22 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:22.199 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[90bb0277-857b-4b11-b54d-025d5a6dded1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:50:22 compute-0 ceph-mon[75677]: pgmap v2060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:22.920+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:22.967+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:23 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:23 compute-0 ceph-mon[75677]: pgmap v2061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:23.934+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:24.004+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:50:24
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'vms', '.mgr']
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:50:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:24.952+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:24.992+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:25 compute-0 ceph-mon[75677]: pgmap v2062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:25.923+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:26.009+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3547 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:26.906+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:26.972+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:27 compute-0 ceph-mon[75677]: pgmap v2063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:27 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3547 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:27.927+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:28.000+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:28 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:28.937+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:29.013+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:29 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:29 compute-0 ceph-mon[75677]: pgmap v2064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:29.946+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:30.042+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:30 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:30.342 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:c5:32 10.100.0.2 2001:db8:0:1:f816:3eff:fe56:c532 2001:db8::f816:3eff:fe56:c532'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe56:c532/64 2001:db8::f816:3eff:fe56:c532/64', 'neutron:device_id': 'ovnmeta-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf745b4f-590e-4fad-9d6f-f278c49a0cbc, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d0ee6a56-e8e3-49af-9ebb-017d0cd8123d) old=Port_Binding(mac=['fa:16:3e:56:c5:32 10.100.0.2 2001:db8::f816:3eff:fe56:c532'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe56:c532/64', 'neutron:device_id': 'ovnmeta-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffb0ece0-28c7-4a57-bb50-bf7879ba563f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:50:30 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:30.345 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d0ee6a56-e8e3-49af-9ebb-017d0cd8123d in datapath ffb0ece0-28c7-4a57-bb50-bf7879ba563f updated
Nov 24 20:50:30 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:30.347 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ffb0ece0-28c7-4a57-bb50-bf7879ba563f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:50:30 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:50:30.348 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[19096159-c40b-4113-8152-9d0e5829e9c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:50:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:30 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:30 compute-0 podman[302302]: 2025-11-24 20:50:30.866629804 +0000 UTC m=+0.088858074 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:50:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:30.907+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:31.050+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:31 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:31 compute-0 ceph-mon[75677]: pgmap v2065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3552 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:31.940+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:32.007+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:32 compute-0 sshd-session[302278]: Received disconnect from 14.63.196.175 port 53380:11: Bye Bye [preauth]
Nov 24 20:50:32 compute-0 sshd-session[302278]: Disconnected from authenticating user root 14.63.196.175 port 53380 [preauth]
Nov 24 20:50:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:32 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:32 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3552 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:32.921+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:33.019+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:33 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:33 compute-0 ceph-mon[75677]: pgmap v2066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:33.905+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:34.048+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:34 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:34.877+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:35.025+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:50:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:50:35 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:35 compute-0 ceph-mon[75677]: pgmap v2067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:35.860+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:35 compute-0 podman[302321]: 2025-11-24 20:50:35.90371424 +0000 UTC m=+0.133788785 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:50:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:36.020+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:36 compute-0 sudo[302347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:36 compute-0 sudo[302347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:36 compute-0 sudo[302347]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:36 compute-0 sudo[302372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:50:36 compute-0 sudo[302372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:36 compute-0 sudo[302372]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:36 compute-0 sudo[302397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:36 compute-0 sudo[302397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:36 compute-0 sudo[302397]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:36 compute-0 sudo[302422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 20:50:36 compute-0 sudo[302422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:36 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3557 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
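
[annotation] The SLOW_OPS health check above is the sum of the two per-OSD reports in this window: 14 delayed ops on osd.0 (pool 'vms') plus 21 on osd.1 (pool 'default.rgw.log') = 35, with the oldest blocked for 3557 s, i.e. stuck since roughly 19:51. One way to see the individual ops behind such a check is each OSD's admin socket; a hedged sketch (the cephadm shell wrapping is the conventional invocation on a host like this one and may differ per deployment):

    # Hedged sketch: dump the in-flight ops behind a SLOW_OPS health check.
    import json
    import subprocess

    for osd in ("osd.0", "osd.1"):
        out = subprocess.run(
            ["cephadm", "shell", "--", "ceph", "daemon", osd, "dump_ops_in_flight"],
            capture_output=True, text=True, check=True).stdout
        for op in json.loads(out)["ops"]:
            # age is seconds the op has been in flight; description matches
            # the osd_op(...) text seen in the log lines above
            print(osd, round(op["age"], 1), op["description"][:80])
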
Nov 24 20:50:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.867014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017436867048, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 520, "num_deletes": 303, "total_data_size": 313316, "memory_usage": 324088, "flush_reason": "Manual Compaction"}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017436870283, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 308246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59668, "largest_seqno": 60187, "table_properties": {"data_size": 305470, "index_size": 619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8204, "raw_average_key_size": 20, "raw_value_size": 299343, "raw_average_value_size": 730, "num_data_blocks": 27, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 303, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017420, "oldest_key_time": 1764017420, "file_creation_time": 1764017436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 3329 microseconds, and 1528 cpu microseconds.
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.870339) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 308246 bytes OK
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.870365) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.872072) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.872096) EVENT_LOG_v1 {"time_micros": 1764017436872088, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.872118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 310021, prev total WAL file size 310021, number of live WAL files 2.
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
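
[annotation] The rocksdb lines from "New memtable created" down to the WAL deletion above are one complete memtable flush on the monitor's store.db: JOB 77 writes a 520-entry memtable out as L0 table #130 (~301 KB) in about 3.3 ms, commits it, and deletes the now-redundant write-ahead log 000126.log. The EVENT_LOG_v1 payloads are plain JSON after a fixed prefix, so they can be mined directly from a saved log; a minimal sketch:

    # Minimal sketch: extract the JSON payloads from rocksdb EVENT_LOG_v1 lines.
    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # e.g. total bytes written by table creations (flushes and compactions):
    # sum(e["file_size"] for e in rocksdb_events(open("mon.log"))
    #     if e.get("event") == "table_file_creation")
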
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.873185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303231' seq:0, type:0; will stop at (end)
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(301KB)], [128(8866KB)]
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017436873238, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 9387706, "oldest_snapshot_seqno": -1}
Nov 24 20:50:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:36.881+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:36 compute-0 sudo[302422]: pam_unix(sudo:session): session closed for user root
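
[annotation] The sudo quartet at 20:50:36 (/bin/true, /bin/which python3, /bin/true, then the bundled cephadm binary) is the mgr's cephadm module probing this host over SSH as ceph-admin before each remote call: first that sudo escalation works at all, then that a python3 exists to run the copied cephadm script, then the script itself (check-host here; gather-facts and ceph-volume follow the same pattern below). A hedged reconstruction of the probe sequence as it would look run locally:

    # Hedged reconstruction of the per-call probe sequence logged above;
    # the real caller is the mgr's cephadm module acting over SSH.
    import subprocess

    probes = [
        ["sudo", "/bin/true"],              # can we escalate at all?
        ["sudo", "/bin/which", "python3"],  # is a python3 available for cephadm?
        ["sudo", "/bin/true"],              # re-check just before the real command
    ]
    for p in probes:
        subprocess.run(p, check=True)
    # then: sudo /bin/python3 /var/lib/ceph/<fsid>/cephadm.<digest> --timeout 895 check-host
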
Nov 24 20:50:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 13018 keys, 9190390 bytes, temperature: kUnknown
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017436966005, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9190390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9120868, "index_size": 35873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32581, "raw_key_size": 361368, "raw_average_key_size": 27, "raw_value_size": 8898968, "raw_average_value_size": 683, "num_data_blocks": 1291, "num_entries": 13018, "num_filter_entries": 13018, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.966333) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9190390 bytes
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.1 rd, 99.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.7 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(60.3) write-amplify(29.8) OK, records in: 13632, records dropped: 614 output_compression: NoCompression
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970129) EVENT_LOG_v1 {"time_micros": 1764017436970115, "job": 78, "event": "compaction_finished", "compaction_time_micros": 92849, "compaction_time_cpu_micros": 50048, "output_level": 6, "num_output_files": 1, "total_output_size": 9190390, "num_input_records": 13632, "num_output_records": 13018, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.873079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:50:36.970309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017436972432, "job": 0, "event": "table_file_deletion", "file_number": 130}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:50:36 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017436976060, "job": 0, "event": "table_file_deletion", "file_number": 128}
Nov 24 20:50:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:50:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:37.031+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:37 compute-0 sudo[302465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:37 compute-0 sudo[302465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:37 compute-0 sudo[302465]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:37 compute-0 sudo[302490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:50:37 compute-0 sudo[302490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:37 compute-0 sudo[302490]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:37 compute-0 sudo[302515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:37 compute-0 sudo[302515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:37 compute-0 sudo[302515]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:37 compute-0 sudo[302540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:50:37 compute-0 sudo[302540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:37 compute-0 ceph-mon[75677]: pgmap v2068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:37 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3557 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:37.917+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:37 compute-0 sudo[302540]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:50:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:50:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:38.051+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:50:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 19605e8f-e335-442d-88ff-bef9df433597 does not exist
Nov 24 20:50:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c4bda3d8-3049-436c-9ccd-7c79d18d4ff5 does not exist
Nov 24 20:50:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3c8d59ea-2c20-4097-902d-aff0c01d1a74 does not exist
Nov 24 20:50:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:50:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:50:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:50:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:50:38 compute-0 sudo[302596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:38 compute-0 sudo[302596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:38 compute-0 sudo[302596]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:38 compute-0 sudo[302621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:50:38 compute-0 sudo[302621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:38 compute-0 sudo[302621]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:38 compute-0 sudo[302646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:38 compute-0 sudo[302646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:38 compute-0 sudo[302646]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:38 compute-0 sudo[302671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:50:38 compute-0 sudo[302671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
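
[annotation] The cephadm command above launches ceph-volume in a one-shot container (the short-lived zealous_fermi/adoring_meninsky containers that follow) to batch-create OSDs on three pre-made logical volumes. An annotated reconstruction of the inner invocation, with the fsid and LV paths copied from the log line:

    # Hedged reconstruction of the ceph-volume call cephadm issues above.
    argv = [
        "ceph-volume", "--fsid", "05e060a3-406b-57f0-89d2-ec35f5b09305",
        "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
        "--yes", "--no-systemd",
    ]
    # --no-auto: treat each LV as an independent OSD rather than applying the
    # automatic device-grouping strategy; --no-systemd: cephadm, not systemd
    # units inside the container, will manage the resulting daemons.
    print(" ".join(argv))
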
Nov 24 20:50:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:50:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:50:38 compute-0 podman[302738]: 2025-11-24 20:50:38.82968783 +0000 UTC m=+0.070469361 container create 847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:50:38 compute-0 systemd[1]: Started libpod-conmon-847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3.scope.
Nov 24 20:50:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:38.877+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:38 compute-0 podman[302738]: 2025-11-24 20:50:38.800375408 +0000 UTC m=+0.041156979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:50:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:50:38 compute-0 podman[302738]: 2025-11-24 20:50:38.934249094 +0000 UTC m=+0.175030655 container init 847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:50:38 compute-0 podman[302738]: 2025-11-24 20:50:38.9461812 +0000 UTC m=+0.186962731 container start 847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:50:38 compute-0 podman[302738]: 2025-11-24 20:50:38.950305162 +0000 UTC m=+0.191086683 container attach 847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:50:38 compute-0 zealous_fermi[302754]: 167 167
Nov 24 20:50:38 compute-0 systemd[1]: libpod-847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3.scope: Deactivated successfully.
Nov 24 20:50:38 compute-0 podman[302738]: 2025-11-24 20:50:38.954383225 +0000 UTC m=+0.195164746 container died 847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-06abf3841a7e6b384cadfbedd009eee965041f4a891eb600423998091373a1bb-merged.mount: Deactivated successfully.
Nov 24 20:50:39 compute-0 podman[302738]: 2025-11-24 20:50:39.014558622 +0000 UTC m=+0.255340153 container remove 847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:50:39 compute-0 systemd[1]: libpod-conmon-847b1aff36b75ee839030adeea8f5c1827f364bdc0f69abeb1c7560bd95b79f3.scope: Deactivated successfully.
Nov 24 20:50:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:39.091+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:39 compute-0 podman[302777]: 2025-11-24 20:50:39.254172304 +0000 UTC m=+0.065237217 container create c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:50:39 compute-0 systemd[1]: Started libpod-conmon-c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b.scope.
Nov 24 20:50:39 compute-0 podman[302777]: 2025-11-24 20:50:39.232376597 +0000 UTC m=+0.043441520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:50:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f878023c06fb32dd0f2595a2a4f7afb3007c13e75fa0dca9b6f6154c9ba7119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f878023c06fb32dd0f2595a2a4f7afb3007c13e75fa0dca9b6f6154c9ba7119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f878023c06fb32dd0f2595a2a4f7afb3007c13e75fa0dca9b6f6154c9ba7119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f878023c06fb32dd0f2595a2a4f7afb3007c13e75fa0dca9b6f6154c9ba7119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f878023c06fb32dd0f2595a2a4f7afb3007c13e75fa0dca9b6f6154c9ba7119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:39 compute-0 podman[302777]: 2025-11-24 20:50:39.347932951 +0000 UTC m=+0.158997824 container init c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:50:39 compute-0 podman[302777]: 2025-11-24 20:50:39.362356355 +0000 UTC m=+0.173421228 container start c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:50:39 compute-0 podman[302777]: 2025-11-24 20:50:39.365556384 +0000 UTC m=+0.176621267 container attach c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:50:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:39 compute-0 ceph-mon[75677]: pgmap v2069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:39.843+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:40.044+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:40 compute-0 adoring_meninsky[302793]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:50:40 compute-0 adoring_meninsky[302793]: --> relative data size: 1.0
Nov 24 20:50:40 compute-0 adoring_meninsky[302793]: --> All data devices are unavailable
Nov 24 20:50:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:40 compute-0 systemd[1]: libpod-c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b.scope: Deactivated successfully.
Nov 24 20:50:40 compute-0 systemd[1]: libpod-c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b.scope: Consumed 1.179s CPU time.
Nov 24 20:50:40 compute-0 conmon[302793]: conmon c6cac5e4f29bbb1dfb0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b.scope/container/memory.events
Nov 24 20:50:40 compute-0 podman[302777]: 2025-11-24 20:50:40.599360738 +0000 UTC m=+1.410425611 container died c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f878023c06fb32dd0f2595a2a4f7afb3007c13e75fa0dca9b6f6154c9ba7119-merged.mount: Deactivated successfully.
Nov 24 20:50:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:50:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:50:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:50:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:50:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:50:40 compute-0 podman[302777]: 2025-11-24 20:50:40.655777503 +0000 UTC m=+1.466842376 container remove c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:50:40 compute-0 systemd[1]: libpod-conmon-c6cac5e4f29bbb1dfb0d00d4750f0d6340eaa2c090031309cf27cdbd8e5efb1b.scope: Deactivated successfully.
Nov 24 20:50:40 compute-0 sudo[302671]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:40 compute-0 sudo[302833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:40 compute-0 sudo[302833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:40 compute-0 sudo[302833]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:40 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:40 compute-0 sudo[302858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:50:40 compute-0 sudo[302858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:40 compute-0 sudo[302858]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:40.873+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:40 compute-0 sudo[302883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:40 compute-0 sudo[302883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:40 compute-0 sudo[302883]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:40 compute-0 sudo[302908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:50:40 compute-0 sudo[302908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:41.020+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.365858827 +0000 UTC m=+0.057427214 container create e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:50:41 compute-0 systemd[1]: Started libpod-conmon-e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744.scope.
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.338181629 +0000 UTC m=+0.029750066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:50:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.46201384 +0000 UTC m=+0.153582257 container init e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.473557945 +0000 UTC m=+0.165126333 container start e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.477623067 +0000 UTC m=+0.169191454 container attach e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 20:50:41 compute-0 cranky_hypatia[302990]: 167 167
Nov 24 20:50:41 compute-0 systemd[1]: libpod-e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744.scope: Deactivated successfully.
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.483815137 +0000 UTC m=+0.175383514 container died e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hypatia, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b9c4ac4fd264008e2ee53e939ebcc095dc4568b496db35af17dbfbfb532dd1-merged.mount: Deactivated successfully.
Nov 24 20:50:41 compute-0 podman[302974]: 2025-11-24 20:50:41.539234134 +0000 UTC m=+0.230802491 container remove e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:50:41 compute-0 systemd[1]: libpod-conmon-e3c0ded16613b24f33c0dcf575ae87932c0a0388bf928e0f0e719cc2f1dea744.scope: Deactivated successfully.
Nov 24 20:50:41 compute-0 podman[303013]: 2025-11-24 20:50:41.780510181 +0000 UTC m=+0.077598606 container create 12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:50:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:41.826+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:41 compute-0 systemd[1]: Started libpod-conmon-12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e.scope.
Nov 24 20:50:41 compute-0 podman[303013]: 2025-11-24 20:50:41.748154455 +0000 UTC m=+0.045242900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:50:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:41 compute-0 ceph-mon[75677]: pgmap v2070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:50:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78db2691ce79e9506b6ff19faeef7cfbf867f649f51d508c97cf20cb57ed2f76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78db2691ce79e9506b6ff19faeef7cfbf867f649f51d508c97cf20cb57ed2f76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78db2691ce79e9506b6ff19faeef7cfbf867f649f51d508c97cf20cb57ed2f76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78db2691ce79e9506b6ff19faeef7cfbf867f649f51d508c97cf20cb57ed2f76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
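[Note] The four kernel lines above are informational, not errors: each bind-mount into the new container remounts part of an XFS filesystem whose inodes carry 32-bit timestamps, so the kernel repeats the 2038 (0x7fffffff) cutoff at every remount. XFS filesystems created with the bigtime feature do not have this limit; whether it is enabled can be checked with xfs_info (a sketch; the mount point is an assumption, not taken from this log):

    # bigtime=1 means 64-bit timestamps, no 2038 limit
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'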
Nov 24 20:50:41 compute-0 podman[303013]: 2025-11-24 20:50:41.906652325 +0000 UTC m=+0.203740810 container init 12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 20:50:41 compute-0 podman[303013]: 2025-11-24 20:50:41.913619816 +0000 UTC m=+0.210708201 container start 12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:50:41 compute-0 podman[303013]: 2025-11-24 20:50:41.917008518 +0000 UTC m=+0.214097003 container attach 12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:50:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:41.971+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
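[Note] The two OSDs keep re-reporting the same stuck operations (osd.0: 14 slow ops against the vms pool, osd.1: 21 against default.rgw.log), and the monitor's SLOW_OPS health check puts the oldest at roughly 3562 seconds. A minimal way to drill into the stuck requests from this host, assuming the standalone cephadm tool is available (a sketch, not taken from this log):

    # open a shell with the cluster keyring and admin sockets mounted
    cephadm shell --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305
    # restate the health warning with per-daemon detail
    ceph health detail
    # dump currently in-flight and recently completed ops on the affected OSDs
    ceph daemon osd.0 dump_ops_in_flight
    ceph daemon osd.1 dump_historic_ops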
Nov 24 20:50:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:42 compute-0 fervent_haslett[303029]: {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:     "0": [
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:         {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "devices": [
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "/dev/loop3"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             ],
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_name": "ceph_lv0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_size": "21470642176",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "name": "ceph_lv0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "tags": {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cluster_name": "ceph",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.crush_device_class": "",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.encrypted": "0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osd_id": "0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.type": "block",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.vdo": "0"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             },
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "type": "block",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "vg_name": "ceph_vg0"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:         }
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:     ],
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:     "1": [
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:         {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "devices": [
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "/dev/loop4"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             ],
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_name": "ceph_lv1",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_size": "21470642176",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "name": "ceph_lv1",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "tags": {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cluster_name": "ceph",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.crush_device_class": "",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.encrypted": "0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osd_id": "1",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.type": "block",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.vdo": "0"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             },
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "type": "block",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "vg_name": "ceph_vg1"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:         }
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:     ],
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:     "2": [
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:         {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "devices": [
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "/dev/loop5"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             ],
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_name": "ceph_lv2",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_size": "21470642176",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "name": "ceph_lv2",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "tags": {
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.cluster_name": "ceph",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.crush_device_class": "",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.encrypted": "0",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osd_id": "2",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.type": "block",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:                 "ceph.vdo": "0"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             },
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "type": "block",
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:             "vg_name": "ceph_vg2"
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:         }
Nov 24 20:50:42 compute-0 fervent_haslett[303029]:     ]
Nov 24 20:50:42 compute-0 fervent_haslett[303029]: }
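[Note] The JSON block emitted by the fervent_haslett container matches the shape of ceph-volume lvm list --format json output: one key per OSD id, each entry describing the backing logical volume and its ceph.* tags. The devices arrays show this cluster's OSDs sit on loop devices (loop3-loop5), i.e. a file-backed test layout. To reduce it to an OSD-to-device table, a sketch assuming the JSON was saved to lvm-list.json and jq is installed (neither appears in the log):

    jq -r 'to_entries[] | "osd.\(.key)  \(.value[0].lv_path)  \(.value[0].devices[0])"' lvm-list.json
    # osd.0  /dev/ceph_vg0/ceph_lv0  /dev/loop3
    # osd.1  /dev/ceph_vg1/ceph_lv1  /dev/loop4
    # osd.2  /dev/ceph_vg2/ceph_lv2  /dev/loop5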
Nov 24 20:50:42 compute-0 systemd[1]: libpod-12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e.scope: Deactivated successfully.
Nov 24 20:50:42 compute-0 podman[303013]: 2025-11-24 20:50:42.732353364 +0000 UTC m=+1.029441779 container died 12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-78db2691ce79e9506b6ff19faeef7cfbf867f649f51d508c97cf20cb57ed2f76-merged.mount: Deactivated successfully.
Nov 24 20:50:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:42.783+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:42 compute-0 podman[303013]: 2025-11-24 20:50:42.817324852 +0000 UTC m=+1.114413277 container remove 12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:50:42 compute-0 systemd[1]: libpod-conmon-12386878a1dc1e93e2a473f3713e59ac9a352f60482f8056bbff696d49f9ed2e.scope: Deactivated successfully.
Nov 24 20:50:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:42 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:42 compute-0 sudo[302908]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:42 compute-0 sudo[303052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:42 compute-0 sudo[303052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:42 compute-0 sudo[303052]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:43 compute-0 sudo[303077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:50:43 compute-0 sudo[303077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:43 compute-0 sudo[303077]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:43.017+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:43 compute-0 sudo[303102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:43 compute-0 sudo[303102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:43 compute-0 sudo[303102]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:43 compute-0 sudo[303127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:50:43 compute-0 sudo[303127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
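[Note] This sudo line shows the mechanism behind the one-shot containers in this log: the cephadm mgr module ships a content-addressed copy of the cephadm script into /var/lib/ceph/<fsid>/ and invokes it with an explicit --image and --timeout, and that script then runs ceph-volume inside a fresh podman container. The hand-run equivalent is roughly the following (a sketch, assuming the cephadm package is installed on the host):

    cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json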
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.627869326 +0000 UTC m=+0.055657765 container create 6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 20:50:43 compute-0 systemd[1]: Started libpod-conmon-6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160.scope.
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.604555228 +0000 UTC m=+0.032343487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:50:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.736711986 +0000 UTC m=+0.164500225 container init 6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.748201111 +0000 UTC m=+0.175989370 container start 6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.752574631 +0000 UTC m=+0.180362880 container attach 6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 20:50:43 compute-0 charming_kalam[303211]: 167 167
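[Note] charming_kalam lives for a fraction of a second and prints only "167 167", the uid and gid of the ceph user in Red Hat-based Ceph images. This looks like cephadm's ownership probe, which stats a path inside the image to learn what uid:gid to apply to host directories; a speculative reconstruction (the probed path is an assumption, not visible in the log):

    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph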
Nov 24 20:50:43 compute-0 systemd[1]: libpod-6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160.scope: Deactivated successfully.
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.7562171 +0000 UTC m=+0.184005319 container died 6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8c1c889baf57daa4ec781052f8bf16ae94dc5f9e1c5db076e2863bf8e4f907-merged.mount: Deactivated successfully.
Nov 24 20:50:43 compute-0 podman[303195]: 2025-11-24 20:50:43.794563121 +0000 UTC m=+0.222351340 container remove 6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:50:43 compute-0 systemd[1]: libpod-conmon-6af55dac3529cec37acea0e52668d08cd1c2f66711dfe72186b0d6658de52160.scope: Deactivated successfully.
Nov 24 20:50:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:43.814+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:43 compute-0 ceph-mon[75677]: pgmap v2071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:43 compute-0 podman[303234]: 2025-11-24 20:50:43.986741253 +0000 UTC m=+0.045453866 container create 2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 20:50:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:44.045+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:44 compute-0 systemd[1]: Started libpod-conmon-2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314.scope.
Nov 24 20:50:44 compute-0 podman[303234]: 2025-11-24 20:50:43.967522056 +0000 UTC m=+0.026234629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:50:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee38d01f6b0dc00a26ec97bd3dc2167f532a2fa7ad8154c25d336aaaa30cfa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee38d01f6b0dc00a26ec97bd3dc2167f532a2fa7ad8154c25d336aaaa30cfa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee38d01f6b0dc00a26ec97bd3dc2167f532a2fa7ad8154c25d336aaaa30cfa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee38d01f6b0dc00a26ec97bd3dc2167f532a2fa7ad8154c25d336aaaa30cfa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:50:44 compute-0 podman[303234]: 2025-11-24 20:50:44.113401281 +0000 UTC m=+0.172113904 container init 2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:50:44 compute-0 podman[303234]: 2025-11-24 20:50:44.12248487 +0000 UTC m=+0.181197443 container start 2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swartz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 20:50:44 compute-0 podman[303234]: 2025-11-24 20:50:44.12613685 +0000 UTC m=+0.184849523 container attach 2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:50:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:44.777+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:45.039+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:45 compute-0 exciting_swartz[303252]: {
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "osd_id": 2,
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "type": "bluestore"
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:     },
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "osd_id": 1,
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "type": "bluestore"
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:     },
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "osd_id": 0,
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:         "type": "bluestore"
Nov 24 20:50:45 compute-0 exciting_swartz[303252]:     }
Nov 24 20:50:45 compute-0 exciting_swartz[303252]: }
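[Note] The ceph-volume raw list output from exciting_swartz reports the same three OSDs, this time keyed by osd_uuid (the per-OSD fsid) and resolved to device-mapper paths rather than LV paths. The osd_uuid values match the ceph.osd_fsid tags in the lvm list output earlier, which makes for a quick consistency check between the two inventories; a sketch, again assuming the JSON was captured to raw-list.json with jq available:

    jq -r 'to_entries[] | "osd.\(.value.osd_id)  \(.value.osd_uuid)  \(.value.device)"' raw-list.json | sort
    # osd.0  ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e  /dev/mapper/ceph_vg0-ceph_lv0
    # osd.1  722822cb-bac5-4aa4-891b-811a5e4def90  /dev/mapper/ceph_vg1-ceph_lv1
    # osd.2  720ccdfc-a888-49fd-ae51-8ab3d2ba9302  /dev/mapper/ceph_vg2-ceph_lv2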
Nov 24 20:50:45 compute-0 systemd[1]: libpod-2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314.scope: Deactivated successfully.
Nov 24 20:50:45 compute-0 systemd[1]: libpod-2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314.scope: Consumed 1.140s CPU time.
Nov 24 20:50:45 compute-0 podman[303286]: 2025-11-24 20:50:45.31120971 +0000 UTC m=+0.034017092 container died 2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 20:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ee38d01f6b0dc00a26ec97bd3dc2167f532a2fa7ad8154c25d336aaaa30cfa9-merged.mount: Deactivated successfully.
Nov 24 20:50:45 compute-0 podman[303286]: 2025-11-24 20:50:45.376137778 +0000 UTC m=+0.098945090 container remove 2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:50:45 compute-0 systemd[1]: libpod-conmon-2a65fa29fdcaa3b5541584ca4e060c00e64ca05808300237f3a744008a5ac314.scope: Deactivated successfully.
Nov 24 20:50:45 compute-0 sudo[303127]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:50:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:50:45 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bd8020bf-4dcf-46c9-b63e-1b946779eac5 does not exist
Nov 24 20:50:45 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6a49fdc8-eca3-4fa5-a194-b100506702fa does not exist
Nov 24 20:50:45 compute-0 sudo[303301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:50:45 compute-0 sudo[303301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:45 compute-0 sudo[303301]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:45 compute-0 sudo[303326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:50:45 compute-0 sudo[303326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:50:45 compute-0 sudo[303326]: pam_unix(sudo:session): session closed for user root
Nov 24 20:50:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:45.821+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:46.078+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:46 compute-0 ceph-mon[75677]: pgmap v2072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:46 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:50:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:46.871+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:47.045+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3567 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:47.894+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:48.033+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:48 compute-0 ceph-mon[75677]: pgmap v2073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:48 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3567 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:48.878+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:49.003+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:49.921+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:50.032+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:50 compute-0 ceph-mon[75677]: pgmap v2074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:50.942+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:51.049+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:51 compute-0 podman[303351]: 2025-11-24 20:50:51.840290022 +0000 UTC m=+0.062791981 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
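[Note] Amid the Ceph noise, this line is an unrelated periodic podman health check for the ovn_metadata_agent container, reporting health_status=healthy with a failing streak of 0; the embedded config_data shows the test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. The same check can be triggered on demand (a sketch):

    # run the container's configured healthcheck once; exit 0 means healthy
    podman healthcheck run ovn_metadata_agent && echo healthy
    # show the configured test command and interval
    podman inspect --format '{{json .Config.Healthcheck}}' ovn_metadata_agent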
Nov 24 20:50:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:51.973+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:52.027+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:52 compute-0 ceph-mon[75677]: pgmap v2075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:52.952+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:53.051+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:53 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:53.931+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:54.093+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:50:54 compute-0 ceph-mon[75677]: pgmap v2076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:54 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:54.975+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:55.090+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:55 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:55.975+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:56.117+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:56 compute-0 ceph-mon[75677]: pgmap v2077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:56 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:50:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:57.002+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:57.123+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:57 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:57 compute-0 ceph-mon[75677]: pgmap v2078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:57 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:50:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:58.049+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:58.090+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:58 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:50:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:50:59.047+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:50:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:50:59.124+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:50:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:59 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:50:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:50:59 compute-0 ceph-mon[75677]: pgmap v2079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:00.084+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:00.097+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:00 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:01.058+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:01.107+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:01 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:01 compute-0 ceph-mon[75677]: pgmap v2080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:01 compute-0 podman[303370]: 2025-11-24 20:51:01.845760354 +0000 UTC m=+0.071098727 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 20:51:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3582 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:02.044+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:02.058+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:02 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:02 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3582 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:03.068+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:03.108+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:03 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:03 compute-0 ceph-mon[75677]: pgmap v2081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:03 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:03.740 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:71:3d 10.100.0.2 2001:db8::f816:3eff:feba:713d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feba:713d/64', 'neutron:device_id': 'ovnmeta-7726002e-ceb4-4ee9-a16e-fe44c5654017', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7726002e-ceb4-4ee9-a16e-fe44c5654017', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b571d4e6-59be-45e8-bff9-0ba5e7611eed, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=21c48ccd-3271-47d2-ae2b-1b1faf7d6850) old=Port_Binding(mac=['fa:16:3e:ba:71:3d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7726002e-ceb4-4ee9-a16e-fe44c5654017', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7726002e-ceb4-4ee9-a16e-fe44c5654017', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68888f16e5b04b808865a338fb48bf2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:51:03 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:03.741 165944 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 21c48ccd-3271-47d2-ae2b-1b1faf7d6850 in datapath 7726002e-ceb4-4ee9-a16e-fe44c5654017 updated
Nov 24 20:51:03 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:03.742 165944 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7726002e-ceb4-4ee9-a16e-fe44c5654017, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 20:51:03 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:03.744 293511 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd368d3-ec73-4c39-bb70-06bd40a7fbf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 20:51:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:04.030+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:04.089+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:04 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:05.078+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:05.083+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:05 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:05 compute-0 ceph-mon[75677]: pgmap v2082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:06.078+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:06.100+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:06 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3587 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:06 compute-0 podman[303390]: 2025-11-24 20:51:06.904739762 +0000 UTC m=+0.137208638 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:51:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:07.102+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:07.128+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:07 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:07 compute-0 ceph-mon[75677]: pgmap v2083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:07 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3587 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:08.112+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:08.139+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:08 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:09.083+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:09.093+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:09.406 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:51:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:09.406 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:51:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:09.407 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:51:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:09 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:09 compute-0 ceph-mon[75677]: pgmap v2084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:10.086+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:10.124+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:10 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:11.076+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:11.094+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:11 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:11 compute-0 ceph-mon[75677]: pgmap v2085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:12.044+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:12.047+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:12 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:12 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:12.999+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:13.084+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:13 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:13 compute-0 ceph-mon[75677]: pgmap v2086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:14.024+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:14.090+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:14 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:15.059+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:15.112+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:15 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:15 compute-0 ceph-mon[75677]: pgmap v2087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:16.079+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:16.147+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:51:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957734968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:51:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:51:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957734968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
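(The two audited mon commands above — `df` and `osd pool get-quota` from entity client.openstack — are the periodic capacity polls OpenStack makes against the cluster. The same calls can be reproduced by hand with the ceph CLI; a sketch:)

    import json
    import subprocess

    def mon_command(args: list) -> dict:
        # Equivalent to the dispatched mon_command with "format":"json".
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    df = mon_command(["df"])                                # {"prefix":"df"}
    quota = mon_command(["osd", "pool", "get-quota", "volumes"])
    print(df["stats"]["total_bytes"], quota)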
Nov 24 20:51:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:16 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2957734968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:51:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2957734968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:51:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:17.037+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:17.100+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:17.218 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:51:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:17.220 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
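(The agent's "Delaying updating chassis table for 5 seconds" message describes a delay-and-coalesce pattern: repeated SB_Global nb_cfg bumps within the window collapse into one deferred write. The agent's real implementation lives in neutron/agent/ovn/metadata/agent.py; the following is only a generic illustration of the pattern, not neutron's code.)

    import threading

    class DelayedUpdater:
        def __init__(self, delay: float, update):
            self._delay = delay
            self._update = update        # callable performing the real update
            self._timer = None
            self._lock = threading.Lock()

        def trigger(self):
            with self._lock:
                if self._timer is None:  # only arm one pending update
                    self._timer = threading.Timer(self._delay, self._fire)
                    self._timer.start()

        def _fire(self):
            with self._lock:
                self._timer = None
            self._update()

    updater = DelayedUpdater(5.0, lambda: print("chassis row updated"))
    updater.trigger()  # e.g. on an SB_Global nb_cfg bump
    updater.trigger()  # arrives within the window: coalesced, no second timer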
Nov 24 20:51:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:17 compute-0 ceph-mon[75677]: pgmap v2088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:17 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:18.015+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:18.074+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:18 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:18.990+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:19.028+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:19 compute-0 ceph-mon[75677]: pgmap v2089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:19 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:20.012+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:20.076+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:20 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3602 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:20 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:21.019+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:21.038+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:21.973+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:22.110+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:22 compute-0 ceph-mon[75677]: pgmap v2090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:22 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3602 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:22 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:22 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:51:22.222 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:51:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:22 compute-0 podman[303417]: 2025-11-24 20:51:22.857638439 +0000 UTC m=+0.074831260 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
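(The health_status=healthy event above comes from podman's periodic healthcheck, which runs the mounted /openstack/healthcheck script inside the container. A sketch for running the same check on demand; it assumes podman is installed and the container name matches the log:)

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    # `podman healthcheck run` exits 0 when the configured test passes,
    # matching the health_status=healthy / health_failing_streak=0 fields.
    print("healthy" if result.returncode == 0 else "unhealthy")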
Nov 24 20:51:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:22.986+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:23.113+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:23 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:24.021+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:24.119+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:24 compute-0 ceph-mon[75677]: pgmap v2091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:24 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:51:24
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.mgr', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log']
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
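(The balancer pass above ran in upmap mode with a 5% max-misplaced gate and prepared 0 of up to 10 candidate changes — the PG distribution already needed no moves. A sketch, under the same CLI assumption as earlier, for checking the balancer state behind these lines:)

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status["active"], status["mode"])   # expect: True upmap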
Nov 24 20:51:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:25.036+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:25 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:25.167+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:26.021+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:26.155+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:26 compute-0 ceph-mon[75677]: pgmap v2092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:26 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:27.058+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:27.118+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3607 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:27 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:28.088+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:28.091+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:28 compute-0 ceph-mon[75677]: pgmap v2093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:28 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:28 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3607 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:29.050+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:29.110+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:29 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:30.078+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:30.093+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:30 compute-0 ceph-mon[75677]: pgmap v2094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:30 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:31.103+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:31.107+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:31 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:32.066+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:32.131+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:32 compute-0 ceph-mon[75677]: pgmap v2095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:32 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:32 compute-0 podman[303436]: 2025-11-24 20:51:32.852106321 +0000 UTC m=+0.081230885 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:51:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:33.018+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:33.131+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:33 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:34.033+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:34.107+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:34 compute-0 ceph-mon[75677]: pgmap v2096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:34 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:35.019+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:35.114+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:51:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
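(The pg_autoscaler lines above can be reproduced arithmetically: each pool's raw pg target is capacity_ratio x bias x 300, and the 300 multiplier is presumably mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs — inferred from the logged numbers, not taken from the autoscaler's source. The "quantized to N (current N)" suffix shows the raw target rounded to a power of two, with pg_num left at its current value when no change is warranted.)

    def raw_pg_target(capacity_ratio: float, bias: float,
                      osds: int = 3, target_pg_per_osd: int = 100) -> float:
        # Raw (unquantized) PG target as computed in the log lines above.
        return capacity_ratio * bias * osds * target_pg_per_osd

    # 'vms' pool line:    0.0008637525843263658 * 1.0 * 300
    print(raw_pg_target(0.0008637525843263658, 1.0))   # ~0.259125775297...
    # 'default.rgw.meta': 1.2718141564107572e-07 * 4.0 * 300
    print(raw_pg_target(1.2718141564107572e-07, 4.0))  # ~0.000152617698769...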
Nov 24 20:51:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:35 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:36.049+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:36.113+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:36 compute-0 ceph-mon[75677]: pgmap v2097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:36 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:37.072+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:37.130+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:37 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:37 compute-0 ceph-mon[75677]: pgmap v2098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:37 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:37 compute-0 podman[303456]: 2025-11-24 20:51:37.902936716 +0000 UTC m=+0.129065845 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 20:51:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:38.100+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:38.127+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:38 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:39.125+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:39.159+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:39 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:39 compute-0 ceph-mon[75677]: pgmap v2099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:40.081+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:40.173+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:40 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:51:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:51:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:51:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:51:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:51:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:41.045+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:41.137+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:41 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:41 compute-0 ceph-mon[75677]: pgmap v2100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3622 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:42.073+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:42.159+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:42 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:42 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3622 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:43.094+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:43.209+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:43 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:43 compute-0 ceph-mon[75677]: pgmap v2101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:44.090+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:44.178+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:44 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:45.122+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:45.134+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:45 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:45 compute-0 ceph-mon[75677]: pgmap v2102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:45 compute-0 sudo[303482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:45 compute-0 sudo[303482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:45 compute-0 sudo[303482]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:45 compute-0 sudo[303507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:51:45 compute-0 sudo[303507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:45 compute-0 sudo[303507]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:45 compute-0 sudo[303532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:45 compute-0 sudo[303532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:45 compute-0 sudo[303532]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:46 compute-0 sudo[303557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:51:46 compute-0 sudo[303557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:46.103+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:46.113+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:46 compute-0 sudo[303557]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:46 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:51:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 85c8dab6-e97a-49e3-a05c-bb7b2cb1c4f1 does not exist
Nov 24 20:51:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a6fd0642-b4fc-429d-830a-e46c785d45b7 does not exist
Nov 24 20:51:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a684f9e4-3550-400e-945f-6c5d3da58e39 does not exist
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:51:46 compute-0 sudo[303613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:46 compute-0 sudo[303613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:46 compute-0 sudo[303613]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3627 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:46 compute-0 sudo[303638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:51:46 compute-0 sudo[303638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:46 compute-0 sudo[303638]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:47 compute-0 sudo[303663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:47 compute-0 sudo[303663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:47 compute-0 sudo[303663]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:47.137+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:47.142+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:47 compute-0 sudo[303688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:51:47 compute-0 sudo[303688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.619493909 +0000 UTC m=+0.081162553 container create 017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.576891063 +0000 UTC m=+0.038559717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:51:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:47 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:47 compute-0 ceph-mon[75677]: pgmap v2103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:51:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:51:47 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3627 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:47 compute-0 systemd[1]: Started libpod-conmon-017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289.scope.
Nov 24 20:51:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.745448878 +0000 UTC m=+0.207117532 container init 017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.757932 +0000 UTC m=+0.219600634 container start 017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.762217307 +0000 UTC m=+0.223886001 container attach 017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 20:51:47 compute-0 xenodochial_visvesvaraya[303773]: 167 167
Nov 24 20:51:47 compute-0 systemd[1]: libpod-017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289.scope: Deactivated successfully.
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.767427 +0000 UTC m=+0.229095644 container died 017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:51:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a164ad8ba955d16ae7d9a53ec2e89a94d0a105d61dd02407dd274e1b13e715db-merged.mount: Deactivated successfully.
Nov 24 20:51:47 compute-0 podman[303756]: 2025-11-24 20:51:47.820157364 +0000 UTC m=+0.281825968 container remove 017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:51:47 compute-0 systemd[1]: libpod-conmon-017f6d6ecdb12353d751f2f76c13ab09aa1c3818d59ff77370a420c9ee8bf289.scope: Deactivated successfully.
Nov 24 20:51:48 compute-0 podman[303796]: 2025-11-24 20:51:48.074302003 +0000 UTC m=+0.066810851 container create a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_meitner, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:51:48 compute-0 systemd[1]: Started libpod-conmon-a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47.scope.
Nov 24 20:51:48 compute-0 podman[303796]: 2025-11-24 20:51:48.047197071 +0000 UTC m=+0.039705979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:51:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:48.138+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a0d5a3946fd0d7c9ba8a35216741f78d9500867d63d8188392b85e8eba789d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a0d5a3946fd0d7c9ba8a35216741f78d9500867d63d8188392b85e8eba789d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a0d5a3946fd0d7c9ba8a35216741f78d9500867d63d8188392b85e8eba789d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a0d5a3946fd0d7c9ba8a35216741f78d9500867d63d8188392b85e8eba789d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a0d5a3946fd0d7c9ba8a35216741f78d9500867d63d8188392b85e8eba789d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:48 compute-0 podman[303796]: 2025-11-24 20:51:48.167098884 +0000 UTC m=+0.159607742 container init a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_meitner, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:51:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:48.171+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:48 compute-0 podman[303796]: 2025-11-24 20:51:48.181867289 +0000 UTC m=+0.174376137 container start a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_meitner, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:51:48 compute-0 podman[303796]: 2025-11-24 20:51:48.185820987 +0000 UTC m=+0.178329865 container attach a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:51:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:48 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:49.144+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:49.165+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:49 compute-0 optimistic_meitner[303813]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:51:49 compute-0 optimistic_meitner[303813]: --> relative data size: 1.0
Nov 24 20:51:49 compute-0 optimistic_meitner[303813]: --> All data devices are unavailable
Nov 24 20:51:49 compute-0 systemd[1]: libpod-a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47.scope: Deactivated successfully.
Nov 24 20:51:49 compute-0 systemd[1]: libpod-a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47.scope: Consumed 1.115s CPU time.
Nov 24 20:51:49 compute-0 podman[303796]: 2025-11-24 20:51:49.330176882 +0000 UTC m=+1.322685740 container died a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a0d5a3946fd0d7c9ba8a35216741f78d9500867d63d8188392b85e8eba789d-merged.mount: Deactivated successfully.
Nov 24 20:51:49 compute-0 podman[303796]: 2025-11-24 20:51:49.411884969 +0000 UTC m=+1.404393827 container remove a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:51:49 compute-0 systemd[1]: libpod-conmon-a1a8aeb9d8d9ddfd417697d65565e0541eb467d813bbd4e9bd3b420cacc4ee47.scope: Deactivated successfully.
Nov 24 20:51:49 compute-0 sudo[303688]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:49 compute-0 sudo[303858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:49 compute-0 sudo[303858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:49 compute-0 sudo[303858]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:49 compute-0 sudo[303883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:51:49 compute-0 sudo[303883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:49 compute-0 sudo[303883]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:49 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:49 compute-0 ceph-mon[75677]: pgmap v2104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:49 compute-0 sudo[303908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:49 compute-0 sudo[303908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:49 compute-0 sudo[303908]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:49 compute-0 sudo[303933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:51:49 compute-0 sudo[303933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:50.150+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:50.176+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.359501787 +0000 UTC m=+0.073374580 container create 5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:51:50 compute-0 sshd-session[303820]: Received disconnect from 47.236.3.13 port 39708:11: Bye Bye [preauth]
Nov 24 20:51:50 compute-0 sshd-session[303820]: Disconnected from authenticating user root 47.236.3.13 port 39708 [preauth]
Nov 24 20:51:50 compute-0 systemd[1]: Started libpod-conmon-5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5.scope.
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.33111459 +0000 UTC m=+0.044987423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:51:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.456243067 +0000 UTC m=+0.170115910 container init 5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.465827909 +0000 UTC m=+0.179700702 container start 5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.469310334 +0000 UTC m=+0.183183117 container attach 5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:51:50 compute-0 priceless_faraday[304015]: 167 167
Nov 24 20:51:50 compute-0 systemd[1]: libpod-5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5.scope: Deactivated successfully.
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.473967432 +0000 UTC m=+0.187840235 container died 5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-26fda91af3a40e02f6db43b01180b96245024eaeadb60bea0b7ee5676000ab21-merged.mount: Deactivated successfully.
Nov 24 20:51:50 compute-0 podman[303998]: 2025-11-24 20:51:50.525567544 +0000 UTC m=+0.239440327 container remove 5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:51:50 compute-0 systemd[1]: libpod-conmon-5d3715fbef61cf7f52098d72defe6f42da1d5b18ffe7a5e5eb0c738ee72b4ae5.scope: Deactivated successfully.
Nov 24 20:51:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:50 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:50 compute-0 podman[304041]: 2025-11-24 20:51:50.752086217 +0000 UTC m=+0.066705218 container create 5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 20:51:50 compute-0 systemd[1]: Started libpod-conmon-5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7.scope.
Nov 24 20:51:50 compute-0 podman[304041]: 2025-11-24 20:51:50.731069452 +0000 UTC m=+0.045688423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:51:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8727b3e006ffd1604ed796777854e2a27d618a996c6d1acebbc8c624b2b74b1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8727b3e006ffd1604ed796777854e2a27d618a996c6d1acebbc8c624b2b74b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8727b3e006ffd1604ed796777854e2a27d618a996c6d1acebbc8c624b2b74b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8727b3e006ffd1604ed796777854e2a27d618a996c6d1acebbc8c624b2b74b1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:50 compute-0 podman[304041]: 2025-11-24 20:51:50.883891596 +0000 UTC m=+0.198510587 container init 5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:51:50 compute-0 podman[304041]: 2025-11-24 20:51:50.895832773 +0000 UTC m=+0.210451744 container start 5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:51:50 compute-0 podman[304041]: 2025-11-24 20:51:50.900077909 +0000 UTC m=+0.214696950 container attach 5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:51:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:51.160+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:51.165+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]: {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:     "0": [
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:         {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "devices": [
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "/dev/loop3"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             ],
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_name": "ceph_lv0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_size": "21470642176",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "name": "ceph_lv0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "tags": {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cluster_name": "ceph",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.crush_device_class": "",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.encrypted": "0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osd_id": "0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.type": "block",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.vdo": "0"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             },
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "type": "block",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "vg_name": "ceph_vg0"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:         }
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:     ],
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:     "1": [
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:         {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "devices": [
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "/dev/loop4"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             ],
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_name": "ceph_lv1",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_size": "21470642176",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "name": "ceph_lv1",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "tags": {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cluster_name": "ceph",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.crush_device_class": "",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.encrypted": "0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osd_id": "1",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.type": "block",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.vdo": "0"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             },
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "type": "block",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "vg_name": "ceph_vg1"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:         }
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:     ],
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:     "2": [
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:         {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "devices": [
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "/dev/loop5"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             ],
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_name": "ceph_lv2",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_size": "21470642176",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "name": "ceph_lv2",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "tags": {
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.cluster_name": "ceph",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.crush_device_class": "",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.encrypted": "0",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osd_id": "2",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.type": "block",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:                 "ceph.vdo": "0"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             },
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "type": "block",
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:             "vg_name": "ceph_vg2"
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:         }
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]:     ]
Nov 24 20:51:51 compute-0 nifty_goldstine[304057]: }
Nov 24 20:51:51 compute-0 systemd[1]: libpod-5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7.scope: Deactivated successfully.
Nov 24 20:51:51 compute-0 podman[304041]: 2025-11-24 20:51:51.638136699 +0000 UTC m=+0.952755700 container died 5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8727b3e006ffd1604ed796777854e2a27d618a996c6d1acebbc8c624b2b74b1d-merged.mount: Deactivated successfully.
Nov 24 20:51:51 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:51 compute-0 ceph-mon[75677]: pgmap v2105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:51 compute-0 podman[304041]: 2025-11-24 20:51:51.709789821 +0000 UTC m=+1.024408822 container remove 5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:51:51 compute-0 systemd[1]: libpod-conmon-5a1d72a21ec2828dc5696057789235e1ed4ed7f7f97eb8a5a0506a28e1a179c7.scope: Deactivated successfully.
Nov 24 20:51:51 compute-0 sudo[303933]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:51 compute-0 sudo[304080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:51 compute-0 sudo[304080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:51 compute-0 sudo[304080]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 35 slow ops, oldest one blocked for 3632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:51 compute-0 sudo[304105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:51:51 compute-0 sudo[304105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:51 compute-0 sudo[304105]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:52 compute-0 sudo[304130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:52 compute-0 sudo[304130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:52 compute-0 sudo[304130]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:52.136+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:52 compute-0 sudo[304155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:51:52 compute-0 sudo[304155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:52.212+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.606517606 +0000 UTC m=+0.056856848 container create e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:51:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:52 compute-0 systemd[1]: Started libpod-conmon-e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16.scope.
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.579496487 +0000 UTC m=+0.029835729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:51:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.700799428 +0000 UTC m=+0.151138730 container init e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.714472142 +0000 UTC m=+0.164811384 container start e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:51:52 compute-0 ceph-mon[75677]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 14 ])
Nov 24 20:51:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:52 compute-0 ceph-mon[75677]: Health check update: 35 slow ops, oldest one blocked for 3632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.720342573 +0000 UTC m=+0.170681815 container attach e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 20:51:52 compute-0 modest_burnell[304237]: 167 167
Nov 24 20:51:52 compute-0 systemd[1]: libpod-e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16.scope: Deactivated successfully.
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.72280128 +0000 UTC m=+0.173140552 container died e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d09fbf9eaa2e7ad9059ba98c49b43e93683559a6caac3126f908303ee977fd5-merged.mount: Deactivated successfully.
Nov 24 20:51:52 compute-0 podman[304221]: 2025-11-24 20:51:52.773118708 +0000 UTC m=+0.223457930 container remove e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:51:52 compute-0 systemd[1]: libpod-conmon-e46ca7466ab30f97913df358b3e3e58a33c39e6c4019ecf3132451108d457f16.scope: Deactivated successfully.
Nov 24 20:51:52 compute-0 podman[304260]: 2025-11-24 20:51:52.963281585 +0000 UTC m=+0.044871360 container create 3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:51:53 compute-0 systemd[1]: Started libpod-conmon-3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb.scope.
Nov 24 20:51:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89a13e6015a265d6bfa9ec5fe81070fc9f288beefa1b86d7037d652efe3bd94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89a13e6015a265d6bfa9ec5fe81070fc9f288beefa1b86d7037d652efe3bd94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89a13e6015a265d6bfa9ec5fe81070fc9f288beefa1b86d7037d652efe3bd94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89a13e6015a265d6bfa9ec5fe81070fc9f288beefa1b86d7037d652efe3bd94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:51:53 compute-0 podman[304260]: 2025-11-24 20:51:52.94407897 +0000 UTC m=+0.025668725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:51:53 compute-0 podman[304260]: 2025-11-24 20:51:53.04710714 +0000 UTC m=+0.128696925 container init 3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:51:53 compute-0 podman[304260]: 2025-11-24 20:51:53.053706771 +0000 UTC m=+0.135296506 container start 3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 24 20:51:53 compute-0 podman[304260]: 2025-11-24 20:51:53.061865455 +0000 UTC m=+0.143455240 container attach 3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 20:51:53 compute-0 podman[304274]: 2025-11-24 20:51:53.082035267 +0000 UTC m=+0.071249063 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 20:51:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:53.129+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:53.196+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:53 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:53 compute-0 ceph-mon[75677]: pgmap v2106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:54.132+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]: {
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "osd_id": 2,
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "type": "bluestore"
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:     },
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "osd_id": 1,
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "type": "bluestore"
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:     },
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "osd_id": 0,
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:         "type": "bluestore"
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]:     }
Nov 24 20:51:54 compute-0 cranky_brahmagupta[304278]: }
Nov 24 20:51:54 compute-0 systemd[1]: libpod-3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb.scope: Deactivated successfully.
Nov 24 20:51:54 compute-0 podman[304260]: 2025-11-24 20:51:54.18907924 +0000 UTC m=+1.270668985 container died 3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:51:54 compute-0 systemd[1]: libpod-3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb.scope: Consumed 1.143s CPU time.
Nov 24 20:51:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:54.215+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e89a13e6015a265d6bfa9ec5fe81070fc9f288beefa1b86d7037d652efe3bd94-merged.mount: Deactivated successfully.
Nov 24 20:51:54 compute-0 podman[304260]: 2025-11-24 20:51:54.267169999 +0000 UTC m=+1.348759744 container remove 3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:51:54 compute-0 systemd[1]: libpod-conmon-3e8b110c442fbecedefdadf19c7f99997ca62c72cab67265f6e62369a9ec81bb.scope: Deactivated successfully.
Nov 24 20:51:54 compute-0 sudo[304155]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:51:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:51:54 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:51:54 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7655008b-6df4-4568-8775-e83dc1e71b07 does not exist
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5676899a-3237-4450-b78b-be16cb06a9e8 does not exist
Nov 24 20:51:54 compute-0 sudo[304344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:51:54 compute-0 sudo[304344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:51:54 compute-0 sudo[304344]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:54 compute-0 sudo[304369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:51:54 compute-0 sudo[304369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:51:54 compute-0 sudo[304369]: pam_unix(sudo:session): session closed for user root
Nov 24 20:51:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:54 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:51:54 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:51:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:55.106+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:55.228+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:55 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:55 compute-0 ceph-mon[75677]: pgmap v2107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:56.077+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:56.258+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:56 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 3637 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:51:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:57.060+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:57.273+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:57 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:57 compute-0 ceph-mon[75677]: pgmap v2108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:57 compute-0 ceph-mon[75677]: Health check update: 25 slow ops, oldest one blocked for 3637 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:51:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:58.013+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:58.225+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:58 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:59.019+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:51:59.261+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:51:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:59 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:51:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:51:59 compute-0 ceph-mon[75677]: pgmap v2109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:51:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:51:59.981+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:51:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:00.267+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:00 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:00.952+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:01.229+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:01 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:01 compute-0 ceph-mon[75677]: pgmap v2110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 3642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:01.941+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:02.271+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:02 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:02 compute-0 ceph-mon[75677]: Health check update: 25 slow ops, oldest one blocked for 3642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:02.975+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:03.259+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:03 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:03 compute-0 ceph-mon[75677]: pgmap v2111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:03 compute-0 podman[304394]: 2025-11-24 20:52:03.875831036 +0000 UTC m=+0.090745216 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 20:52:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:04.002+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:04.272+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:04 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:05.038+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:05.290+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:05 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:05 compute-0 ceph-mon[75677]: pgmap v2112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:06.049+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:06.247+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:06 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 3647 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:07.052+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:07.205+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:07 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:07 compute-0 ceph-mon[75677]: pgmap v2113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:07 compute-0 ceph-mon[75677]: Health check update: 25 slow ops, oldest one blocked for 3647 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:08.018+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:08.177+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:08 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:08 compute-0 podman[304414]: 2025-11-24 20:52:08.942561666 +0000 UTC m=+0.164872555 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 20:52:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:09.025+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:09.200+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:52:09.407 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:52:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:52:09.407 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:52:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:52:09.408 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:52:09 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:09 compute-0 ceph-mon[75677]: pgmap v2114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:09.992+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:10.187+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:10 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:10.961+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:11.204+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:11 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:11 compute-0 ceph-mon[75677]: pgmap v2115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 3652 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:11.917+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:12.189+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:12 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:12 compute-0 ceph-mon[75677]: Health check update: 25 slow ops, oldest one blocked for 3652 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:12.960+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:13.155+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:13 compute-0 ceph-mon[75677]: pgmap v2116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:13 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:13.916+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:14.159+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:14 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:14.946+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:15.149+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:15.920+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:15 compute-0 ceph-mon[75677]: pgmap v2117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:15 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:16.116+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:52:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126354883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:52:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:52:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126354883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:52:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:16.936+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 25 slow ops, oldest one blocked for 3657 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:16 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/126354883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:52:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/126354883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:52:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:17.123+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:52:17.343 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:52:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:52:17.345 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:52:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:17.925+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:17 compute-0 ceph-mon[75677]: pgmap v2118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:17 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:17 compute-0 ceph-mon[75677]: Health check update: 25 slow ops, oldest one blocked for 3657 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:18.115+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:18.970+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:18 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:19.147+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:19.934+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:19 compute-0 ceph-mon[75677]: pgmap v2119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:19 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:20.140+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:20.905+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:20 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:21.174+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
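_set_new_cache_sizes is the monitor's periodic memory autotuning: it splits its cache target between incremental osdmaps, full osdmaps, and the RocksDB key-value cache. The three allocations in the line should (and do) sum to just under the reported cache_size; a quick check:

    # Sketch: the three allocations from the _set_new_cache_sizes line
    # should fit inside the reported cache_size target.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total, total <= cache_size)   # 1010827264 True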
Nov 24 20:52:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:21.935+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:22 compute-0 ceph-mon[75677]: pgmap v2120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:22 compute-0 ceph-mon[75677]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 4 ])
Nov 24 20:52:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:22.181+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:22.965+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:23 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:23.161+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:23 compute-0 podman[304440]: 2025-11-24 20:52:23.84544976 +0000 UTC m=+0.078001036 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
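The podman lines are container health-check events: podman runs the configured '/openstack/healthcheck' test inside each EDPM-managed container and logs the outcome together with the container's entire Kolla-style config_data, which is what makes these lines so long. The useful signal is just the container name, health_status, and health_failing_streak; a sketch that reduces the events to that, assuming this excerpt on stdin:

    # Sketch: reduce podman health_status events to "name: status (streak)".
    # Parses only key=value fields that appear in the lines above.
    import re
    import sys

    FIELDS = re.compile(
        r"container health_status \S+ \(image=(?P<image>[^,]+), "
        r"name=(?P<name>[^,]+), health_status=(?P<status>[^,]+), "
        r"health_failing_streak=(?P<streak>\d+)"
    )

    for line in sys.stdin:
        m = FIELDS.search(line)
        if m:
            print(f"{m['name']}: {m['status']} "
                  f"(failing streak {m['streak']})")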
Nov 24 20:52:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:24.002+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:24 compute-0 ceph-mon[75677]: pgmap v2121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:24 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:24.173+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:52:24
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
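The balancer run above is routine housekeeping: in upmap mode it would emit pg-upmap-items entries to even out PG placement, but only when the fraction of misplaced objects is below the configured 0.05 ceiling and an improvement actually exists. Here "prepared 0/10 changes" means it evaluated the listed pools and found nothing worth moving, so placement is already balanced. The gate it applies, as an illustrative sketch (the variable names are mine, not Ceph's):

    # Sketch: the decision the balancer log lines above reflect, using
    # the logged numbers. Illustrative only; not Ceph's internal API.
    max_misplaced = 0.05        # "max misplaced 0.050000"
    misplaced_ratio = 0.0       # all 305 PGs are active+clean variants
    prepared_changes = 0        # "prepared 0/10 changes"

    if misplaced_ratio >= max_misplaced:
        print("skip: too much data already moving")
    elif prepared_changes == 0:
        print("distribution already balanced; nothing to do")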
Nov 24 20:52:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:25.027+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:25 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3662 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:25 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:25.193+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:25.993+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:26 compute-0 ceph-mon[75677]: pgmap v2122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:26 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:26 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 3662 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:26.188+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:26.985+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:27 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:27.156+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:27 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:52:27.347 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:52:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:27.940+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:28 compute-0 ceph-mon[75677]: pgmap v2123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:28 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:28.186+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:28.959+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:29 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:29.158+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:29.935+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:30 compute-0 ceph-mon[75677]: pgmap v2124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:30 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:30.177+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:30.925+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3672 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:31 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:31.139+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:31.925+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:32.134+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:32 compute-0 ceph-mon[75677]: pgmap v2125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:32 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:32 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 3672 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:32.880+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:33.094+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:33 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:33.929+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:34.104+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:34 compute-0 ceph-mon[75677]: pgmap v2126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:34 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:34 compute-0 podman[304460]: 2025-11-24 20:52:34.861840776 +0000 UTC m=+0.084845094 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 20:52:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:34.952+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:35.153+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:35 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:52:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
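The pg_autoscaler lines expose its sizing rule directly: a pool's PG target is its fraction of used space, times its bias, times a cluster-wide PG budget, then quantized to a usable pg_num (and left alone when the result is close to the current value, hence "quantized to 32 (current 32)"). The logged numbers are consistent with a budget of 300 PGs; 300 is inferred from the ratios here (it equals the default 100 target PGs per OSD times 3 OSDs) and is not stated in the log itself. Verifying three of the lines:

    # Sketch: reproduce the logged "pg target" values. PG_BUDGET = 300 is
    # an inference from the ratios in the log (100 PGs per OSD x 3 OSDs),
    # not something the log states.
    PG_BUDGET = 300

    pools = {
        # name: (fraction of space used, bias, logged pg target)
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.0008637525843263658, 1.0, 0.25912577529790976),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }

    for name, (used, bias, logged) in pools.items():
        computed = used * bias * PG_BUDGET
        assert abs(computed - logged) < 1e-12, name
        print(f"{name}: {computed:.6g} == logged {logged:.6g}")

All computed targets land far below the current pg_num values, so the autoscaler changes nothing.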
Nov 24 20:52:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:35.977+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:36.171+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:36 compute-0 ceph-mon[75677]: pgmap v2127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:36 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:36.976+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:37.171+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3677 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:37 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:37.997+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:38.139+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:38 compute-0 ceph-mon[75677]: pgmap v2128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:38 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:38 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 3677 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:38.994+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:39.150+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:39 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:39 compute-0 podman[304482]: 2025-11-24 20:52:39.902344697 +0000 UTC m=+0.123495932 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:52:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:39.976+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:40.196+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:40 compute-0 ceph-mon[75677]: pgmap v2129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:40 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:52:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:52:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
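The mgr's rbd_support module is reloading its per-pool schedules here (vms, volumes, backups, images), and the oldest slow op reported by osd.0 above is exactly that kind of read: an omap-get-vals on the rbd_trash_purge_schedule object, blocked in the vms pool. While those reads hang, schedule listing served by the same module will stall too. A hedged probe, assuming the standard rbd CLI and a client keyring are available on this host (the subcommand exists in current Ceph releases, but treat the exact JSON output shape as an assumption):

    import json
    import subprocess

    def trash_purge_schedules(pool):
        """List rbd trash-purge schedules for a pool via the rbd CLI.

        'rbd trash purge schedule ls' is answered by the mgr rbd_support
        module, so it will also hang if the omap reads above stay blocked.
        """
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls",
             "--pool", pool, "--format", "json"],
            capture_output=True, text=True, timeout=30, check=True,
        )
        return json.loads(out.stdout)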
Nov 24 20:52:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:40.942+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:41.173+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:41 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:41.977+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:42.175+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:42 compute-0 ceph-mon[75677]: pgmap v2130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:42 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:43.024+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:43.174+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:43 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:44.034+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:44.168+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:44 compute-0 ceph-mon[75677]: pgmap v2131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:44 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
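Each pgmap line above summarizes placement-group state for the whole cluster: "2 active+clean+laggy, 303 active+clean" means all 305 PGs are serviceable, but two are flagged laggy, consistent with the blocked ops on osd.0 and osd.1. A minimal sketch that parses these pgmap lines; the regex is derived from the lines above and nothing else is assumed:

    import re

    PGMAP = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    def laggy_pgs(line):
        """Return (pgmap version, total PGs, PGs in a 'laggy' state) or None."""
        m = PGMAP.search(line)
        if not m:
            return None
        # states looks like "2 active+clean+laggy, 303 active+clean"
        laggy = sum(int(s.split()[0]) for s in m["states"].split(",") if "laggy" in s)
        return int(m["ver"]), int(m["total"]), laggy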
Nov 24 20:52:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:45.055+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:45.162+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:45 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:46.056+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:46.186+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:46 compute-0 ceph-mon[75677]: pgmap v2132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:46 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 3682 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
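The "Health check update: 36 slow ops, oldest one blocked for 3682 sec" line is the cluster-wide SLOW_OPS warning the mon aggregates from the per-OSD reports above. The same information can be read programmatically, assuming the ceph CLI and an admin keyring are present on the host; the JSON field names below follow the current health-check schema but should be treated as an assumption:

    import json
    import subprocess

    def slow_ops_summary():
        """Fetch the SLOW_OPS health check, if raised, via 'ceph health detail'."""
        out = subprocess.run(
            ["ceph", "health", "detail", "--format", "json"],
            capture_output=True, text=True, timeout=30, check=True,
        )
        health = json.loads(out.stdout)
        check = health.get("checks", {}).get("SLOW_OPS")
        if check is None:
            return None  # health check not currently raised
        return check["summary"]["message"]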
Nov 24 20:52:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:47.031+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:47.163+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:52:47 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:47 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 3682 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:48.022+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:48.191+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:48 compute-0 ceph-mon[75677]: pgmap v2133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:48 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 20:52:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'default.rgw.log' : 20 ])
Nov 24 20:52:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:49.019+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:49.164+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:49 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:49.984+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:50.126+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:50 compute-0 ceph-mon[75677]: pgmap v2134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:50 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:50.941+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:51.139+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:51 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:51 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3692 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:51.935+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:52.100+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:52 compute-0 ceph-mon[75677]: pgmap v2135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:52 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:52 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3692 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:52 compute-0 sshd-session[304509]: Received disconnect from 51.158.120.121 port 39804:11: Bye Bye [preauth]
Nov 24 20:52:52 compute-0 sshd-session[304509]: Disconnected from authenticating user root 51.158.120.121 port 39804 [preauth]
Nov 24 20:52:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:52.907+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:53.144+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:53 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:53.877+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:54.112+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:54 compute-0 ceph-mon[75677]: pgmap v2136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:54 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:54 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:52:54 compute-0 sudo[304511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:52:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:54 compute-0 sudo[304511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:54 compute-0 sudo[304511]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:54 compute-0 podman[304535]: 2025-11-24 20:52:54.764815376 +0000 UTC m=+0.087551979 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:52:54 compute-0 sudo[304542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:52:54 compute-0 sudo[304542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:54 compute-0 sudo[304542]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:54.856+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:54 compute-0 sudo[304580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:52:54 compute-0 sudo[304580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:54 compute-0 sudo[304580]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:54 compute-0 sudo[304605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:52:54 compute-0 sudo[304605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:55.137+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:55 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:55 compute-0 sudo[304605]: pam_unix(sudo:session): session closed for user root
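The sudo bursts above are the mgr's cephadm module managing this host over SSH as ceph-admin: a /bin/true connectivity probe, a `which python3` lookup, then the fsid-local cephadm binary run with gather-facts to collect host inventory. `cephadm gather-facts` prints a JSON document; a minimal sketch of consuming it, using the generic `cephadm` entry point rather than the hash-suffixed copy logged above, with the exact fact key names treated as assumptions:

    import json
    import subprocess

    def host_facts():
        """Collect host inventory the way the cephadm module does."""
        out = subprocess.run(
            ["cephadm", "gather-facts"],
            capture_output=True, text=True, timeout=895, check=True,
        )
        return json.loads(out.stdout)

    # keys such as "hostname" come from cephadm's fact dump;
    # treat specific field names as assumptions, not a stable API.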
Nov 24 20:52:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:52:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:52:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:52:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:52:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:52:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:52:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5cdec791-e78f-4366-8dc4-a90f7358f8a6 does not exist
Nov 24 20:52:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e939e100-e40d-41a3-b4c4-651e4be02b52 does not exist
Nov 24 20:52:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4655a3a1-aaa1-4e4c-b69b-b960f9a85d19 does not exist
Nov 24 20:52:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:52:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:52:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:52:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:52:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:52:55 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:52:55 compute-0 sudo[304662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:52:55 compute-0 sudo[304662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:55 compute-0 sudo[304662]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:55 compute-0 sudo[304687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:52:55 compute-0 sudo[304687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:55 compute-0 sudo[304687]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:55.821+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:55 compute-0 sudo[304712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:52:55 compute-0 sudo[304712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:55 compute-0 sudo[304712]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:55 compute-0 sudo[304737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:52:55 compute-0 sudo[304737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
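Here cephadm applies the default_drive_group OSD spec (note CEPH_VOLUME_OSDSPEC_AFFINITY in the environment): it runs `ceph-volume lvm batch --no-auto` inside the Ceph container against three pre-created logical volumes, with --no-systemd because cephadm manages the units itself. A sketch of the equivalent direct invocation from Python, using the LV paths and flags verbatim from the line above; the wrapper function itself is illustrative:

    import subprocess

    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    def prepare_osds(lvs=LVS):
        """Run 'ceph-volume lvm batch' the way the logged cephadm call does.

        --no-auto: treat the arguments as explicit data devices;
        --no-systemd: skip unit activation, since cephadm owns the units.
        """
        subprocess.run(
            ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
             "--yes", "--no-systemd"],
            check=True,
        )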
Nov 24 20:52:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:56.104+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:56 compute-0 ceph-mon[75677]: pgmap v2137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:52:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:52:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:52:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:52:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:52:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:52:56 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.421378166 +0000 UTC m=+0.052849678 container create f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:52:56 compute-0 systemd[1]: Started libpod-conmon-f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca.scope.
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.398970643 +0000 UTC m=+0.030442165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:52:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.524925752 +0000 UTC m=+0.156397344 container init f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.538541535 +0000 UTC m=+0.170013027 container start f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.542629436 +0000 UTC m=+0.174101028 container attach f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:52:56 compute-0 nostalgic_bose[304817]: 167 167
Nov 24 20:52:56 compute-0 systemd[1]: libpod-f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca.scope: Deactivated successfully.
Nov 24 20:52:56 compute-0 conmon[304817]: conmon f8f8927a1d466c0e89e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca.scope/container/memory.events
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.549893255 +0000 UTC m=+0.181364757 container died f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8697e9ddcfb6c132cd1f2fb94a6b358b9ec1a43691d4875e50081c3b5a6f420-merged.mount: Deactivated successfully.
Nov 24 20:52:56 compute-0 podman[304801]: 2025-11-24 20:52:56.609835296 +0000 UTC m=+0.241306838 container remove f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:52:56 compute-0 systemd[1]: libpod-conmon-f8f8927a1d466c0e89e24351eb310641b567cf4eb180ac813fc3a5d089daaeca.scope: Deactivated successfully.
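The lines above trace the full lifecycle of one short-lived cephadm helper container (create, init, start, attach, died, remove in under a quarter of a second); the conmon "Failed to open cgroups file ... memory.events" warning is commonly seen when a container exits before conmon samples its cgroup, and for one-shot runs like this it is most likely harmless noise (an inference, not something the log states). The podman events carry both wall-clock and monotonic (m=+...) stamps; a minimal sketch computing a container's lifetime from them, valid here because create and remove come from the same podman invocation:

    import re

    STAMP = re.compile(
        r"m=\+(?P<mono>[0-9.]+) container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def container_lifetimes(lines):
        """Map container id -> seconds between its 'create' and 'remove' events.

        m=+ offsets are relative to the emitting podman process, so pairs are
        only comparable when both events come from the same invocation.
        """
        seen = {}
        for line in lines:
            m = STAMP.search(line)
            if m:
                seen.setdefault(m["cid"], {})[m["event"]] = float(m["mono"])
        return {
            cid: ev["remove"] - ev["create"]
            for cid, ev in seen.items()
            if "create" in ev and "remove" in ev
        }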
Nov 24 20:52:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:56.810+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:56 compute-0 podman[304841]: 2025-11-24 20:52:56.861754755 +0000 UTC m=+0.071847289 container create fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_murdock, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 20:52:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:52:56 compute-0 systemd[1]: Started libpod-conmon-fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1.scope.
Nov 24 20:52:56 compute-0 podman[304841]: 2025-11-24 20:52:56.834402475 +0000 UTC m=+0.044495079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:52:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ef398e1cf9450b6be971e1a779d9cc89de09569b1c77eb552d31da3c1182b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ef398e1cf9450b6be971e1a779d9cc89de09569b1c77eb552d31da3c1182b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ef398e1cf9450b6be971e1a779d9cc89de09569b1c77eb552d31da3c1182b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ef398e1cf9450b6be971e1a779d9cc89de09569b1c77eb552d31da3c1182b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ef398e1cf9450b6be971e1a779d9cc89de09569b1c77eb552d31da3c1182b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:56 compute-0 podman[304841]: 2025-11-24 20:52:56.994475698 +0000 UTC m=+0.204568302 container init fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:52:57 compute-0 podman[304841]: 2025-11-24 20:52:57.007734822 +0000 UTC m=+0.217827366 container start fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:52:57 compute-0 podman[304841]: 2025-11-24 20:52:57.013022437 +0000 UTC m=+0.223115031 container attach fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:52:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:57.146+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3697 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:57 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:57.850+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:58.108+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:58 compute-0 funny_murdock[304857]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:52:58 compute-0 funny_murdock[304857]: --> relative data size: 1.0
Nov 24 20:52:58 compute-0 funny_murdock[304857]: --> All data devices are unavailable
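The three funny_murdock lines above are ceph-volume's drive-selection report: three LVM-backed data devices were offered (no physical ones), relative data size 1.0, and all were rejected as unavailable — consistent with the `lvm list` output further down, which shows each device already carrying an OSD logical volume. A sketch for checking why devices are rejected, assuming the stock `ceph-volume inventory --format json` layout (a list of device objects with `path`, `available`, and `rejected_reasons` fields — treat that layout as an assumption, and run it where the `ceph-volume` CLI is available, e.g. inside `cephadm shell`):

    import json
    import subprocess

    # Assumed: ceph-volume on PATH; inventory reports per-device availability.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev["path"], "rejected:", ", ".join(dev.get("rejected_reasons", [])))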
Nov 24 20:52:58 compute-0 systemd[1]: libpod-fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1.scope: Deactivated successfully.
Nov 24 20:52:58 compute-0 podman[304841]: 2025-11-24 20:52:58.366449417 +0000 UTC m=+1.576541961 container died fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_murdock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:52:58 compute-0 systemd[1]: libpod-fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1.scope: Consumed 1.275s CPU time.
Nov 24 20:52:58 compute-0 ceph-mon[75677]: pgmap v2138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:58 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3697 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:52:58 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
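Per the health check update above, SLOW_OPS has been active for roughly 3697 s across osd.0 and osd.1, with the delayed requests concentrated in the default.rgw.log and vms pools. A minimal sketch for pulling that same summary programmatically, assuming the usual `ceph health detail` JSON layout (`checks` -> check name -> `summary.message`; this is a convenience sketch, not cephadm's own tooling, and needs an admin keyring):

    import json
    import subprocess

    # Assumed: `ceph` CLI and admin keyring available (e.g. inside cephadm shell).
    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)

    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "23 slow ops, oldest one blocked for 3697 sec, daemons [osd.0,osd.1] have slow ops."
        print(slow["summary"]["message"])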
Nov 24 20:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-25ef398e1cf9450b6be971e1a779d9cc89de09569b1c77eb552d31da3c1182b8-merged.mount: Deactivated successfully.
Nov 24 20:52:58 compute-0 podman[304841]: 2025-11-24 20:52:58.454701353 +0000 UTC m=+1.664793897 container remove fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 20:52:58 compute-0 systemd[1]: libpod-conmon-fb8d2bb944b648857613a9a811d64952f6dfba3eb7bf652a406e7db2027c69f1.scope: Deactivated successfully.
Nov 24 20:52:58 compute-0 sudo[304737]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:58 compute-0 sudo[304900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:52:58 compute-0 sudo[304900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:58 compute-0 sudo[304900]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:52:58 compute-0 sudo[304925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:52:58 compute-0 sudo[304925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:58 compute-0 sudo[304925]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:58.809+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:58 compute-0 sudo[304950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:52:58 compute-0 sudo[304950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:58 compute-0 sudo[304950]: pam_unix(sudo:session): session closed for user root
Nov 24 20:52:58 compute-0 sudo[304975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:52:58 compute-0 sudo[304975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:52:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:52:59.109+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:52:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:52:59 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.440367843 +0000 UTC m=+0.064826417 container create becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_napier, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:52:59 compute-0 systemd[1]: Started libpod-conmon-becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9.scope.
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.411647056 +0000 UTC m=+0.036105680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:52:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.553348746 +0000 UTC m=+0.177807380 container init becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_napier, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.566783364 +0000 UTC m=+0.191241938 container start becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_napier, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.571328678 +0000 UTC m=+0.195787322 container attach becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_napier, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 20:52:59 compute-0 crazy_napier[305056]: 167 167
Nov 24 20:52:59 compute-0 systemd[1]: libpod-becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9.scope: Deactivated successfully.
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.574805044 +0000 UTC m=+0.199263608 container died becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:52:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-be8bc19ffb3cc848fb9d2e9e39bdf60902113a0aeca95c9949b452109fe8cef1-merged.mount: Deactivated successfully.
Nov 24 20:52:59 compute-0 podman[305040]: 2025-11-24 20:52:59.62725469 +0000 UTC m=+0.251713264 container remove becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 20:52:59 compute-0 systemd[1]: libpod-conmon-becd6637d53bd994ab05c19ad0778a48a86fc33d7ecd7e7c10917114c19804c9.scope: Deactivated successfully.
Nov 24 20:52:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:52:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:52:59.780+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:52:59 compute-0 podman[305081]: 2025-11-24 20:52:59.89096412 +0000 UTC m=+0.077193844 container create 44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 20:52:59 compute-0 systemd[1]: Started libpod-conmon-44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158.scope.
Nov 24 20:52:59 compute-0 podman[305081]: 2025-11-24 20:52:59.859696834 +0000 UTC m=+0.045926598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:52:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5621a47df4a2603604f8e6bd127695567b414cbf9bbf8dc48b9e024bc921384f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5621a47df4a2603604f8e6bd127695567b414cbf9bbf8dc48b9e024bc921384f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5621a47df4a2603604f8e6bd127695567b414cbf9bbf8dc48b9e024bc921384f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5621a47df4a2603604f8e6bd127695567b414cbf9bbf8dc48b9e024bc921384f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:52:59 compute-0 podman[305081]: 2025-11-24 20:52:59.998859815 +0000 UTC m=+0.185089539 container init 44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tesla, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:53:00 compute-0 podman[305081]: 2025-11-24 20:53:00.008476168 +0000 UTC m=+0.194705892 container start 44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:53:00 compute-0 podman[305081]: 2025-11-24 20:53:00.012404146 +0000 UTC m=+0.198633870 container attach 44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tesla, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:53:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:00.127+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:00 compute-0 ceph-mon[75677]: pgmap v2139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:00 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:00.794+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:00 compute-0 bold_tesla[305098]: {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:     "0": [
Nov 24 20:53:00 compute-0 bold_tesla[305098]:         {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "devices": [
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "/dev/loop3"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             ],
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_name": "ceph_lv0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_size": "21470642176",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "name": "ceph_lv0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "tags": {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cluster_name": "ceph",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.crush_device_class": "",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.encrypted": "0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osd_id": "0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.type": "block",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.vdo": "0"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             },
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "type": "block",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "vg_name": "ceph_vg0"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:         }
Nov 24 20:53:00 compute-0 bold_tesla[305098]:     ],
Nov 24 20:53:00 compute-0 bold_tesla[305098]:     "1": [
Nov 24 20:53:00 compute-0 bold_tesla[305098]:         {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "devices": [
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "/dev/loop4"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             ],
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_name": "ceph_lv1",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_size": "21470642176",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "name": "ceph_lv1",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "tags": {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cluster_name": "ceph",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.crush_device_class": "",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.encrypted": "0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osd_id": "1",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.type": "block",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.vdo": "0"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             },
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "type": "block",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "vg_name": "ceph_vg1"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:         }
Nov 24 20:53:00 compute-0 bold_tesla[305098]:     ],
Nov 24 20:53:00 compute-0 bold_tesla[305098]:     "2": [
Nov 24 20:53:00 compute-0 bold_tesla[305098]:         {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "devices": [
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "/dev/loop5"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             ],
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_name": "ceph_lv2",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_size": "21470642176",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "name": "ceph_lv2",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "tags": {
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.cluster_name": "ceph",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.crush_device_class": "",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.encrypted": "0",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osd_id": "2",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.type": "block",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:                 "ceph.vdo": "0"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             },
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "type": "block",
Nov 24 20:53:00 compute-0 bold_tesla[305098]:             "vg_name": "ceph_vg2"
Nov 24 20:53:00 compute-0 bold_tesla[305098]:         }
Nov 24 20:53:00 compute-0 bold_tesla[305098]:     ]
Nov 24 20:53:00 compute-0 bold_tesla[305098]: }
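The JSON block above (bold_tesla) is the payload of the `ceph-volume ... lvm list --format json` probe logged at 20:52:58: one entry per OSD id, each carrying the backing LV, device list, and ceph.* tags. A minimal sketch that flattens it into an osd-to-device table, using only keys present in the output above (the filename is hypothetical — a saved copy of that JSON):

    import json

    # Hypothetical local copy of the bold_tesla JSON output above.
    with open("lvm_list.json") as fh:
        listing = json.load(fh)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")

Against the listing above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, each on its own ceph_vgN/ceph_lvN volume.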
Nov 24 20:53:00 compute-0 systemd[1]: libpod-44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158.scope: Deactivated successfully.
Nov 24 20:53:00 compute-0 podman[305107]: 2025-11-24 20:53:00.967390116 +0000 UTC m=+0.056013384 container died 44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:53:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5621a47df4a2603604f8e6bd127695567b414cbf9bbf8dc48b9e024bc921384f-merged.mount: Deactivated successfully.
Nov 24 20:53:01 compute-0 podman[305107]: 2025-11-24 20:53:01.041733922 +0000 UTC m=+0.130357160 container remove 44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:53:01 compute-0 systemd[1]: libpod-conmon-44c220b746b4964b7a000beec6361a3834b82845f6d046c79788d225636f7158.scope: Deactivated successfully.
Nov 24 20:53:01 compute-0 sudo[304975]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:01 compute-0 sudo[305122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:53:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:01.176+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:01 compute-0 sudo[305122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:53:01 compute-0 sudo[305122]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:01 compute-0 sudo[305147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:53:01 compute-0 sudo[305147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:53:01 compute-0 sudo[305147]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:01 compute-0 sudo[305172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:53:01 compute-0 sudo[305172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:53:01 compute-0 sudo[305172]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:01 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.453686) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017581453725, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 2482, "num_deletes": 638, "total_data_size": 2587285, "memory_usage": 2649232, "flush_reason": "Manual Compaction"}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 24 20:53:01 compute-0 sudo[305197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:53:01 compute-0 sudo[305197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
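This is the same bundled cephadm binary as the earlier `lvm list` probe, now invoked with `raw list`. Re-running it by hand would look like the sketch below — the command is copied verbatim from the sudo COMMAND line above, and root is required; treat it as illustrative rather than a supported interface:

    import json
    import subprocess

    # Copied verbatim from the sudo COMMAND logged above; run as root.
    cmd = [
        "/bin/python3",
        "/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image",
        "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "05e060a3-406b-57f0-89d2-ec35f5b09305",
        "--", "raw", "list", "--format", "json",
    ]
    raw = json.loads(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
    print(json.dumps(raw, indent=4))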
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017581474410, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 2518893, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60188, "largest_seqno": 62669, "table_properties": {"data_size": 2508721, "index_size": 5257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3845, "raw_key_size": 36194, "raw_average_key_size": 23, "raw_value_size": 2482621, "raw_average_value_size": 1638, "num_data_blocks": 228, "num_entries": 1515, "num_filter_entries": 1515, "num_deletions": 638, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017436, "oldest_key_time": 1764017436, "file_creation_time": 1764017581, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 20806 microseconds, and 10930 cpu microseconds.
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.474486) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 2518893 bytes OK
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.474514) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.475802) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.475821) EVENT_LOG_v1 {"time_micros": 1764017581475813, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.475848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2574649, prev total WAL file size 2574649, number of live WAL files 2.
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.476916) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(2459KB)], [131(8974KB)]
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017581476959, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11709283, "oldest_snapshot_seqno": -1}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 13245 keys, 10135780 bytes, temperature: kUnknown
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017581535101, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10135780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10063358, "index_size": 38197, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33157, "raw_key_size": 365689, "raw_average_key_size": 27, "raw_value_size": 9836045, "raw_average_value_size": 742, "num_data_blocks": 1388, "num_entries": 13245, "num_filter_entries": 13245, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017581, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.535481) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10135780 bytes
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.536622) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.8 rd, 173.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 14533, records dropped: 1288 output_compression: NoCompression
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.536646) EVENT_LOG_v1 {"time_micros": 1764017581536633, "job": 80, "event": "compaction_finished", "compaction_time_micros": 58306, "compaction_time_cpu_micros": 38876, "output_level": 6, "num_output_files": 1, "total_output_size": 10135780, "num_input_records": 14533, "num_output_records": 13245, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017581537303, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017581539795, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.476791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.539948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.539957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.539959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.539961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:53:01 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:53:01.539962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
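The JOB 80 compaction summary above reports MB in(2.4, 8.8 +0.0 blob) out(9.7 +0.0 blob) with read-write-amplify(8.7) and write-amplify(4.0); both amplification figures follow directly from those byte counts:

    # Figures from the JOB 80 "compacted to" summary line above (MB, as printed).
    l0_in, l6_in, out = 2.4, 8.8, 9.7

    print(round(out / l0_in, 1))                    # 4.0 -> write-amplify(4.0)
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 8.7 -> read-write-amplify(8.7)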
Nov 24 20:53:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:01.805+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:01 compute-0 podman[305263]: 2025-11-24 20:53:01.875015579 +0000 UTC m=+0.068312592 container create 6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_aryabhata, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:53:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:01 compute-0 systemd[1]: Started libpod-conmon-6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f.scope.
Nov 24 20:53:01 compute-0 podman[305263]: 2025-11-24 20:53:01.843236729 +0000 UTC m=+0.036533812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:53:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:53:01 compute-0 podman[305263]: 2025-11-24 20:53:01.984801585 +0000 UTC m=+0.178098608 container init 6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 20:53:01 compute-0 podman[305263]: 2025-11-24 20:53:01.996941258 +0000 UTC m=+0.190238271 container start 6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:53:02 compute-0 podman[305263]: 2025-11-24 20:53:02.001278257 +0000 UTC m=+0.194575270 container attach 6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 20:53:02 compute-0 systemd[1]: libpod-6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f.scope: Deactivated successfully.
Nov 24 20:53:02 compute-0 nervous_aryabhata[305279]: 167 167
Nov 24 20:53:02 compute-0 conmon[305279]: conmon 6b99f5ffcec1f4facb45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f.scope/container/memory.events
Nov 24 20:53:02 compute-0 podman[305263]: 2025-11-24 20:53:02.009310557 +0000 UTC m=+0.202607550 container died 6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_aryabhata, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-596ad702bda4d26a2ceb4e02b703c79f57506dfe344e637d6b1aa381d0a09ba7-merged.mount: Deactivated successfully.
Nov 24 20:53:02 compute-0 podman[305263]: 2025-11-24 20:53:02.068492957 +0000 UTC m=+0.261789970 container remove 6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:53:02 compute-0 systemd[1]: libpod-conmon-6b99f5ffcec1f4facb4594dd0bf5317ca086611342bac6c05caa44b9390bcb2f.scope: Deactivated successfully.
Nov 24 20:53:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:02.220+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:02 compute-0 podman[305304]: 2025-11-24 20:53:02.334896542 +0000 UTC m=+0.078359537 container create 4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 20:53:02 compute-0 podman[305304]: 2025-11-24 20:53:02.302262018 +0000 UTC m=+0.045725103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:53:02 compute-0 systemd[1]: Started libpod-conmon-4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d.scope.
Nov 24 20:53:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765e8b1f516a77178c476366c94d3cbb0e8ae07b15cecc96f939b75cd4ac08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765e8b1f516a77178c476366c94d3cbb0e8ae07b15cecc96f939b75cd4ac08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765e8b1f516a77178c476366c94d3cbb0e8ae07b15cecc96f939b75cd4ac08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53765e8b1f516a77178c476366c94d3cbb0e8ae07b15cecc96f939b75cd4ac08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:53:02 compute-0 ceph-mon[75677]: pgmap v2140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:02 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:02 compute-0 podman[305304]: 2025-11-24 20:53:02.472339855 +0000 UTC m=+0.215802880 container init 4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:53:02 compute-0 podman[305304]: 2025-11-24 20:53:02.488017204 +0000 UTC m=+0.231480229 container start 4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:53:02 compute-0 podman[305304]: 2025-11-24 20:53:02.49185086 +0000 UTC m=+0.235313855 container attach 4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 20:53:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:02.808+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:03.193+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:03 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]: {
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "osd_id": 2,
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "type": "bluestore"
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:     },
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "osd_id": 1,
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "type": "bluestore"
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:     },
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "osd_id": 0,
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:         "type": "bluestore"
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]:     }
Nov 24 20:53:03 compute-0 affectionate_leakey[305320]: }
Nov 24 20:53:03 compute-0 systemd[1]: libpod-4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d.scope: Deactivated successfully.
Nov 24 20:53:03 compute-0 podman[305304]: 2025-11-24 20:53:03.673672831 +0000 UTC m=+1.417135826 container died 4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 20:53:03 compute-0 systemd[1]: libpod-4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d.scope: Consumed 1.193s CPU time.
Nov 24 20:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-53765e8b1f516a77178c476366c94d3cbb0e8ae07b15cecc96f939b75cd4ac08-merged.mount: Deactivated successfully.
Nov 24 20:53:03 compute-0 podman[305304]: 2025-11-24 20:53:03.744618983 +0000 UTC m=+1.488081968 container remove 4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:53:03 compute-0 systemd[1]: libpod-conmon-4f06dfcd87b1beec2ca73ceffddc247271b0dbe44b9cd971434d556112c6346d.scope: Deactivated successfully.
Nov 24 20:53:03 compute-0 sudo[305197]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:53:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:53:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:53:03 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:53:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 459ac4e5-85e0-4fce-a712-cfdb77d9cf2c does not exist
Nov 24 20:53:03 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5ed9795c-7f4f-4189-a2f9-867c8ad778f2 does not exist
Nov 24 20:53:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:03.851+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:03 compute-0 sudo[305366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:53:03 compute-0 sudo[305366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:53:03 compute-0 sudo[305366]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:04 compute-0 sudo[305391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:53:04 compute-0 sudo[305391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:53:04 compute-0 sudo[305391]: pam_unix(sudo:session): session closed for user root
Nov 24 20:53:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:04.240+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:04 compute-0 ceph-mon[75677]: pgmap v2141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:53:04 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:53:04 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:04.854+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:05.240+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:05 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:05 compute-0 podman[305416]: 2025-11-24 20:53:05.851978888 +0000 UTC m=+0.078519441 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:53:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:05.883+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:06.257+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:06 compute-0 ceph-mon[75677]: pgmap v2142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:06 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:06 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 20:53:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3702 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:06.909+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:07.303+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:07 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3702 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:07.915+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:08.256+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:08 compute-0 ceph-mon[75677]: pgmap v2143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:08 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:08.916+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:09.287+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:53:09.408 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:53:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:53:09.408 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:53:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:53:09.409 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:53:09 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:09.883+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:10.324+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:10 compute-0 ceph-mon[75677]: pgmap v2144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:10 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:10 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:10 compute-0 podman[305438]: 2025-11-24 20:53:10.916730342 +0000 UTC m=+0.143380337 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 24 20:53:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:10.929+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:11.344+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3712 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:11.942+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:12.388+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:12 compute-0 ceph-mon[75677]: pgmap v2145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:12 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:12 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3712 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:12.899+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:13.392+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:13 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:13.873+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:14.427+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:14 compute-0 ceph-mon[75677]: pgmap v2146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:14 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:14 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:14.845+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:15.452+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:15 compute-0 ceph-mon[75677]: pgmap v2147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:15 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:15.856+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:53:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2293167516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:53:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:53:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2293167516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:53:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:16.466+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:16 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2293167516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:53:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2293167516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:53:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:16.864+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 23 slow ops, oldest one blocked for 3717 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:17.452+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:17 compute-0 ceph-mon[75677]: pgmap v2148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:17 compute-0 ceph-mon[75677]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Nov 24 20:53:17 compute-0 ceph-mon[75677]: Health check update: 23 slow ops, oldest one blocked for 3717 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:53:17.715 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:53:17 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:53:17.716 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:53:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 20:53:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:17.901+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:18.435+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:18.905+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:19.405+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:19 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 20:53:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:19 compute-0 ceph-mon[75677]: pgmap v2149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:19.865+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:20.445+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:20 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:20 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:20.832+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:21.436+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:21 compute-0 ceph-mon[75677]: pgmap v2150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:21 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:21.852+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:22.486+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:22 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:22 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:22.898+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:23.530+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:23 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:53:23.718 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:53:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:23 compute-0 ceph-mon[75677]: pgmap v2151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:23.918+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:53:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:24.536+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:53:24
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms']
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:53:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:24 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:24.938+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:25.551+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:25 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:25 compute-0 ceph-mon[75677]: pgmap v2152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:25 compute-0 podman[305466]: 2025-11-24 20:53:25.841547796 +0000 UTC m=+0.065990958 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 20:53:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:25.972+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:26.529+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:26 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:26 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:26.982+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:27.501+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:27 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:27 compute-0 ceph-mon[75677]: pgmap v2153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:27 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:28.020+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:28.454+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:28 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:28.984+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:29.498+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:29 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:29 compute-0 ceph-mon[75677]: pgmap v2154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:30.009+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:30.537+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:30 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:31.035+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:31.571+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:31 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:31 compute-0 ceph-mon[75677]: pgmap v2155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:32.040+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:32.608+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:32 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:32 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:33.052+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:33.610+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:33 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:33 compute-0 ceph-mon[75677]: pgmap v2156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:34.040+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:34.632+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:34 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:34 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:35.035+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:53:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:53:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:35.590+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:35 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:35 compute-0 ceph-mon[75677]: pgmap v2157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:36.020+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:36.607+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:36 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:36 compute-0 podman[305486]: 2025-11-24 20:53:36.878524999 +0000 UTC m=+0.106281891 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:53:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3736 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:37.035+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:37.569+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:37 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:37 compute-0 ceph-mon[75677]: pgmap v2158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:37 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3736 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:38.013+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:38.620+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:38 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:39.024+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:39.601+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:39 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:39 compute-0 ceph-mon[75677]: pgmap v2159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:39 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:40.002+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:40.560+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:53:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:53:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:40 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:40.975+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:41.562+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:41 compute-0 podman[305507]: 2025-11-24 20:53:41.91467824 +0000 UTC m=+0.146456331 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:53:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3742 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:41 compute-0 ceph-mon[75677]: pgmap v2160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:41 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:41 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3742 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:42.005+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:42.525+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:42 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:43.036+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:43.490+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:44.030+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:44 compute-0 ceph-mon[75677]: pgmap v2161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:44 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:44.538+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:44.993+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:45 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:45.565+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:46.037+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:46 compute-0 ceph-mon[75677]: pgmap v2162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:46 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:46.612+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:47.022+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 32 slow ops, oldest one blocked for 3747 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:47 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:47.638+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:48.069+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:48 compute-0 ceph-mon[75677]: pgmap v2163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:48 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:48 compute-0 ceph-mon[75677]: Health check update: 32 slow ops, oldest one blocked for 3747 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:48.655+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:49 compute-0 ceph-mon[75677]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 11 ])
Nov 24 20:53:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:49.107+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:49.610+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:50 compute-0 ceph-mon[75677]: pgmap v2164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:50 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:50.151+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:50.607+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:51 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:51.191+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:51.619+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:52 compute-0 ceph-mon[75677]: pgmap v2165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:52 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:52.219+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:52.587+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:53 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:53.188+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:53.600+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:54.174+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:54 compute-0 ceph-mon[75677]: pgmap v2166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:54 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:53:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:54.585+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:55.199+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3751 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:55 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:55.590+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:56.207+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:56 compute-0 ceph-mon[75677]: pgmap v2167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:56 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:56 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3751 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:53:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:56.633+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:56 compute-0 podman[305534]: 2025-11-24 20:53:56.855969758 +0000 UTC m=+0.082466099 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 20:53:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:53:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:57.202+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:57 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:57.649+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:58.201+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:58 compute-0 ceph-mon[75677]: pgmap v2168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:58 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:58.645+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:53:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:53:59.168+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:53:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:59 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:53:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:53:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:53:59.668+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:53:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:00.131+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:00 compute-0 ceph-mon[75677]: pgmap v2169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:00 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:00.697+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:01.119+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3761 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:01 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:01.715+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:02.133+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:02 compute-0 ceph-mon[75677]: pgmap v2170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:02 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:02 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3761 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:02.726+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:03.177+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:03 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:03.704+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:04 compute-0 sudo[305554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:04 compute-0 sudo[305554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:04 compute-0 sudo[305554]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:04.132+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:04 compute-0 sudo[305579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:54:04 compute-0 sudo[305579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:04 compute-0 sudo[305579]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:04 compute-0 sudo[305604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:04 compute-0 sudo[305604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:04 compute-0 sudo[305604]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:04 compute-0 sudo[305629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 20:54:04 compute-0 ceph-mon[75677]: pgmap v2171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:04 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:04 compute-0 sudo[305629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:04.749+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:05 compute-0 podman[305726]: 2025-11-24 20:54:05.025785277 +0000 UTC m=+0.061515246 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 24 20:54:05 compute-0 podman[305726]: 2025-11-24 20:54:05.127472392 +0000 UTC m=+0.163202391 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:54:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:05.132+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:05 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:05.796+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:05 compute-0 sudo[305629]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:54:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:54:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:06 compute-0 sudo[305887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:06 compute-0 sudo[305887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:06 compute-0 sudo[305887]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:06.114+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:06 compute-0 sudo[305912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:54:06 compute-0 sudo[305912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:06 compute-0 sudo[305912]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:06 compute-0 sudo[305937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:06 compute-0 sudo[305937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:06 compute-0 sudo[305937]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:06 compute-0 sudo[305962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:54:06 compute-0 sudo[305962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:06 compute-0 ceph-mon[75677]: pgmap v2172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:06 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:06.812+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:06 compute-0 sudo[305962]: pam_unix(sudo:session): session closed for user root
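[editor's sketch] The sudo call that just closed is cephadm's periodic host probe: the mgr ssh's in as ceph-admin and runs the copied cephadm binary with "gather-facts", which prints one JSON document of host facts. A minimal way to replay that call and read the output (the binary path is copied from the log; the key names "hostname" and "memory_total_kb" are assumptions based on cephadm's HostFacts report, not confirmed by this log):

    import json
    import subprocess

    # Path copied verbatim from the sudo COMMAND= line above.
    CEPHADM = ("/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    ).stdout
    facts = json.loads(out)  # gather-facts emits a single JSON object
    # Key names below are assumed from cephadm's fact report format.
    print(facts.get("hostname"), facts.get("memory_total_kb"))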
Nov 24 20:54:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev af7ceffc-d3c4-493a-a6ab-31baf6841aef does not exist
Nov 24 20:54:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d385dcb1-8564-4f13-91f6-ad665a79b901 does not exist
Nov 24 20:54:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 720e89f1-b566-46b7-bfda-007f819ce1c9 does not exist
Nov 24 20:54:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:07.129+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:07 compute-0 sudo[306019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:07 compute-0 sudo[306019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:07 compute-0 sudo[306019]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:07 compute-0 sudo[306050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:54:07 compute-0 sudo[306050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:07 compute-0 sudo[306050]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:07 compute-0 podman[306043]: 2025-11-24 20:54:07.258878804 +0000 UTC m=+0.097494710 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 20:54:07 compute-0 sudo[306088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:07 compute-0 sudo[306088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:07 compute-0 sudo[306088]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:07 compute-0 sudo[306114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:54:07 compute-0 sudo[306114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3766 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
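[editor's sketch] The SLOW_OPS health check above names osd.0 and osd.1 and says the oldest op has been blocked for 3766 s, which matches the per-OSD get_health_metrics lines throughout this window. To see the same information on demand rather than waiting for the mon's periodic WRN line, the health checks can be dumped as JSON; a minimal sketch, assuming the ceph CLI is reachable on this host (e.g. inside "cephadm shell"):

    import json
    import subprocess

    # "ceph health detail --format json" returns the active checks; SLOW_OPS
    # carries the same summary text the mon logs above.
    health = json.loads(subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"], text=True))
    for name, check in health.get("checks", {}).items():
        print(name, check["summary"]["message"])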
Nov 24 20:54:07 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:54:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:07.797+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:07 compute-0 podman[306180]: 2025-11-24 20:54:07.834756884 +0000 UTC m=+0.055245774 container create dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:54:07 compute-0 systemd[1]: Started libpod-conmon-dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e.scope.
Nov 24 20:54:07 compute-0 podman[306180]: 2025-11-24 20:54:07.810300724 +0000 UTC m=+0.030789614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:54:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:54:07 compute-0 podman[306180]: 2025-11-24 20:54:07.946294688 +0000 UTC m=+0.166783598 container init dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:54:07 compute-0 podman[306180]: 2025-11-24 20:54:07.957640239 +0000 UTC m=+0.178129139 container start dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:54:07 compute-0 podman[306180]: 2025-11-24 20:54:07.962374679 +0000 UTC m=+0.182863559 container attach dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 20:54:07 compute-0 quizzical_cerf[306197]: 167 167
Nov 24 20:54:07 compute-0 systemd[1]: libpod-dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e.scope: Deactivated successfully.
Nov 24 20:54:07 compute-0 conmon[306197]: conmon dc84e969ca7769174004 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e.scope/container/memory.events
Nov 24 20:54:07 compute-0 podman[306180]: 2025-11-24 20:54:07.967737765 +0000 UTC m=+0.188226655 container died dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c7d5071ef04b0f60269fbe1f98fa6c9e5002d1d16354e3cadef778b794a8ab-merged.mount: Deactivated successfully.
Nov 24 20:54:08 compute-0 podman[306180]: 2025-11-24 20:54:08.022551186 +0000 UTC m=+0.243040086 container remove dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:54:08 compute-0 systemd[1]: libpod-conmon-dc84e969ca77691740042f2f4f325a662b2cd602c92b5ac137e1dfe1c5ff352e.scope: Deactivated successfully.
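[editor's sketch] The quizzical_cerf lines above are one complete lifecycle of a short-lived cephadm helper container: image pull, create, init, start, attach, died, remove, each mirrored by a systemd libpod scope. The same sequence repeats below for funny_ritchie and practical_nightingale. To watch that sequence directly instead of picking it out of the journal, podman's event log can be replayed; a sketch (the JSON field names "Time" and "Status" are assumptions about podman's event format):

    import json
    import subprocess

    # Replay recent podman events for one container name; with --format json
    # podman prints one JSON object per line.
    proc = subprocess.run(
        ["podman", "events", "--since", "5m", "--stream=false",
         "--filter", "container=quizzical_cerf", "--format", "json"],
        capture_output=True, text=True, check=True)
    for line in proc.stdout.splitlines():
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"))  # create, init, start, died, ...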
Nov 24 20:54:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:08.082+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:08 compute-0 podman[306220]: 2025-11-24 20:54:08.24583398 +0000 UTC m=+0.042210397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:54:08 compute-0 podman[306220]: 2025-11-24 20:54:08.340905133 +0000 UTC m=+0.137281480 container create e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:54:08 compute-0 systemd[1]: Started libpod-conmon-e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c.scope.
Nov 24 20:54:08 compute-0 ceph-mon[75677]: pgmap v2173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:08 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:08 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3766 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419d7d215790b2b72d3f3ebc417c3a828ff94f405804d6e8e82ba2d0e832a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419d7d215790b2b72d3f3ebc417c3a828ff94f405804d6e8e82ba2d0e832a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419d7d215790b2b72d3f3ebc417c3a828ff94f405804d6e8e82ba2d0e832a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419d7d215790b2b72d3f3ebc417c3a828ff94f405804d6e8e82ba2d0e832a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419d7d215790b2b72d3f3ebc417c3a828ff94f405804d6e8e82ba2d0e832a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
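[editor's note] The xfs messages above are informational, not errors: each overlay bind-mount remount reminds you that the filesystem's inode timestamps are signed 32-bit seconds, so they run out at 0x7fffffff. The cutoff the kernel is quoting is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the "year 2038" limit
    # referenced by the kernel messages above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00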
Nov 24 20:54:08 compute-0 podman[306220]: 2025-11-24 20:54:08.445069326 +0000 UTC m=+0.241445693 container init e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:54:08 compute-0 podman[306220]: 2025-11-24 20:54:08.462859462 +0000 UTC m=+0.259235789 container start e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:54:08 compute-0 podman[306220]: 2025-11-24 20:54:08.46674991 +0000 UTC m=+0.263126267 container attach e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:54:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:08.813+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:09.119+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:54:09.409 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:54:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:54:09.411 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:54:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:54:09.412 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:54:09 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:09 compute-0 funny_ritchie[306236]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:54:09 compute-0 funny_ritchie[306236]: --> relative data size: 1.0
Nov 24 20:54:09 compute-0 funny_ritchie[306236]: --> All data devices are unavailable
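[editor's note] "All data devices are unavailable" from this "lvm batch" helper means ceph-volume rejected every LV passed on the command line of the earlier cephadm ceph-volume call; that is expected here, since the "lvm list" output later in the log shows those LVs already carry OSD tags (osd_id 0 and 1), so there is nothing new to deploy. To see per-device availability and the rejection reasons, ceph-volume's inventory report can be queried; a minimal sketch, assuming it is run where ceph-volume is available (e.g. inside "cephadm shell"):

    import json
    import subprocess

    # "ceph-volume inventory --format json" reports, per device, whether it
    # is available for OSD deployment and why it was rejected if not.
    devices = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True))
    for dev in devices:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))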
Nov 24 20:54:09 compute-0 systemd[1]: libpod-e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c.scope: Deactivated successfully.
Nov 24 20:54:09 compute-0 systemd[1]: libpod-e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c.scope: Consumed 1.061s CPU time.
Nov 24 20:54:09 compute-0 podman[306265]: 2025-11-24 20:54:09.617784948 +0000 UTC m=+0.031726200 container died e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:54:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:09.779+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:10.087+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-99419d7d215790b2b72d3f3ebc417c3a828ff94f405804d6e8e82ba2d0e832a8-merged.mount: Deactivated successfully.
Nov 24 20:54:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:10 compute-0 ceph-mon[75677]: pgmap v2174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:10 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:10 compute-0 podman[306265]: 2025-11-24 20:54:10.773995388 +0000 UTC m=+1.187936630 container remove e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:54:10 compute-0 systemd[1]: libpod-conmon-e1a1fdcc86361ff48c2e9318ae98117803894850554a1ac878d3c135eb0beb8c.scope: Deactivated successfully.
Nov 24 20:54:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:10.808+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:10 compute-0 sudo[306114]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:10 compute-0 sudo[306282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:10 compute-0 sudo[306282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:10 compute-0 sudo[306282]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:11 compute-0 sudo[306307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:54:11 compute-0 sudo[306307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:11 compute-0 sudo[306307]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:11.060+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:11 compute-0 sudo[306332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:11 compute-0 sudo[306332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:11 compute-0 sudo[306332]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:11 compute-0 sudo[306357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:54:11 compute-0 sudo[306357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
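[editor's sketch] The command in the sudo session just opened, "ceph-volume ... lvm list --format json", dumps every OSD's LVM metadata as one JSON document keyed by OSD id; its output appears further below under the objective_cerf container name. A minimal parse of that document, matching the keys visible in the output ("lvm_list.json" is a hypothetical capture of it):

    import json

    # Top-level keys are OSD ids ("0", "1", ...), each mapping to a list of
    # LV records with lv_path, devices, and ceph.* tags.
    with open("lvm_list.json") as fh:  # hypothetical file holding the output
        report = json.load(fh)
    for osd_id, lvs in sorted(report.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={lv['tags']['ceph.osd_fsid']} on {lv['devices']}")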
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.685839997 +0000 UTC m=+0.061808813 container create 40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:54:11 compute-0 systemd[1]: Started libpod-conmon-40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3.scope.
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.656897194 +0000 UTC m=+0.032866090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:54:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:54:11 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:11 compute-0 ceph-mon[75677]: pgmap v2175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.801547145 +0000 UTC m=+0.177516041 container init 40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.816671549 +0000 UTC m=+0.192640365 container start 40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.821713967 +0000 UTC m=+0.197682823 container attach 40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:54:11 compute-0 practical_nightingale[306439]: 167 167
Nov 24 20:54:11 compute-0 systemd[1]: libpod-40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3.scope: Deactivated successfully.
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.829362257 +0000 UTC m=+0.205331103 container died 40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 20:54:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:11.844+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f88b0b31a5f9395273a33dcd372ad2473b103c18575dc567313b21cfa49ef973-merged.mount: Deactivated successfully.
Nov 24 20:54:11 compute-0 podman[306423]: 2025-11-24 20:54:11.876502838 +0000 UTC m=+0.252471654 container remove 40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:54:11 compute-0 systemd[1]: libpod-conmon-40bbe5784fd8ae55abfa2b1318ff48c1b6baabe6149bcf613d203ed4208ce5c3.scope: Deactivated successfully.
Nov 24 20:54:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:12.101+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:12 compute-0 podman[306462]: 2025-11-24 20:54:12.127185911 +0000 UTC m=+0.071371504 container create 2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cerf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:54:12 compute-0 systemd[1]: Started libpod-conmon-2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9.scope.
Nov 24 20:54:12 compute-0 podman[306462]: 2025-11-24 20:54:12.097721135 +0000 UTC m=+0.041906828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:54:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da12fd4ac26688c217f4324673857ea10c655dfec67a64696621a40a9bd3aff8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da12fd4ac26688c217f4324673857ea10c655dfec67a64696621a40a9bd3aff8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da12fd4ac26688c217f4324673857ea10c655dfec67a64696621a40a9bd3aff8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da12fd4ac26688c217f4324673857ea10c655dfec67a64696621a40a9bd3aff8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:12 compute-0 podman[306462]: 2025-11-24 20:54:12.238434718 +0000 UTC m=+0.182620351 container init 2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cerf, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 20:54:12 compute-0 podman[306462]: 2025-11-24 20:54:12.249360137 +0000 UTC m=+0.193545760 container start 2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:54:12 compute-0 podman[306462]: 2025-11-24 20:54:12.254065306 +0000 UTC m=+0.198250909 container attach 2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cerf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:54:12 compute-0 podman[306477]: 2025-11-24 20:54:12.322281013 +0000 UTC m=+0.141868656 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 24 20:54:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:12 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:12.845+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:12 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:13 compute-0 objective_cerf[306480]: {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:     "0": [
Nov 24 20:54:13 compute-0 objective_cerf[306480]:         {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "devices": [
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "/dev/loop3"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             ],
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_name": "ceph_lv0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_size": "21470642176",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "name": "ceph_lv0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "tags": {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cluster_name": "ceph",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.crush_device_class": "",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.encrypted": "0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osd_id": "0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.type": "block",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.vdo": "0"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             },
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "type": "block",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "vg_name": "ceph_vg0"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:         }
Nov 24 20:54:13 compute-0 objective_cerf[306480]:     ],
Nov 24 20:54:13 compute-0 objective_cerf[306480]:     "1": [
Nov 24 20:54:13 compute-0 objective_cerf[306480]:         {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "devices": [
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "/dev/loop4"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             ],
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_name": "ceph_lv1",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_size": "21470642176",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "name": "ceph_lv1",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "tags": {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cluster_name": "ceph",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.crush_device_class": "",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.encrypted": "0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osd_id": "1",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.type": "block",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.vdo": "0"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             },
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "type": "block",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "vg_name": "ceph_vg1"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:         }
Nov 24 20:54:13 compute-0 objective_cerf[306480]:     ],
Nov 24 20:54:13 compute-0 objective_cerf[306480]:     "2": [
Nov 24 20:54:13 compute-0 objective_cerf[306480]:         {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "devices": [
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "/dev/loop5"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             ],
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_name": "ceph_lv2",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_size": "21470642176",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "name": "ceph_lv2",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "tags": {
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.cluster_name": "ceph",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.crush_device_class": "",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.encrypted": "0",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osd_id": "2",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.type": "block",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:                 "ceph.vdo": "0"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             },
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "type": "block",
Nov 24 20:54:13 compute-0 objective_cerf[306480]:             "vg_name": "ceph_vg2"
Nov 24 20:54:13 compute-0 objective_cerf[306480]:         }
Nov 24 20:54:13 compute-0 objective_cerf[306480]:     ]
Nov 24 20:54:13 compute-0 objective_cerf[306480]: }
Nov 24 20:54:13 compute-0 systemd[1]: libpod-2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9.scope: Deactivated successfully.
Nov 24 20:54:13 compute-0 podman[306462]: 2025-11-24 20:54:13.065067682 +0000 UTC m=+1.009253275 container died 2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cerf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 20:54:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:13.097+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-da12fd4ac26688c217f4324673857ea10c655dfec67a64696621a40a9bd3aff8-merged.mount: Deactivated successfully.
Nov 24 20:54:13 compute-0 podman[306462]: 2025-11-24 20:54:13.270481967 +0000 UTC m=+1.214667570 container remove 2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cerf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 20:54:13 compute-0 systemd[1]: libpod-conmon-2a69ce000e3ef3f1245c612da077439fc76d73bc80a3bb95e7fb9492f978c1b9.scope: Deactivated successfully.
Nov 24 20:54:13 compute-0 sudo[306357]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:13 compute-0 sudo[306528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:13 compute-0 sudo[306528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:13 compute-0 sudo[306528]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:13 compute-0 sudo[306553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:54:13 compute-0 sudo[306553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:13 compute-0 sudo[306553]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:13 compute-0 sudo[306578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:13 compute-0 sudo[306578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:13 compute-0 sudo[306578]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:13 compute-0 sudo[306603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:54:13 compute-0 sudo[306603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:13 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:13 compute-0 ceph-mon[75677]: pgmap v2176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:13.867+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:14.101+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:14 compute-0 podman[306668]: 2025-11-24 20:54:14.113577613 +0000 UTC m=+0.042032382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:54:14 compute-0 podman[306668]: 2025-11-24 20:54:14.284468002 +0000 UTC m=+0.212922721 container create c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:54:14 compute-0 systemd[1]: Started libpod-conmon-c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09.scope.
Nov 24 20:54:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:54:14 compute-0 podman[306668]: 2025-11-24 20:54:14.695189999 +0000 UTC m=+0.623644778 container init c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chatterjee, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:54:14 compute-0 podman[306668]: 2025-11-24 20:54:14.709493261 +0000 UTC m=+0.637947970 container start c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chatterjee, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:54:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:14 compute-0 admiring_chatterjee[306684]: 167 167
Nov 24 20:54:14 compute-0 systemd[1]: libpod-c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09.scope: Deactivated successfully.
Nov 24 20:54:14 compute-0 podman[306668]: 2025-11-24 20:54:14.773213466 +0000 UTC m=+0.701668235 container attach c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 20:54:14 compute-0 podman[306668]: 2025-11-24 20:54:14.775483258 +0000 UTC m=+0.703937967 container died c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chatterjee, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:54:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:14.821+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:15 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5826126ef83419e13d5bdf2da1e0007026a3038bedfbb2c79a084c421a6334-merged.mount: Deactivated successfully.
Nov 24 20:54:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:15.128+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:15 compute-0 podman[306668]: 2025-11-24 20:54:15.199867258 +0000 UTC m=+1.128321977 container remove c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 20:54:15 compute-0 systemd[1]: libpod-conmon-c6e8ece5174670812c1d11661dd8e57e101fd3804299d3308d976b92c01a7a09.scope: Deactivated successfully.
Nov 24 20:54:15 compute-0 podman[306708]: 2025-11-24 20:54:15.397663204 +0000 UTC m=+0.037121267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:54:15 compute-0 podman[306708]: 2025-11-24 20:54:15.530014668 +0000 UTC m=+0.169472671 container create 43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bardeen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:54:15 compute-0 systemd[1]: Started libpod-conmon-43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23.scope.
Nov 24 20:54:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb02566f89d804258eb68be9542fa1a043435e4ef103d7ebeeca6c836a5fa7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb02566f89d804258eb68be9542fa1a043435e4ef103d7ebeeca6c836a5fa7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb02566f89d804258eb68be9542fa1a043435e4ef103d7ebeeca6c836a5fa7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb02566f89d804258eb68be9542fa1a043435e4ef103d7ebeeca6c836a5fa7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:54:15 compute-0 podman[306708]: 2025-11-24 20:54:15.778169403 +0000 UTC m=+0.417627466 container init 43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:54:15 compute-0 podman[306708]: 2025-11-24 20:54:15.792865446 +0000 UTC m=+0.432323439 container start 43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 20:54:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:15.808+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:15 compute-0 podman[306708]: 2025-11-24 20:54:15.87115888 +0000 UTC m=+0.510616913 container attach 43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bardeen, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:54:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:16 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:16 compute-0 ceph-mon[75677]: pgmap v2177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:16 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:16.137+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:54:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986004612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:54:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:54:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/986004612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:54:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:16.805+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]: {
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "osd_id": 2,
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "type": "bluestore"
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:     },
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "osd_id": 1,
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "type": "bluestore"
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:     },
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "osd_id": 0,
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:         "type": "bluestore"
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]:     }
Nov 24 20:54:16 compute-0 awesome_bardeen[306725]: }
Nov 24 20:54:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3771 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:16 compute-0 systemd[1]: libpod-43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23.scope: Deactivated successfully.
Nov 24 20:54:16 compute-0 systemd[1]: libpod-43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23.scope: Consumed 1.176s CPU time.
Nov 24 20:54:17 compute-0 podman[306758]: 2025-11-24 20:54:17.010410754 +0000 UTC m=+0.032931471 container died 43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 20:54:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:17.129+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdb02566f89d804258eb68be9542fa1a043435e4ef103d7ebeeca6c836a5fa7a-merged.mount: Deactivated successfully.
Nov 24 20:54:17 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/986004612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:54:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/986004612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:54:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:17 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3771 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:17 compute-0 podman[306758]: 2025-11-24 20:54:17.295624015 +0000 UTC m=+0.318144732 container remove 43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:54:17 compute-0 systemd[1]: libpod-conmon-43b48bd4c7ca507b33dccff604fdb40f55600f50a832a1fffaea318932621a23.scope: Deactivated successfully.
Nov 24 20:54:17 compute-0 sudo[306603]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:54:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:54:17 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 44b937f6-fc5c-49f8-b92c-81707258aad2 does not exist
Nov 24 20:54:17 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 51db7728-caf8-4663-92e9-30a8b7e080c3 does not exist
Nov 24 20:54:17 compute-0 sudo[306773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:54:17 compute-0 sudo[306773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:17 compute-0 sudo[306773]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:17 compute-0 sudo[306798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:54:17 compute-0 sudo[306798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:54:17 compute-0 sudo[306798]: pam_unix(sudo:session): session closed for user root
Nov 24 20:54:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:17.762+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:18.171+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:18 compute-0 ceph-mon[75677]: pgmap v2178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:18 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:54:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:18.741+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:19.215+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:19 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:19.780+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:19 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:20.177+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:20 compute-0 ceph-mon[75677]: pgmap v2179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:20 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:20.778+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:21.134+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:21 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:21.753+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3781 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:22.108+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:22 compute-0 ceph-mon[75677]: pgmap v2180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:22 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:22 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3781 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:22.789+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:23.110+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:23 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:23.770+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:24.083+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:24 compute-0 ceph-mon[75677]: pgmap v2181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:24 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:54:24
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', '.rgw.root', '.mgr']
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:54:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:24.811+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:25.044+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:25 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:25.821+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:26.093+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:26 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:26 compute-0 ceph-mon[75677]: pgmap v2182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:26 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:26.841+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:26 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:27.073+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:27 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3787 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
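[annotation] The SLOW_OPS update above is enough to date the stall: 3787 seconds before the 20:54:27 report is 19:51:20, so these ops have been blocked for just over an hour. A minimal check of that arithmetic in Python (timestamp and blocked-for figure copied from the log line; the date is taken from the log's own Nov 24):

    from datetime import datetime, timedelta

    # Values copied from the SLOW_OPS health check line above.
    reported_at = datetime(2025, 11, 24, 20, 54, 27)
    blocked_for = timedelta(seconds=3787)

    onset = reported_at - blocked_for
    print(onset)  # 2025-11-24 19:51:20 -> the oldest op stalled shortly after 19:51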
Nov 24 20:54:27 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:27.869+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:27 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:27 compute-0 podman[306823]: 2025-11-24 20:54:27.871383516 +0000 UTC m=+0.093973314 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
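[annotation] The podman health_status event above is emitted by the container's configured healthcheck ('test': '/openstack/healthcheck', per the config_data in the same line). To query the same state outside the journal, a small sketch, assuming podman is on PATH and the container name matches the log's container_name=ovn_metadata_agent:

    import subprocess

    # Ask podman for the current health state of the named container.
    # .State.Health.Status is the standard podman/docker inspect field.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "healthy", matching health_status above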
Nov 24 20:54:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:28.104+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:28 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:28 compute-0 ceph-mon[75677]: pgmap v2183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:28 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:28 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3787 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:28.901+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:28 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:29 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:29.140+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:29 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:29.902+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:29 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:30.115+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:30 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:30 compute-0 ceph-mon[75677]: pgmap v2184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:30 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:30.912+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:30 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:31.067+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:31 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:31 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:31 compute-0 ceph-mon[75677]: pgmap v2185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:31.926+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:31 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:32.111+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:32 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:32 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:32.957+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:32 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:33.107+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:33 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:33 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:33 compute-0 ceph-mon[75677]: pgmap v2186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:33.985+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:33 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:34.117+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:34 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:34 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:35.009+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:35 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:35.085+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:54:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
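[annotation] The pg_autoscaler figures above are internally consistent: each "pg target" equals the pool's share of raw space times its bias times a per-cluster PG budget, which from these numbers works out to exactly 300 (plausibly the default mon_target_pg_per_osd of 100 times 3 OSDs; that split is an inference, the log only implies the product). A quick reconstruction in Python using values copied from the lines above:

    # Reconstructing the autoscaler arithmetic from the logged values.
    PG_BUDGET = 300  # inferred; every logged target satisfies ratio * bias * 300

    pools = {
        # name: (usage ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0008637525843263658, 1.0),
        "images":             (0.0006661126644201341, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET:.10g}")
    # matches the logged targets, e.g. vms -> 0.2591257753 (quantized to 32)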
Nov 24 20:54:35 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:35 compute-0 ceph-mon[75677]: pgmap v2187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:35.963+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:35 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:36.113+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:36 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:36 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:54:36.157 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 20:54:36 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:54:36.159 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 20:54:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:36 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:36.922+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:36 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:37.150+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:37 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:37 compute-0 podman[306842]: 2025-11-24 20:54:37.872450108 +0000 UTC m=+0.095599568 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 24 20:54:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:37.874+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:37 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:37 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:37 compute-0 ceph-mon[75677]: pgmap v2188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:37 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:38 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:54:38.161 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 20:54:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:38.163+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:38 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:38.876+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:38 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:38 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:39.120+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:39 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:39.867+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:39 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:39 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:39 compute-0 ceph-mon[75677]: pgmap v2189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:39 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:40.170+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:40 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:54:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:54:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:40.864+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:40 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:40 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:41.180+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:41 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:41.842+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:41 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3802 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:42 compute-0 ceph-mon[75677]: pgmap v2190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:42 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:42 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3802 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:42.216+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:42 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:42.865+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:42 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:42 compute-0 podman[306863]: 2025-11-24 20:54:42.982982189 +0000 UTC m=+0.198999780 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 20:54:43 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:43.249+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:43 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:43.835+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:43 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:44 compute-0 ceph-mon[75677]: pgmap v2191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:44 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:44.252+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:44 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:44.840+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:44 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:45 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:45.222+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:45 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:45.881+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:45 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:46 compute-0 ceph-mon[75677]: pgmap v2192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:46 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:46.242+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:46 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:46.874+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:46 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:47 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:47.211+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:47 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:47.878+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:47 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:48 compute-0 ceph-mon[75677]: pgmap v2193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:48 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:48 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:48.219+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:48 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:48.904+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:48 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:49 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:49.205+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:49 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:49.907+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:49 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:50 compute-0 ceph-mon[75677]: pgmap v2194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:50 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:50.174+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:50 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:50.877+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:50 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:51 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:51.200+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:51 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:51.847+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:51 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:52 compute-0 ceph-mon[75677]: pgmap v2195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:52 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:52.216+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:52 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:52.869+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:52 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:53.177+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:53 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:53 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:53.877+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:53 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:54.142+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:54 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:54 compute-0 ceph-mon[75677]: pgmap v2196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:54 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:54:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:54.832+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:54 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:55.134+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:55 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:55 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:55.829+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:55 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:56.124+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:56 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:56 compute-0 ceph-mon[75677]: pgmap v2197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:56 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:56.799+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:56 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.934416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017696934463, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2015, "num_deletes": 563, "total_data_size": 2072404, "memory_usage": 2123568, "flush_reason": "Manual Compaction"}
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017696952316, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2014131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62670, "largest_seqno": 64684, "table_properties": {"data_size": 2005779, "index_size": 4145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 29305, "raw_average_key_size": 23, "raw_value_size": 1984395, "raw_average_value_size": 1584, "num_data_blocks": 181, "num_entries": 1252, "num_filter_entries": 1252, "num_deletions": 563, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017582, "oldest_key_time": 1764017582, "file_creation_time": 1764017696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 17949 microseconds, and 10145 cpu microseconds.
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.952365) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2014131 bytes OK
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.952386) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.953958) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.953970) EVENT_LOG_v1 {"time_micros": 1764017696953966, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.953986) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2061984, prev total WAL file size 2061984, number of live WAL files 2.
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.954762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303230' seq:72057594037927935, type:22 .. '6C6F676D0033323735' seq:0, type:0; will stop at (end)
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1966KB)], [134(9898KB)]
Nov 24 20:54:56 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017696954813, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12149911, "oldest_snapshot_seqno": -1}
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 13359 keys, 11879504 bytes, temperature: kUnknown
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017697053014, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11879504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11804022, "index_size": 41001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33413, "raw_key_size": 368116, "raw_average_key_size": 27, "raw_value_size": 11572535, "raw_average_value_size": 866, "num_data_blocks": 1508, "num_entries": 13359, "num_filter_entries": 13359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.053343) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11879504 bytes
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.054911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.6 rd, 120.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.7 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 14497, records dropped: 1138 output_compression: NoCompression
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.054943) EVENT_LOG_v1 {"time_micros": 1764017697054928, "job": 82, "event": "compaction_finished", "compaction_time_micros": 98293, "compaction_time_cpu_micros": 53222, "output_level": 6, "num_output_files": 1, "total_output_size": 11879504, "num_input_records": 14497, "num_output_records": 13359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017697055878, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017697059351, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:56.954628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.059522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.059530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.059532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.059534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:54:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:54:57.059537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:54:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:57.122+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:57 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:57 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:57 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:54:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:57.834+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:57 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:58.154+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:58 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:58 compute-0 ceph-mon[75677]: pgmap v2198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:58 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:54:58 compute-0 podman[306889]: 2025-11-24 20:54:58.839335303 +0000 UTC m=+0.069365690 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 24 20:54:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:58.841+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:58 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:54:59.109+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:59 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:54:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:59 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:54:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:54:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:54:59.849+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:59 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:54:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:00.124+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:00 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:00 compute-0 ceph-mon[75677]: pgmap v2199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:00 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:00.892+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:00 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:01.106+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:01 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:01 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:01.940+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:01 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:02.126+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:02 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:02 compute-0 ceph-mon[75677]: pgmap v2200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:02 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:02 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:02.909+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:02 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:03.110+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:03 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:03 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:03.872+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:03 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:04.154+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:04 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:04 compute-0 ceph-mon[75677]: pgmap v2201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:04 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:04.880+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:04 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:05.127+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:05 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:05 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:05.845+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:05 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:06.106+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:06 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:06 compute-0 ceph-mon[75677]: pgmap v2202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:06 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
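The pgmap lines keep showing 2 of 305 PGs as active+clean+laggy, which is what drives the slow-request warnings above. A quick sketch that filters the PG table for the laggy ones (plain-text matching rather than JSON parsing, since the brief dump's column layout is stable enough for a spot check):

    #!/usr/bin/env python3
    # List the PGs currently carrying the "laggy" state flag.
    import subprocess

    table = subprocess.check_output(["ceph", "pg", "dump", "pgs_brief"], text=True)
    for line in table.splitlines():
        if "laggy" in line:
            print(line)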
Nov 24 20:55:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:06.859+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:06 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:07.142+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:07 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3827 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
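The monitor's health check aggregates both OSDs' counts (17 + 21 = 38 slow ops) and tracks the age of the oldest one, 3827 s at this point. The same summary plus per-daemon detail is available in structured form; a sketch:

    #!/usr/bin/env python3
    # Pull the SLOW_OPS health check (summary + per-daemon detail) as JSON.
    import json
    import subprocess

    health = json.loads(
        subprocess.check_output(["ceph", "health", "detail", "--format", "json"])
    )
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])
        for entry in slow.get("detail", []):
            print("  ", entry["message"])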
Nov 24 20:55:07 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:07.873+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:07 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:08.132+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:08 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:08 compute-0 ceph-mon[75677]: pgmap v2203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:08 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:08 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3827 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:08 compute-0 podman[306907]: 2025-11-24 20:55:08.865912764 +0000 UTC m=+0.093089130 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd)
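The health_status=healthy event above is podman running the multipathd container's configured healthcheck ('test': '/openstack/healthcheck') on its timer. The same check can be triggered on demand; a sketch, using the container name from the log:

    #!/usr/bin/env python3
    # Trigger the multipathd container's healthcheck on demand.
    # rc 0 = healthy, 1 = unhealthy, 125 = no healthcheck defined.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"not healthy (rc={rc})")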
Nov 24 20:55:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:08.912+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:08 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:09.143+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:09 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:55:09.410 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:55:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:55:09.411 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:55:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:55:09.411 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
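The three ovn_metadata_agent lines are oslo.concurrency's standard acquire/acquired/released DEBUG trace around the process monitor's `_check_child_processes` critical section. A sketch of the same API, with the lock name taken from the log and a placeholder body:

    #!/usr/bin/env python3
    # oslo.concurrency in-process lock, as used by neutron's ProcessMonitor.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; oslo emits the Acquiring/acquired/
        # released DEBUG lines seen above around this call.
        pass

    check_child_processes()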
Nov 24 20:55:09 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:09 compute-0 ceph-mon[75677]: pgmap v2204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:09.938+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:09 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:10.128+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:10 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:10 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:10.966+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:10 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:11.085+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:11 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:11 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:11 compute-0 ceph-mon[75677]: pgmap v2205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
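_set_new_cache_sizes is the monitor re-deriving its cache allocations (cache_size, inc_alloc, full_alloc, kv_alloc) from its memory budget, which recent Ceph releases autotune against the mon's memory target. A sketch to read that knob, assuming the mon_memory_target-based autotuning is in effect:

    #!/usr/bin/env python3
    # Read the monitor memory target the cache autotuner works against.
    import subprocess

    target = subprocess.check_output(
        ["ceph", "config", "get", "mon", "mon_memory_target"], text=True
    ).strip()
    print("mon_memory_target:", target)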
Nov 24 20:55:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:11.994+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:11 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:12.038+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:12 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:12 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:13.001+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:13 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:13.067+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:13 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:13 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:13 compute-0 ceph-mon[75677]: pgmap v2206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:13 compute-0 podman[306927]: 2025-11-24 20:55:13.953568387 +0000 UTC m=+0.177950453 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 20:55:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:14.019+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:14 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:14.024+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:14 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:14 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:15.011+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:15 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:15.025+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:15 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:15 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:15 compute-0 ceph-mon[75677]: pgmap v2207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:16.043+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:16 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:16.054+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:16 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:55:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624516056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:55:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:55:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624516056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:55:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:16 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/624516056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:55:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/624516056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
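The audit lines show client.openstack, connecting from 192.168.122.10 (presumably the control plane polling pool capacity), dispatching `df` and `osd pool get-quota` against the volumes pool. A sketch of the same two mon commands from the CLI:

    #!/usr/bin/env python3
    # The two mon commands dispatched by client.openstack above.
    import json
    import subprocess

    def ceph_json(*args):
        return json.loads(
            subprocess.check_output(["ceph", *args, "--format", "json"])
        )

    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print("cluster avail bytes:", df["stats"]["total_avail_bytes"])
    print("volumes quota:", quota)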
Nov 24 20:55:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:17.025+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:17 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:17.063+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:17 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:17 compute-0 sudo[306955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:17 compute-0 sudo[306955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:17 compute-0 sudo[306955]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:17 compute-0 sudo[306980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:55:17 compute-0 sudo[306980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:17 compute-0 sudo[306980]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:17 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:17 compute-0 ceph-mon[75677]: pgmap v2208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:17 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:17 compute-0 sudo[307005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:17 compute-0 sudo[307005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:17 compute-0 sudo[307005]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:17 compute-0 sudo[307030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:55:17 compute-0 sudo[307030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:18.014+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:18.086+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:18 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:18 compute-0 sudo[307030]: pam_unix(sudo:session): session closed for user root
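The sudo trail above (/bin/true, which python3, then the copied cephadm binary with gather-facts) is the mgr's cephadm module probing this host over SSH. gather-facts prints a JSON fact document; a sketch of invoking it directly (needs root; the key names printed are the usual cephadm facts and are assumptions here):

    #!/usr/bin/env python3
    # Run cephadm's host-facts probe directly (the subcommand sudo'd above).
    import json
    import subprocess

    facts = json.loads(subprocess.check_output(["cephadm", "gather-facts"]))
    for key in ("hostname", "kernel", "memory_total_kb"):  # assumed keys
        print(key, "=", facts.get(key))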
Nov 24 20:55:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:55:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:55:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:55:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:55:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e230ebbd-8550-441e-b6ae-f3ac97be1a00 does not exist
Nov 24 20:55:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2b868b2d-ca3a-4c6c-98b8-72906b8fcdc7 does not exist
Nov 24 20:55:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6d1abc04-ccb9-4de5-bbf0-eea32da9da28 does not exist
Nov 24 20:55:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:55:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:55:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:55:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
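Before (re)creating OSDs, the cephadm module regenerates a minimal ceph.conf and fetches the client.bootstrap-osd keyring, which is exactly the pair of mon commands audited above. A sketch of the CLI equivalents (admin keyring assumed):

    #!/usr/bin/env python3
    # The config/keyring pair cephadm assembles before a ceph-volume run.
    import subprocess

    conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True
    )
    keyring = subprocess.check_output(
        ["ceph", "auth", "get", "client.bootstrap-osd"], text=True
    )
    print(conf)
    print(keyring)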
Nov 24 20:55:18 compute-0 sudo[307086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:18 compute-0 sudo[307086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:18 compute-0 sudo[307086]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:18 compute-0 sudo[307111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:55:18 compute-0 sudo[307111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:18 compute-0 sudo[307111]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:18 compute-0 sudo[307136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:18 compute-0 sudo[307136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:18 compute-0 sudo[307136]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:18 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:55:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:55:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:18 compute-0 sudo[307161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:55:18 compute-0 sudo[307161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
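The sudo line shows the actual OSD-creation attempt: cephadm wrapping `ceph-volume lvm batch --no-auto` over three pre-built LVs, with the drive-group name passed via CEPH_VOLUME_OSDSPEC_AFFINITY. ceph-volume can report its plan without applying it; a sketch of the dry-run form, run wherever ceph-volume is available (e.g. inside the ceph container):

    #!/usr/bin/env python3
    # Dry-run of the batch call from the log: --report prints the plan only.
    import subprocess

    subprocess.run(
        [
            "ceph-volume", "lvm", "batch", "--no-auto",
            "/dev/ceph_vg0/ceph_lv0",
            "/dev/ceph_vg1/ceph_lv1",
            "/dev/ceph_vg2/ceph_lv2",
            "--report",
        ],
        check=True,
    )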
Nov 24 20:55:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:18.993+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:18 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:19.080+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:19 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.218300089 +0000 UTC m=+0.059108610 container create 07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:55:19 compute-0 systemd[1]: Started libpod-conmon-07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1.scope.
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.18471526 +0000 UTC m=+0.025523761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:55:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.327409617 +0000 UTC m=+0.168218198 container init 07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.341169973 +0000 UTC m=+0.181978484 container start 07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:55:19 compute-0 musing_jemison[307242]: 167 167
Nov 24 20:55:19 compute-0 systemd[1]: libpod-07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1.scope: Deactivated successfully.
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.357900141 +0000 UTC m=+0.198708662 container attach 07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.359501365 +0000 UTC m=+0.200309906 container died 07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 20:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b30b8b75830f3d4777d5154c1a1c5e99701093dd31ab8ad211c12374537ad89-merged.mount: Deactivated successfully.
Nov 24 20:55:19 compute-0 podman[307226]: 2025-11-24 20:55:19.422642105 +0000 UTC m=+0.263450606 container remove 07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:55:19 compute-0 systemd[1]: libpod-conmon-07615e732aed1b41533a81103092056a0631a52336220e54c6f10bf249423cd1.scope: Deactivated successfully.
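musing_jemison is a one-shot probe container: created, started, attached, dead and removed within roughly 0.2 s, leaving only the output "167 167" (the ceph uid/gid inside the image, which cephadm checks before deploying daemons). A sketch of the same style of probe; the stat target is an assumption about what produced that output:

    #!/usr/bin/env python3
    # One-shot podman probe in the same create/start/attach/remove pattern.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    print(out.strip())  # "167 167" would match the log's probe output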
Nov 24 20:55:19 compute-0 podman[307269]: 2025-11-24 20:55:19.6715657 +0000 UTC m=+0.085156032 container create f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_montalcini, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 20:55:19 compute-0 podman[307269]: 2025-11-24 20:55:19.627175195 +0000 UTC m=+0.040765517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:55:19 compute-0 systemd[1]: Started libpod-conmon-f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5.scope.
Nov 24 20:55:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:19 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:19 compute-0 ceph-mon[75677]: pgmap v2209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfbf8d7631e67ec4c45f67d00c18aa93158435f327b475b3416f7eacb83ab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfbf8d7631e67ec4c45f67d00c18aa93158435f327b475b3416f7eacb83ab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfbf8d7631e67ec4c45f67d00c18aa93158435f327b475b3416f7eacb83ab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfbf8d7631e67ec4c45f67d00c18aa93158435f327b475b3416f7eacb83ab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dfbf8d7631e67ec4c45f67d00c18aa93158435f327b475b3416f7eacb83ab3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:19 compute-0 podman[307269]: 2025-11-24 20:55:19.800206493 +0000 UTC m=+0.213796795 container init f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_montalcini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:55:19 compute-0 podman[307269]: 2025-11-24 20:55:19.808236633 +0000 UTC m=+0.221826975 container start f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:55:19 compute-0 podman[307269]: 2025-11-24 20:55:19.813154548 +0000 UTC m=+0.226744890 container attach f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_montalcini, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:55:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:20.033+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:20 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:20.104+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:20 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:20 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:20 compute-0 keen_montalcini[307285]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:55:20 compute-0 keen_montalcini[307285]: --> relative data size: 1.0
Nov 24 20:55:20 compute-0 keen_montalcini[307285]: --> All data devices are unavailable
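keen_montalcini is the ceph-volume batch run itself, and its verdict explains why no new OSDs appear: it saw 0 physical plus 3 LVM data devices and rejected all of them as unavailable, the expected answer when the LVs already carry prepared OSDs (osd.0 and osd.1 are running from them earlier in this log). A sketch to confirm which OSD each LV is already bound to:

    #!/usr/bin/env python3
    # Show which OSD each LV already belongs to (why batch rejects them).
    import json
    import subprocess

    lvs = json.loads(
        subprocess.check_output(["ceph-volume", "lvm", "list", "--format", "json"])
    )
    for osd_id, devices in lvs.items():
        for dev in devices:
            print(f"osd.{osd_id}", dev.get("type"), dev.get("lv_path"))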
Nov 24 20:55:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:21.108+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:21 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:21.078+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:21 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:21 compute-0 systemd[1]: libpod-f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5.scope: Deactivated successfully.
Nov 24 20:55:21 compute-0 systemd[1]: libpod-f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5.scope: Consumed 1.121s CPU time.
Nov 24 20:55:21 compute-0 podman[307314]: 2025-11-24 20:55:21.371283073 +0000 UTC m=+0.036783078 container died f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-44dfbf8d7631e67ec4c45f67d00c18aa93158435f327b475b3416f7eacb83ab3-merged.mount: Deactivated successfully.
Nov 24 20:55:21 compute-0 podman[307314]: 2025-11-24 20:55:21.421014815 +0000 UTC m=+0.086514820 container remove f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 20:55:21 compute-0 systemd[1]: libpod-conmon-f3e0e131e9b4ee5c42e0f95ca5a0ba362d688338350a90c14f2a1bbfc62383f5.scope: Deactivated successfully.
Nov 24 20:55:21 compute-0 sudo[307161]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:21 compute-0 sudo[307329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:21 compute-0 sudo[307329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:21 compute-0 sudo[307329]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:21 compute-0 sudo[307354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:55:21 compute-0 sudo[307354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:21 compute-0 sudo[307354]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:21 compute-0 sudo[307379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:21 compute-0 sudo[307379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:21 compute-0 sudo[307379]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:21 compute-0 sudo[307404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:55:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:21 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:21 compute-0 ceph-mon[75677]: pgmap v2210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:21 compute-0 sudo[307404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:22.107+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:22 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:22.151+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:22 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.236977898 +0000 UTC m=+0.067537431 container create 99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:55:22 compute-0 systemd[1]: Started libpod-conmon-99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625.scope.
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.208030625 +0000 UTC m=+0.038590208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:55:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.340528313 +0000 UTC m=+0.171087856 container init 99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carver, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.351487333 +0000 UTC m=+0.182046836 container start 99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.355869344 +0000 UTC m=+0.186428877 container attach 99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carver, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:55:22 compute-0 admiring_carver[307485]: 167 167
Nov 24 20:55:22 compute-0 systemd[1]: libpod-99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625.scope: Deactivated successfully.
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.360021717 +0000 UTC m=+0.190581210 container died 99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9245c37b94f1dca757e89da44f6c39a6e90fff91e3c21c844b967989a5645838-merged.mount: Deactivated successfully.
Nov 24 20:55:22 compute-0 podman[307469]: 2025-11-24 20:55:22.431535435 +0000 UTC m=+0.262094928 container remove 99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 20:55:22 compute-0 systemd[1]: libpod-conmon-99ef5001bac28f09abdd8be5d41d73ce7ae0c5f43197687e63668045fe432625.scope: Deactivated successfully.
Nov 24 20:55:22 compute-0 podman[307508]: 2025-11-24 20:55:22.701476097 +0000 UTC m=+0.078494531 container create 39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:55:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:22 compute-0 podman[307508]: 2025-11-24 20:55:22.670220751 +0000 UTC m=+0.047239275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:55:22 compute-0 systemd[1]: Started libpod-conmon-39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8.scope.
Nov 24 20:55:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:22 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:22 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25459438d46386568a4d6f59e78efc5cc0a2ffa86658fe9a8937fe2b77f89a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25459438d46386568a4d6f59e78efc5cc0a2ffa86658fe9a8937fe2b77f89a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25459438d46386568a4d6f59e78efc5cc0a2ffa86658fe9a8937fe2b77f89a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25459438d46386568a4d6f59e78efc5cc0a2ffa86658fe9a8937fe2b77f89a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:22 compute-0 podman[307508]: 2025-11-24 20:55:22.826109209 +0000 UTC m=+0.203127713 container init 39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:55:22 compute-0 podman[307508]: 2025-11-24 20:55:22.839197048 +0000 UTC m=+0.216215482 container start 39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 20:55:22 compute-0 podman[307508]: 2025-11-24 20:55:22.844070531 +0000 UTC m=+0.221088995 container attach 39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:55:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:23.098+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:23 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:23.198+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:23 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]: {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:     "0": [
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:         {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "devices": [
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "/dev/loop3"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             ],
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_name": "ceph_lv0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_size": "21470642176",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "name": "ceph_lv0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "tags": {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cluster_name": "ceph",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.crush_device_class": "",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.encrypted": "0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osd_id": "0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.type": "block",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.vdo": "0"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             },
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "type": "block",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "vg_name": "ceph_vg0"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:         }
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:     ],
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:     "1": [
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:         {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "devices": [
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "/dev/loop4"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             ],
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_name": "ceph_lv1",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_size": "21470642176",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "name": "ceph_lv1",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "tags": {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cluster_name": "ceph",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.crush_device_class": "",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.encrypted": "0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osd_id": "1",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.type": "block",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.vdo": "0"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             },
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "type": "block",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "vg_name": "ceph_vg1"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:         }
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:     ],
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:     "2": [
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:         {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "devices": [
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "/dev/loop5"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             ],
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_name": "ceph_lv2",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_size": "21470642176",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "name": "ceph_lv2",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "tags": {
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.cluster_name": "ceph",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.crush_device_class": "",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.encrypted": "0",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osd_id": "2",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.type": "block",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:                 "ceph.vdo": "0"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             },
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "type": "block",
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:             "vg_name": "ceph_vg2"
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:         }
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]:     ]
Nov 24 20:55:23 compute-0 dreamy_pasteur[307525]: }
Nov 24 20:55:23 compute-0 systemd[1]: libpod-39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8.scope: Deactivated successfully.
Nov 24 20:55:23 compute-0 podman[307508]: 2025-11-24 20:55:23.585952296 +0000 UTC m=+0.962970760 container died 39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 20:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25459438d46386568a4d6f59e78efc5cc0a2ffa86658fe9a8937fe2b77f89a3-merged.mount: Deactivated successfully.
Nov 24 20:55:23 compute-0 podman[307508]: 2025-11-24 20:55:23.656514908 +0000 UTC m=+1.033533312 container remove 39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:55:23 compute-0 systemd[1]: libpod-conmon-39de2233b432fdd4777fbb2436f559482ce4c4f06a78f0070036eff5635298f8.scope: Deactivated successfully.
Nov 24 20:55:23 compute-0 sudo[307404]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:23 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:23 compute-0 ceph-mon[75677]: pgmap v2211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:23 compute-0 sudo[307546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:23 compute-0 sudo[307546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:23 compute-0 sudo[307546]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:23 compute-0 sudo[307571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:55:23 compute-0 sudo[307571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:23 compute-0 sudo[307571]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:24 compute-0 sudo[307596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:24 compute-0 sudo[307596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:24 compute-0 sudo[307596]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:24.077+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:24 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:24 compute-0 sudo[307621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:55:24 compute-0 sudo[307621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:24.151+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:24 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.52245745 +0000 UTC m=+0.064479107 container create 91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_robinson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:55:24
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:55:24 compute-0 systemd[1]: Started libpod-conmon-91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720.scope.
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.497438415 +0000 UTC m=+0.039460122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:55:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.622844579 +0000 UTC m=+0.164866276 container init 91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.635022652 +0000 UTC m=+0.177044319 container start 91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.63969693 +0000 UTC m=+0.181718637 container attach 91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:55:24 compute-0 charming_robinson[307704]: 167 167
Nov 24 20:55:24 compute-0 systemd[1]: libpod-91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720.scope: Deactivated successfully.
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.644235374 +0000 UTC m=+0.186257031 container died 91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f656d2a31c9f3438c6b2795430af8a133d18779884a70b48c14e70e1cc59aa9-merged.mount: Deactivated successfully.
Nov 24 20:55:24 compute-0 podman[307688]: 2025-11-24 20:55:24.697188574 +0000 UTC m=+0.239210231 container remove 91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 20:55:24 compute-0 systemd[1]: libpod-conmon-91e231a824ab9193154a38dd48e37e7ffc6c39e49775be08b66d2b61c1a26720.scope: Deactivated successfully.
Nov 24 20:55:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:24 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:24 compute-0 podman[307729]: 2025-11-24 20:55:24.976734949 +0000 UTC m=+0.078622994 container create b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:55:25 compute-0 podman[307729]: 2025-11-24 20:55:24.945223326 +0000 UTC m=+0.047111421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:55:25 compute-0 systemd[1]: Started libpod-conmon-b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd.scope.
Nov 24 20:55:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:25.060+0000 7f1a67169640 -1 osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:25 compute-0 ceph-osd[89640]: osd.1 182 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0525b5a72b5018b5c5e3792b736bb1ad47ea64b711b6da1f89432e13fbb3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0525b5a72b5018b5c5e3792b736bb1ad47ea64b711b6da1f89432e13fbb3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0525b5a72b5018b5c5e3792b736bb1ad47ea64b711b6da1f89432e13fbb3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0525b5a72b5018b5c5e3792b736bb1ad47ea64b711b6da1f89432e13fbb3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:55:25 compute-0 podman[307729]: 2025-11-24 20:55:25.103470179 +0000 UTC m=+0.205358284 container init b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 20:55:25 compute-0 podman[307729]: 2025-11-24 20:55:25.116811714 +0000 UTC m=+0.218699769 container start b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_almeida, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 20:55:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:25.118+0000 7f2ca3ee7640 -1 osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:25 compute-0 ceph-osd[88624]: osd.0 182 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:25 compute-0 podman[307729]: 2025-11-24 20:55:25.120868676 +0000 UTC m=+0.222756791 container attach b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:55:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 24 20:55:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 24 20:55:25 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 24 20:55:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:25 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:25 compute-0 ceph-mon[75677]: pgmap v2212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:26.026+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:26 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:26 compute-0 determined_almeida[307746]: {
Nov 24 20:55:26 compute-0 determined_almeida[307746]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "osd_id": 2,
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "type": "bluestore"
Nov 24 20:55:26 compute-0 determined_almeida[307746]:     },
Nov 24 20:55:26 compute-0 determined_almeida[307746]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "osd_id": 1,
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "type": "bluestore"
Nov 24 20:55:26 compute-0 determined_almeida[307746]:     },
Nov 24 20:55:26 compute-0 determined_almeida[307746]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "osd_id": 0,
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:55:26 compute-0 determined_almeida[307746]:         "type": "bluestore"
Nov 24 20:55:26 compute-0 determined_almeida[307746]:     }
Nov 24 20:55:26 compute-0 determined_almeida[307746]: }
Nov 24 20:55:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:26.162+0000 7f2ca3ee7640 -1 osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:26 compute-0 ceph-osd[88624]: osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:26 compute-0 systemd[1]: libpod-b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd.scope: Deactivated successfully.
Nov 24 20:55:26 compute-0 podman[307729]: 2025-11-24 20:55:26.207695555 +0000 UTC m=+1.309583610 container died b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:55:26 compute-0 systemd[1]: libpod-b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd.scope: Consumed 1.090s CPU time.
Nov 24 20:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c0525b5a72b5018b5c5e3792b736bb1ad47ea64b711b6da1f89432e13fbb3b-merged.mount: Deactivated successfully.
Nov 24 20:55:26 compute-0 podman[307729]: 2025-11-24 20:55:26.282854713 +0000 UTC m=+1.384742768 container remove b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_almeida, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:55:26 compute-0 systemd[1]: libpod-conmon-b2444f732f7c47b13600b6d135406fcec19fbe729de7274dc23aa4d262ab90cd.scope: Deactivated successfully.
Nov 24 20:55:26 compute-0 sudo[307621]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:55:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:55:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:55:26 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:55:26 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 80b26143-910b-438b-af20-8760d5c30338 does not exist
Nov 24 20:55:26 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3aaf3235-3145-4766-8aaf-3f8c114bd829 does not exist
Nov 24 20:55:26 compute-0 sudo[307793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:55:26 compute-0 sudo[307793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:26 compute-0 sudo[307793]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:26 compute-0 sudo[307818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:55:26 compute-0 sudo[307818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:55:26 compute-0 sudo[307818]: pam_unix(sudo:session): session closed for user root
Nov 24 20:55:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 24 20:55:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:26 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:26 compute-0 ceph-mon[75677]: osdmap e183: 3 total, 3 up, 3 in
Nov 24 20:55:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:55:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:55:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:27.015+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:27 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:27.201+0000 7f2ca3ee7640 -1 osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:27 compute-0 ceph-osd[88624]: osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3847 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:27 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:27 compute-0 ceph-mon[75677]: pgmap v2214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 24 20:55:27 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3847 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:27.965+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:27 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:28.176+0000 7f2ca3ee7640 -1 osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:28 compute-0 ceph-osd[88624]: osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 24 20:55:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:28 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:28.936+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:28 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:29.215+0000 7f2ca3ee7640 -1 osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:29 compute-0 ceph-osd[88624]: osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:29 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:29 compute-0 ceph-mon[75677]: pgmap v2215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 614 B/s wr, 8 op/s
Nov 24 20:55:29 compute-0 podman[307843]: 2025-11-24 20:55:29.907087044 +0000 UTC m=+0.121708584 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
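[Annotation] The podman health_status entries embed each container's config_data label as a Python dict literal rather than JSON (note the single quotes and bare True), so it parses with ast.literal_eval rather than json.loads. A sketch on an abridged fragment of the ovn_metadata_agent value above (the full label also carries the environment, image, and volume list):

import ast

# Abridged fragment of the config_data label logged above.
raw = ("{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
       "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
       "'test': '/openstack/healthcheck'}, 'net': 'host', 'pid': 'host', "
       "'privileged': True, 'restart': 'always', 'user': 'root'}")

cfg = ast.literal_eval(raw)        # json.loads would reject this literal
print(cfg["healthcheck"]["test"])  # /openstack/healthcheck
print(cfg["net"], cfg["restart"])  # host always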
Nov 24 20:55:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:29.944+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:29 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:30.232+0000 7f2ca3ee7640 -1 osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:30 compute-0 ceph-osd[88624]: osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 20:55:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:30 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:30.948+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:30 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:31.253+0000 7f2ca3ee7640 -1 osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:31 compute-0 ceph-osd[88624]: osd.0 183 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:31 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:31 compute-0 ceph-mon[75677]: pgmap v2216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 20:55:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:31.925+0000 7f1a67169640 -1 osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:31 compute-0 ceph-osd[89640]: osd.1 183 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 24 20:55:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 24 20:55:31 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 24 20:55:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:32.210+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:32 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 20:55:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:32 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:32 compute-0 ceph-mon[75677]: osdmap e184: 3 total, 3 up, 3 in
Nov 24 20:55:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:32.924+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:32 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:33.239+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:33 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:33 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:33 compute-0 ceph-mon[75677]: pgmap v2218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 20:55:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:33.889+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:33 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:34.192+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:34 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 916 B/s wr, 18 op/s
Nov 24 20:55:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:34.854+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:34 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:34 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:35.178+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:35 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:55:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
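[Annotation] The pg_autoscaler lines above expose its sizing rule: a pool's raw PG target is its used-space ratio times its bias times a cluster-wide PG budget, and every logged target is reproduced with a budget of 300, consistent with the default mon_target_pg_per_osd of 100 across the 3 OSDs this osdmap reports ("3 total, 3 up, 3 in"). A sketch replaying the logged numbers (the 300 budget is inferred from them, not stated in the log):

# (pool, used-space ratio, bias) copied from the entries above.
POOLS = [
    (".mgr",               7.185749983720779e-06,  1.0),
    ("vms",                0.0008637525843263658,  1.0),
    ("images",             0.000665858301588852,   1.0),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
    ("default.rgw.log",    2.1620840658982875e-06, 1.0),
]
PG_BUDGET = 300  # inferred: mon_target_pg_per_osd (100) x 3 OSDs

for name, ratio, bias in POOLS:
    print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
    # e.g. .mgr -> ~0.0021557, vms -> ~0.2591, cephfs.cephfs.meta -> ~0.0006105,
    # matching the targets logged above.

Targets this far below each pool's current pg_num are then quantized so that pg_num stays where it is, which is why every pool above reads "quantized to N (current N)".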
Nov 24 20:55:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:35.807+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:35 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:35 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:35 compute-0 ceph-mon[75677]: pgmap v2219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 916 B/s wr, 18 op/s
Nov 24 20:55:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:36.131+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:36 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Nov 24 20:55:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:36.759+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:36 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:36 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:36 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:37.146+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:37 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:37.791+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:37 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:37 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:37 compute-0 ceph-mon[75677]: pgmap v2220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Nov 24 20:55:37 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:38.144+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:38 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Nov 24 20:55:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:38.795+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:38 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:38 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:39.147+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:39 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:39.753+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:39 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:39 compute-0 podman[307862]: 2025-11-24 20:55:39.891534481 +0000 UTC m=+0.115250878 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:55:39 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:39 compute-0 ceph-mon[75677]: pgmap v2221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Nov 24 20:55:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:40.187+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:40 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:55:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:55:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:40.782+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:40 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:40 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:40 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:41.182+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:41 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:41.790+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:41 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:41 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3861 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:41 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:41 compute-0 ceph-mon[75677]: pgmap v2222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:41 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:41 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3861 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
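[Annotation] The blocked-for ages in these SLOW_OPS health updates are self-consistent and pin down when the oldest op got stuck: all three figures resolve to the same onset time to within a second. A quick check of the arithmetic:

from datetime import datetime, timedelta

# (log timestamp, reported age in seconds) from the health updates above.
for stamp, blocked in [("2025-11-24 20:55:27", 3847),
                       ("2025-11-24 20:55:36", 3857),
                       ("2025-11-24 20:55:41", 3861)]:
    t = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
    print(stamp, "->", t - timedelta(seconds=blocked))
# 2025-11-24 20:55:27 -> 2025-11-24 19:51:20
# 2025-11-24 20:55:36 -> 2025-11-24 19:51:19
# 2025-11-24 20:55:41 -> 2025-11-24 19:51:20

So the oldest op on osd.0/osd.1 has been blocked since roughly 19:51:19, more than an hour before these entries.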
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.970649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017741970936, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 992, "num_deletes": 377, "total_data_size": 868792, "memory_usage": 887664, "flush_reason": "Manual Compaction"}
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017741981748, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 853627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64685, "largest_seqno": 65676, "table_properties": {"data_size": 849070, "index_size": 1760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15391, "raw_average_key_size": 22, "raw_value_size": 837940, "raw_average_value_size": 1234, "num_data_blocks": 76, "num_entries": 679, "num_filter_entries": 679, "num_deletions": 377, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017697, "oldest_key_time": 1764017697, "file_creation_time": 1764017741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 10941 microseconds, and 6929 cpu microseconds.
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.981805) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 853627 bytes OK
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.981831) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.984026) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.984046) EVENT_LOG_v1 {"time_micros": 1764017741984039, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.984067) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 863271, prev total WAL file size 863271, number of live WAL files 2.
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.984841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(833KB)], [137(11MB)]
Nov 24 20:55:41 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017741984874, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12733131, "oldest_snapshot_seqno": -1}
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 13269 keys, 11179456 bytes, temperature: kUnknown
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017742086503, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 11179456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11105130, "index_size": 40037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33221, "raw_key_size": 366616, "raw_average_key_size": 27, "raw_value_size": 10875644, "raw_average_value_size": 819, "num_data_blocks": 1462, "num_entries": 13269, "num_filter_entries": 13269, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.086782) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 11179456 bytes
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.089964) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.2 rd, 109.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(28.0) write-amplify(13.1) OK, records in: 14038, records dropped: 769 output_compression: NoCompression
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.089980) EVENT_LOG_v1 {"time_micros": 1764017742089972, "job": 84, "event": "compaction_finished", "compaction_time_micros": 101736, "compaction_time_cpu_micros": 50203, "output_level": 6, "num_output_files": 1, "total_output_size": 11179456, "num_input_records": 14038, "num_output_records": 13269, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017742090239, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017742092380, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:41.984750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.092432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.092435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.092437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.092438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:55:42 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:55:42.092440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:55:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:42.207+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:42 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:42.765+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:42 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:42 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:43.211+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:43 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:43.750+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:43 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:43 compute-0 ceph-mon[75677]: pgmap v2223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:43 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:44.211+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:44 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:44.785+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:44 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:44 compute-0 podman[307883]: 2025-11-24 20:55:44.915865914 +0000 UTC m=+0.140691072 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 20:55:44 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:45.208+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:45 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:45.754+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:45 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:46 compute-0 ceph-mon[75677]: pgmap v2224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:46 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:46.230+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:46 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:46.709+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:46 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3867 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:47 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:47.214+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:47 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:47.666+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:47 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:48 compute-0 ceph-mon[75677]: pgmap v2225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:48 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3867 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:48 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:48.195+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:48 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:48.683+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:48 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:49.221+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:49 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:49 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:49.699+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:49 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:50.250+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:50 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:50 compute-0 ceph-mon[75677]: pgmap v2226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:50 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:50.736+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:50 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:51.274+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:51 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:51 compute-0 sshd-session[307909]: Invalid user jenkins from 51.158.120.121 port 59862
Nov 24 20:55:51 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:51 compute-0 sshd-session[307909]: Received disconnect from 51.158.120.121 port 59862:11: Bye Bye [preauth]
Nov 24 20:55:51 compute-0 sshd-session[307909]: Disconnected from invalid user jenkins 51.158.120.121 port 59862 [preauth]
Nov 24 20:55:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:51.689+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:51 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:52.302+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:52 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:52 compute-0 ceph-mon[75677]: pgmap v2227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:52 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:52.715+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:52 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:53.340+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:53 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:53 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:53.696+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:53 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:54.338+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:54 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:55:54 compute-0 ceph-mon[75677]: pgmap v2228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:54 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:54.660+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:54 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:55.336+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:55 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:55 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:55.692+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:55 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:56.374+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:56 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:56 compute-0 ceph-mon[75677]: pgmap v2229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:56 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:56.646+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:56 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:56 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3871 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:55:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:57.368+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:57 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:57 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:57 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3871 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:55:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:57.696+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:57 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:58.377+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:58 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:58 compute-0 ceph-mon[75677]: pgmap v2230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:58 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:58.668+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:58 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:55:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:55:59.421+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:59 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:55:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:59 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:55:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:55:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:55:59.646+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:59 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:55:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:00.380+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:00 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:00 compute-0 ceph-mon[75677]: pgmap v2231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:00 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:00.655+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:00 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:00 compute-0 podman[307911]: 2025-11-24 20:56:00.83601597 +0000 UTC m=+0.065637355 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 20:56:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:01.420+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:01 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:01 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:01.692+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:01 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3881 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:02.391+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:02 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:02 compute-0 ceph-mon[75677]: pgmap v2232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:02 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:02 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3881 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:02.706+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:02 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:03.412+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:03 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:03 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:03 compute-0 ceph-mon[75677]: pgmap v2233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:03.677+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:03 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:04 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:04.461+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:04.647+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:04 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:04 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:05 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:05.477+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:05 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:05 compute-0 ceph-mon[75677]: pgmap v2234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:05.672+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:05 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:06 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:06.518+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:06.671+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:06 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:06 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:07 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:07.483+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3886 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:07.700+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:07 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:07 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:07 compute-0 ceph-mon[75677]: pgmap v2235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:08 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:08.469+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:08.683+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:08 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:08 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:08 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3886 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:08 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:56:09.412 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:56:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:56:09.412 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:56:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:56:09.413 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:56:09 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:09.435+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:09.649+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:09 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:09 compute-0 ceph-mon[75677]: pgmap v2236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:09 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:10 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:10.447+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:10.633+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:10 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:10 compute-0 podman[307931]: 2025-11-24 20:56:10.855812386 +0000 UTC m=+0.076607179 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 24 20:56:11 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:11.403+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:11 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:11.646+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:11 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:12 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:12.440+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:12 compute-0 ceph-mon[75677]: pgmap v2237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:12 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:12.625+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:12 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:13 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:13.454+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:13 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:13.648+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:13 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:14 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:14.468+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:14 compute-0 ceph-mon[75677]: pgmap v2238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:14 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:14.672+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:14 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:15.513+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:15 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:15 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:15.714+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:15 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:15 compute-0 podman[307949]: 2025-11-24 20:56:15.971857061 +0000 UTC m=+0.194856867 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 20:56:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:56:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723550010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:56:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:56:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723550010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:56:16 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:16.518+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:16 compute-0 ceph-mon[75677]: pgmap v2239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:16 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3723550010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:56:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3723550010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:56:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:16.682+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:16 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:16 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3891 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:17 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:17.564+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:17 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:17 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3891 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:17.713+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:17 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:18 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:18.528+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:18 compute-0 ceph-mon[75677]: pgmap v2240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:18 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:18.674+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:18 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:19.562+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:19 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:19 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:19.722+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:19 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:20.551+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:20 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:20 compute-0 ceph-mon[75677]: pgmap v2241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:20 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:20.721+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:20 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:21.537+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:21 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:21.700+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:21 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:21 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:21 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3901 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:22.549+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:22 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:22.658+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:22 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:22 compute-0 ceph-mon[75677]: pgmap v2242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:22 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:22 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3901 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:23.537+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:23 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:23.611+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:23 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:23 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:23 compute-0 ceph-mon[75677]: pgmap v2243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:56:24
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', '.mgr', 'vms', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:56:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:24.573+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:24 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:24.623+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:24 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:24 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:25.523+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:25 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:25.657+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:25 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:25 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:25 compute-0 ceph-mon[75677]: pgmap v2244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:26.507+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:26 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:26.613+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:26 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:26 compute-0 sudo[307976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:26 compute-0 sudo[307976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:26 compute-0 sudo[307976]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:26 compute-0 sudo[308001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:56:26 compute-0 sudo[308001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:26 compute-0 sudo[308001]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:26 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:26 compute-0 sudo[308026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:26 compute-0 sudo[308026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:26 compute-0 sudo[308026]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:26 compute-0 sudo[308051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:56:26 compute-0 sudo[308051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:26 compute-0 sshd-session[308076]: error: kex_exchange_identification: read: Connection reset by peer
Nov 24 20:56:26 compute-0 sshd-session[308076]: Connection reset by 158.222.23.245 port 34272
Nov 24 20:56:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:27.492+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:27 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:27 compute-0 sshd-session[308077]: Invalid user a from 158.222.23.245 port 34390
Nov 24 20:56:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:27.597+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:27 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:27 compute-0 sshd-session[308077]: Connection closed by invalid user a 158.222.23.245 port 34390 [preauth]
Nov 24 20:56:27 compute-0 sudo[308051]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:56:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev da1d8097-2cfb-4c46-996b-dc1124791ad9 does not exist
Nov 24 20:56:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fb3265d7-886f-4ff8-9ce4-0eb7817a8700 does not exist
Nov 24 20:56:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9b778ef2-94f3-476c-9e04-dd5b8137c1a8 does not exist
Nov 24 20:56:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3906 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:27 compute-0 sudo[308110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:27 compute-0 sudo[308110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:27 compute-0 sudo[308110]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:27 compute-0 sudo[308135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:56:27 compute-0 sudo[308135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:27 compute-0 sudo[308135]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:27 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:27 compute-0 ceph-mon[75677]: pgmap v2245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:56:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:56:27 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:56:27 compute-0 sudo[308160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:27 compute-0 sudo[308160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:27 compute-0 sudo[308160]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:28 compute-0 systemd[1]: Starting dnf makecache...
Nov 24 20:56:28 compute-0 sudo[308185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:56:28 compute-0 sudo[308185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
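This is cephadm applying the default_drive_group OSD spec: the per-cluster copy of the cephadm binary wraps ceph-volume in a one-shot container and runs 'lvm batch' against three pre-created logical volumes. Of the flags, --no-auto disables ceph-volume's automatic fast/slow device sorting, --yes skips the interactive confirmation, and --no-systemd stops ceph-volume from creating its own units, since cephadm manages those itself. A dry-run sketch of the same call, which only reports what would be created:

    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --report --format json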
Nov 24 20:56:28 compute-0 dnf[308209]: Metadata cache refreshed recently.
Nov 24 20:56:28 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 20:56:28 compute-0 systemd[1]: Finished dnf makecache.
Nov 24 20:56:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:28.484+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:28 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.494971777 +0000 UTC m=+0.035331190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.588292355 +0000 UTC m=+0.128651758 container create 89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:56:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:28.640+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:28 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:28 compute-0 systemd[1]: Started libpod-conmon-89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e.scope.
Nov 24 20:56:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:56:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.865037251 +0000 UTC m=+0.405396734 container init 89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.878121423 +0000 UTC m=+0.418480826 container start 89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:56:28 compute-0 gifted_cartwright[308268]: 167 167
Nov 24 20:56:28 compute-0 systemd[1]: libpod-89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e.scope: Deactivated successfully.
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.89065806 +0000 UTC m=+0.431017463 container attach 89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.891008049 +0000 UTC m=+0.431367422 container died 89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cartwright, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:56:28 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:28 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3906 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-83fe453de540a08724d24527e394a8af0ead926f7f9287442478b38176942a6a-merged.mount: Deactivated successfully.
Nov 24 20:56:28 compute-0 podman[308252]: 2025-11-24 20:56:28.998645682 +0000 UTC m=+0.539005075 container remove 89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cartwright, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:56:29 compute-0 systemd[1]: libpod-conmon-89c238e2e7c884ac1043cffff990e763a3dc379711f815f94a6be50fde41103e.scope: Deactivated successfully.
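The whole gifted_cartwright lifecycle, create through remove, spans about half a second; this is cephadm's normal pattern of short-lived helper containers. Its only output, '167 167', is the uid and gid of the ceph user baked into the image, which cephadm appears to probe so it can chown host directories to match. Roughly equivalent by hand (a hypothetical invocation, not taken from this log):

    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph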
Nov 24 20:56:29 compute-0 podman[308291]: 2025-11-24 20:56:29.226067823 +0000 UTC m=+0.060781435 container create be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:56:29 compute-0 systemd[1]: Started libpod-conmon-be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248.scope.
Nov 24 20:56:29 compute-0 podman[308291]: 2025-11-24 20:56:29.203420285 +0000 UTC m=+0.038133897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:56:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e85d6dfe76ad219fb9f816c595ffa9d16897bdf5581484e24a00f6d707b9ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e85d6dfe76ad219fb9f816c595ffa9d16897bdf5581484e24a00f6d707b9ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e85d6dfe76ad219fb9f816c595ffa9d16897bdf5581484e24a00f6d707b9ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e85d6dfe76ad219fb9f816c595ffa9d16897bdf5581484e24a00f6d707b9ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e85d6dfe76ad219fb9f816c595ffa9d16897bdf5581484e24a00f6d707b9ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
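These kernel lines are informational: the XFS filesystem backing the overlay mounts was created without the bigtime feature, so its inode timestamps cap at 2038-01-19 (0x7fffffff). Whether a given filesystem has the feature can be checked with xfs_info, assuming an xfsprogs new enough to report the field:

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'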
Nov 24 20:56:29 compute-0 podman[308291]: 2025-11-24 20:56:29.327834997 +0000 UTC m=+0.162548569 container init be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:56:29 compute-0 podman[308291]: 2025-11-24 20:56:29.340174469 +0000 UTC m=+0.174888041 container start be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:56:29 compute-0 podman[308291]: 2025-11-24 20:56:29.344923266 +0000 UTC m=+0.179636878 container attach be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:56:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:29.487+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:29 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:29.677+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:29 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:29 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:29 compute-0 ceph-mon[75677]: pgmap v2246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:29 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:30 compute-0 objective_rhodes[308307]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:56:30 compute-0 objective_rhodes[308307]: --> relative data size: 1.0
Nov 24 20:56:30 compute-0 objective_rhodes[308307]: --> All data devices are unavailable
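objective_rhodes is the containerized 'lvm batch' run prepared at 20:56:28, and this is its verdict: it sees the three LVM data devices but rejects all of them as unavailable, which is what ceph-volume reports when the LVs already carry ceph.* OSD tags (the lvm list output further down confirms osd ids 0-2 live on exactly these LVs). The batch therefore creates nothing and exits cleanly. The per-device availability and reject reasons can be inspected with the inventory subcommand; a sketch:

    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- inventory --format json
    # each device entry reports its availability plus the reasons it was rejected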
Nov 24 20:56:30 compute-0 systemd[1]: libpod-be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248.scope: Deactivated successfully.
Nov 24 20:56:30 compute-0 podman[308291]: 2025-11-24 20:56:30.449873059 +0000 UTC m=+1.284586671 container died be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 20:56:30 compute-0 systemd[1]: libpod-be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248.scope: Consumed 1.064s CPU time.
Nov 24 20:56:30 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:30.477+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-21e85d6dfe76ad219fb9f816c595ffa9d16897bdf5581484e24a00f6d707b9ce-merged.mount: Deactivated successfully.
Nov 24 20:56:30 compute-0 podman[308291]: 2025-11-24 20:56:30.51952854 +0000 UTC m=+1.354242142 container remove be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:56:30 compute-0 systemd[1]: libpod-conmon-be5eb9c91dd242a965b9b9a13f1b41f4927b96b7e1c035a421366b13273b4248.scope: Deactivated successfully.
Nov 24 20:56:30 compute-0 sudo[308185]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:30 compute-0 sudo[308350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:30 compute-0 sudo[308350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:30.676+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:30 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:30 compute-0 sudo[308350]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:30 compute-0 sudo[308375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:56:30 compute-0 sudo[308375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:30 compute-0 sudo[308375]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:30 compute-0 sudo[308400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:30 compute-0 sudo[308400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:30 compute-0 sudo[308400]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:30 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:30 compute-0 sudo[308426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:56:30 compute-0 sudo[308426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
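Having created nothing, cephadm falls back to enumerating what already exists: 'ceph-volume lvm list --format json' reads the ceph.* tags off the logical volumes and emits the per-OSD JSON that tender_bohr prints below. The raw tags are also visible straight from LVM on the host; a sketch:

    lvs -o lv_name,vg_name,lv_tags --reportformat json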
Nov 24 20:56:31 compute-0 podman[308424]: 2025-11-24 20:56:31.008111279 +0000 UTC m=+0.098152619 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
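Interleaved with the Ceph work, podman runs the periodic healthcheck configured for the ovn_metadata_agent container (the '/openstack/healthcheck' test from its config_data) and logs health_status=healthy with a failing streak of 0. The same check can be fired on demand; exit status 0 means healthy:

    podman healthcheck run ovn_metadata_agent && echo healthy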
Nov 24 20:56:31 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:31.475+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.481480639 +0000 UTC m=+0.077777281 container create f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.449690895 +0000 UTC m=+0.045987627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:56:31 compute-0 systemd[1]: Started libpod-conmon-f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7.scope.
Nov 24 20:56:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.652824643 +0000 UTC m=+0.249121375 container init f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.665027702 +0000 UTC m=+0.261324384 container start f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:56:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:31.672+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:31 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:31 compute-0 elastic_dijkstra[308526]: 167 167
Nov 24 20:56:31 compute-0 systemd[1]: libpod-f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7.scope: Deactivated successfully.
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.683646521 +0000 UTC m=+0.279943243 container attach f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.684880645 +0000 UTC m=+0.281177327 container died f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fea636b5d6ca5b2efb7bdde130df60e94d36be039cbbc371ea1522ea141885a2-merged.mount: Deactivated successfully.
Nov 24 20:56:31 compute-0 podman[308510]: 2025-11-24 20:56:31.739670837 +0000 UTC m=+0.335967519 container remove f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 20:56:31 compute-0 systemd[1]: libpod-conmon-f7a3a279c0bf2e003b0b7f3a730a3796aa06a7e1991d91fdfe9f883b51946cc7.scope: Deactivated successfully.
Nov 24 20:56:31 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:31 compute-0 ceph-mon[75677]: pgmap v2247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:31 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:32.000169467 +0000 UTC m=+0.071498492 container create 4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 20:56:32 compute-0 systemd[1]: Started libpod-conmon-4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681.scope.
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:31.968860546 +0000 UTC m=+0.040189601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e26959c27941c668e98b78723376bda5015c3405a6129b9402b0a6e23ec4a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e26959c27941c668e98b78723376bda5015c3405a6129b9402b0a6e23ec4a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e26959c27941c668e98b78723376bda5015c3405a6129b9402b0a6e23ec4a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e26959c27941c668e98b78723376bda5015c3405a6129b9402b0a6e23ec4a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:32.116053211 +0000 UTC m=+0.187382266 container init 4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:32.131351032 +0000 UTC m=+0.202680027 container start 4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bohr, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:32.136301985 +0000 UTC m=+0.207631050 container attach 4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:56:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:32.429+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:32 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:32.659+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:32 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:32 compute-0 tender_bohr[308566]: {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:     "0": [
Nov 24 20:56:32 compute-0 tender_bohr[308566]:         {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "devices": [
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "/dev/loop3"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             ],
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_name": "ceph_lv0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_size": "21470642176",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "name": "ceph_lv0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "tags": {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cluster_name": "ceph",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.crush_device_class": "",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.encrypted": "0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osd_id": "0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.type": "block",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.vdo": "0"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             },
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "type": "block",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "vg_name": "ceph_vg0"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:         }
Nov 24 20:56:32 compute-0 tender_bohr[308566]:     ],
Nov 24 20:56:32 compute-0 tender_bohr[308566]:     "1": [
Nov 24 20:56:32 compute-0 tender_bohr[308566]:         {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "devices": [
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "/dev/loop4"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             ],
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_name": "ceph_lv1",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_size": "21470642176",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "name": "ceph_lv1",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "tags": {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cluster_name": "ceph",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.crush_device_class": "",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.encrypted": "0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osd_id": "1",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.type": "block",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.vdo": "0"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             },
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "type": "block",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "vg_name": "ceph_vg1"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:         }
Nov 24 20:56:32 compute-0 tender_bohr[308566]:     ],
Nov 24 20:56:32 compute-0 tender_bohr[308566]:     "2": [
Nov 24 20:56:32 compute-0 tender_bohr[308566]:         {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "devices": [
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "/dev/loop5"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             ],
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_name": "ceph_lv2",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_size": "21470642176",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "name": "ceph_lv2",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "tags": {
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.cluster_name": "ceph",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.crush_device_class": "",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.encrypted": "0",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osd_id": "2",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.type": "block",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:                 "ceph.vdo": "0"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             },
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "type": "block",
Nov 24 20:56:32 compute-0 tender_bohr[308566]:             "vg_name": "ceph_vg2"
Nov 24 20:56:32 compute-0 tender_bohr[308566]:         }
Nov 24 20:56:32 compute-0 tender_bohr[308566]:     ]
Nov 24 20:56:32 compute-0 tender_bohr[308566]: }
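The listing confirms three block-type OSDs, ids 0 through 2, one per ceph_vgN/ceph_lvN volume, each backed by a loop device and tagged osdspec_affinity=default_drive_group, i.e. the same spec the batch run tried to apply, which is exactly why it found no available devices. A one-line summary of the same JSON (a sketch, assuming jq is on the host):

    ceph-volume lvm list --format json | \
        jq -r 'to_entries[] | "osd.\(.key) \(.value[0].devices[0]) \(.value[0].tags["ceph.osd_fsid"])"'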
Nov 24 20:56:32 compute-0 systemd[1]: libpod-4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681.scope: Deactivated successfully.
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:32.899055501 +0000 UTC m=+0.970384506 container died 4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bohr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e26959c27941c668e98b78723376bda5015c3405a6129b9402b0a6e23ec4a7-merged.mount: Deactivated successfully.
Nov 24 20:56:32 compute-0 podman[308549]: 2025-11-24 20:56:32.967519431 +0000 UTC m=+1.038848426 container remove 4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:56:32 compute-0 systemd[1]: libpod-conmon-4bb7b2ea5079058fb568c86c63326e36ab430560c6398f20750561ef67dc2681.scope: Deactivated successfully.
Nov 24 20:56:33 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:33 compute-0 sudo[308426]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:33 compute-0 sudo[308589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:33 compute-0 sudo[308589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:33 compute-0 sudo[308589]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:33 compute-0 sudo[308614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:56:33 compute-0 sudo[308614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:33 compute-0 sudo[308614]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:33 compute-0 sudo[308639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:33 compute-0 sudo[308639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:33 compute-0 sudo[308639]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:33 compute-0 sudo[308664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:56:33 compute-0 sudo[308664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
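[editor's note] For anyone tracing the orchestration here: the sudo command above is the cephadm orchestrator re-invoking the per-host copy of cephadm it ships under /var/lib/ceph/<fsid>/, which in turn launches the short-lived quay.io/ceph/ceph containers seen throughout this section (tender_bohr, charming_goldstine, reverent_heyrovsky) to run ceph-volume inside the image. A minimal sketch of reproducing that inventory call by hand, with the fsid, script path, and image digest copied verbatim from the command above (assumes root on compute-0 and that the logged cephadm copy is still present):

    # Sketch: re-run the ceph-volume inventory the sudo line above triggers.
    # FSID, CEPHADM path and IMAGE digest are copied verbatim from the log.
    import json
    import subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=4))

The JSON it prints should match the payload container reverent_heyrovsky emits a few seconds later in this log.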
Nov 24 20:56:33 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:33.448+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:33.672+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:33 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:33 compute-0 podman[308728]: 2025-11-24 20:56:33.870132316 +0000 UTC m=+0.030996214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:56:34 compute-0 podman[308728]: 2025-11-24 20:56:34.060741607 +0000 UTC m=+0.221605445 container create 26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 20:56:34 compute-0 ceph-mon[75677]: pgmap v2248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:34 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:34 compute-0 systemd[1]: Started libpod-conmon-26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815.scope.
Nov 24 20:56:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:56:34 compute-0 podman[308728]: 2025-11-24 20:56:34.272290322 +0000 UTC m=+0.433154130 container init 26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 20:56:34 compute-0 podman[308728]: 2025-11-24 20:56:34.283451481 +0000 UTC m=+0.444315309 container start 26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:56:34 compute-0 charming_goldstine[308744]: 167 167
Nov 24 20:56:34 compute-0 systemd[1]: libpod-26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815.scope: Deactivated successfully.
Nov 24 20:56:34 compute-0 conmon[308744]: conmon 26d37d2bf5820a51494e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815.scope/container/memory.events
Nov 24 20:56:34 compute-0 podman[308728]: 2025-11-24 20:56:34.310255142 +0000 UTC m=+0.471118970 container attach 26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:56:34 compute-0 podman[308728]: 2025-11-24 20:56:34.311355181 +0000 UTC m=+0.472218999 container died 26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 20:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9cfa76d2e5193c968273d985fba82eb148ba40347bcd3a41de21bf8aede2fb0-merged.mount: Deactivated successfully.
Nov 24 20:56:34 compute-0 podman[308728]: 2025-11-24 20:56:34.446312858 +0000 UTC m=+0.607176646 container remove 26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:56:34 compute-0 systemd[1]: libpod-conmon-26d37d2bf5820a51494e7ffa56963cb8af8df1c771cf25ab66b00751cc478815.scope: Deactivated successfully.
Nov 24 20:56:34 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:34.478+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:34.710+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:34 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:34 compute-0 podman[308770]: 2025-11-24 20:56:34.65997703 +0000 UTC m=+0.040607773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:56:34 compute-0 podman[308770]: 2025-11-24 20:56:34.734245785 +0000 UTC m=+0.114876458 container create ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 20:56:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:34 compute-0 systemd[1]: Started libpod-conmon-ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b.scope.
Nov 24 20:56:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86aac82369ab73447799f26f1d3d3ffdfaaaf4e33f8e1ec5c98cd7de3507979c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86aac82369ab73447799f26f1d3d3ffdfaaaf4e33f8e1ec5c98cd7de3507979c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86aac82369ab73447799f26f1d3d3ffdfaaaf4e33f8e1ec5c98cd7de3507979c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86aac82369ab73447799f26f1d3d3ffdfaaaf4e33f8e1ec5c98cd7de3507979c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:56:34 compute-0 podman[308770]: 2025-11-24 20:56:34.849846021 +0000 UTC m=+0.230476744 container init ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:56:34 compute-0 podman[308770]: 2025-11-24 20:56:34.862196884 +0000 UTC m=+0.242827557 container start ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 20:56:34 compute-0 podman[308770]: 2025-11-24 20:56:34.86615533 +0000 UTC m=+0.246786023 container attach ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:56:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
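[editor's note] The _maybe_adjust pass above can be reproduced from the logged numbers alone: each pool's raw PG target is its fraction of cluster capacity times its bias times a cluster PG budget, then quantized to a power of two. The budget implied by these lines is 300 (for 'vms', 0.0008637525843263658 x 300 = 0.2591257752979..., matching the logged target), consistent with the default mon_target_pg_per_osd of 100 across this host's three OSDs; that constant is an inference from the arithmetic, not something the log states. A sketch:

    # Sketch of the pg_autoscaler arithmetic implied by the lines above.
    # PG_BUDGET = 300 is inferred from the logged ratios; the module itself
    # derives it from mon_target_pg_per_osd (default 100) times the OSD count.
    PG_BUDGET = 300

    def raw_pg_target(capacity_ratio: float, bias: float = 1.0) -> float:
        return capacity_ratio * bias * PG_BUDGET

    def nearest_power_of_two(x: float) -> int:
        p = 1
        while p < x:
            p *= 2
        return p

    # 'vms': matches the logged pg target (up to float rounding)
    print(raw_pg_target(0.0008637525843263658))
    # 'cephfs.cephfs.meta' with bias 4.0: matches "pg target 0.0006104707950771635"
    print(raw_pg_target(5.087256625643029e-07, bias=4.0))
    # The module then clamps to the pool's pg_num_min and only acts when the
    # rounded target differs from the current pg_num by a large factor, which
    # is why every pool above stays at its current 16 or 32.
    print(nearest_power_of_two(0.26), nearest_power_of_two(200))  # -> 1 256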
Nov 24 20:56:35 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:35.517+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:35 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:35.727+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:35 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]: {
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "osd_id": 2,
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "type": "bluestore"
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:     },
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "osd_id": 1,
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "type": "bluestore"
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:     },
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "osd_id": 0,
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:         "type": "bluestore"
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]:     }
Nov 24 20:56:35 compute-0 reverent_heyrovsky[308787]: }
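[editor's note] That JSON block is the answer to the ceph-volume raw list issued at 20:56:33: one entry per BlueStore OSD, keyed by osd_uuid, mapping this host's three OSDs onto their LVM devices. A minimal parsing sketch, with field names exactly as logged and the payload trimmed to one of the three entries for brevity:

    # Sketch: extract an osd_id -> device table from the `raw list` JSON above.
    import json

    payload = """
    {
        "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
            "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
            "type": "bluestore"
        }
    }
    """

    for osd_uuid, info in sorted(json.loads(payload).items(),
                                 key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  {info['type']}")
    # -> osd.0  /dev/mapper/ceph_vg0-ceph_lv0  bluestore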
Nov 24 20:56:35 compute-0 systemd[1]: libpod-ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b.scope: Deactivated successfully.
Nov 24 20:56:35 compute-0 podman[308770]: 2025-11-24 20:56:35.993513924 +0000 UTC m=+1.374144607 container died ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heyrovsky, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:56:35 compute-0 systemd[1]: libpod-ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b.scope: Consumed 1.129s CPU time.
Nov 24 20:56:36 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:36.546+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:36.759+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:36 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3911 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:37 compute-0 ceph-mon[75677]: pgmap v2249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:37 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
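[editor's note] By this point the SLOW_OPS health check has been open for 3911 seconds against osd.0 and osd.1, and the same 17- and 21-op backlogs ('vms' reads, 'default.rgw.log' watch pings) have been re-reported every second throughout this section, so the stuck ops are not draining. A triage sketch using stock ceph CLI calls; it assumes it runs somewhere the cluster and the OSD admin sockets are reachable, e.g. inside cephadm shell on compute-0:

    # Sketch: triage the SLOW_OPS warning above with stock ceph CLI calls.
    import json
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    print(ceph("health", "detail"))  # names the SLOW_OPS daemons and pools
    # Admin-socket dump of in-flight ops; output is JSON by default.
    ops = json.loads(ceph("daemon", "osd.0", "dump_ops_in_flight"))
    for op in ops.get("ops", []):
        print(op.get("age"), op.get("description"))  # should echo the osd_op(...) lines above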
Nov 24 20:56:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-86aac82369ab73447799f26f1d3d3ffdfaaaf4e33f8e1ec5c98cd7de3507979c-merged.mount: Deactivated successfully.
Nov 24 20:56:37 compute-0 podman[308770]: 2025-11-24 20:56:37.348531083 +0000 UTC m=+2.729161726 container remove ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:56:37 compute-0 systemd[1]: libpod-conmon-ad4d31fe9b34cd6c19ee7a4a8410694dfa21cad23027876c7e48b6ba398cb26b.scope: Deactivated successfully.
Nov 24 20:56:37 compute-0 sudo[308664]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:56:37 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:37.535+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:56:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:56:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:56:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 24cc360c-bd84-4368-9ae5-00e0b05a516c does not exist
Nov 24 20:56:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 161e6f0e-1f28-4a34-ad64-cf156d5b7c3a does not exist
Nov 24 20:56:37 compute-0 sudo[308834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:56:37 compute-0 sudo[308834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:37 compute-0 sudo[308834]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:37 compute-0 sudo[308859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:56:37 compute-0 sudo[308859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:56:37 compute-0 sudo[308859]: pam_unix(sudo:session): session closed for user root
Nov 24 20:56:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:37.765+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:37 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:38 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:38 compute-0 ceph-mon[75677]: pgmap v2250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:38 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3911 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:38 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:56:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:56:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:38 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:38.531+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:38.722+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:38 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:39 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:39 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:39.579+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:39.679+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:39 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:40 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:40.575+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:56:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
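[editor's note] The rbd_support reload above walks each RBD pool in this log (vms, volumes, backups, images) looking for mirror-snapshot schedules; the rbd_trash_purge_schedule object named in osd.0's stuck op belongs to the companion trash-purge scheduler, which is why that blocked read sits in the 'vms' pool. Both schedule stores can be listed with the stock rbd CLI; a sketch using the pool names above:

    # Sketch: list the schedules the rbd_support module is (re)loading above.
    import subprocess

    for pool in ["vms", "volumes", "backups", "images"]:
        for sub in (["mirror", "snapshot", "schedule", "ls"],
                    ["trash", "purge", "schedule", "ls"]):
            # No check=True: `mirror snapshot schedule ls` errors on pools
            # without mirroring enabled, which is fine for a survey.
            r = subprocess.run(["rbd", *sub, "--pool", pool],
                               capture_output=True, text=True)
            print(pool, " ".join(sub), "->", r.stdout.strip() or "(none)")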
Nov 24 20:56:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:40.713+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:40 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:40 compute-0 ceph-mon[75677]: pgmap v2251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:40 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:41.566+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:41 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:56:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 66K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1817 writes, 10K keys, 1817 commit groups, 1.0 writes per commit group, ingest: 10.16 MB, 0.02 MB/s
                                           Interval WAL: 1818 writes, 1818 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     47.4      1.38              0.30        42    0.033       0      0       0.0       0.0
                                             L6      1/0   10.66 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   5.5    100.7     87.4      4.10              1.48        41    0.100    409K    24K       0.0       0.0
                                            Sum      1/0   10.66 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   6.5     75.4     77.3      5.48              1.78        83    0.066    409K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.5     59.7     59.2      1.50              0.38        16    0.094    111K   6086       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    100.7     87.4      4.10              1.48        41    0.100    409K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     47.5      1.38              0.30        41    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.41 GB write, 0.10 MB/s write, 0.40 GB read, 0.10 MB/s read, 5.5 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 304.00 MB usage: 41.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00056 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2747,38.83 MB,12.7733%) FilterBlock(84,1.11 MB,0.364961%) IndexBlock(84,1.45 MB,0.477831%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
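[editor's note] The mon's RocksDB dump above is the routine 600-second stats tick (4200 s uptime, 600 s interval) and reads as healthy: zero stalls, a single 10.66 MB L6 file, and roughly 0.02 MB/s of WAL traffic. The Sum-row write amplification of 6.5 can be roughly cross-checked against the flush totals, assuming W-Amp is approximately compaction bytes written over bytes flushed into the tree:

    # Rough cross-check of the Sum-row W-Amp (6.5) in the compaction table
    # above, assuming W-Amp ~= compaction GB written / GB flushed.
    compaction_write_gb = 0.41  # "Cumulative compaction: 0.41 GB write"
    flush_gb = 0.064            # "Flush(GB): cumulative 0.064"
    print(round(compaction_write_gb / flush_gb, 1))  # -> 6.4, near the logged 6.5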
Nov 24 20:56:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:41.732+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:41 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:41 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:41 compute-0 ceph-mon[75677]: pgmap v2252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:41 compute-0 podman[308884]: 2025-11-24 20:56:41.874260325 +0000 UTC m=+0.101421496 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Nov 24 20:56:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3921 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:42 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:42.547+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:42.755+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:42 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:42 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:42 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3921 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:43 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:43.591+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:43.769+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:43 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:43 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:43 compute-0 ceph-mon[75677]: pgmap v2253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:44 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:44.589+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:44.808+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:44 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:44 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:45 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:45.629+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:45.819+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:45 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:45 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:45 compute-0 ceph-mon[75677]: pgmap v2254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:46 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:46.627+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:46.813+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:46 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:46 compute-0 podman[308904]: 2025-11-24 20:56:46.888223207 +0000 UTC m=+0.112348640 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 20:56:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3927 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:47 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.145203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017807145264, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1229, "num_deletes": 422, "total_data_size": 1167839, "memory_usage": 1201496, "flush_reason": "Manual Compaction"}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017807293343, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 787488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65677, "largest_seqno": 66905, "table_properties": {"data_size": 782635, "index_size": 1799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 18579, "raw_average_key_size": 23, "raw_value_size": 770024, "raw_average_value_size": 982, "num_data_blocks": 78, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 422, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017742, "oldest_key_time": 1764017742, "file_creation_time": 1764017807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 148245 microseconds, and 3756 cpu microseconds.
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.293449) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 787488 bytes OK
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.293478) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.339693) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.339747) EVENT_LOG_v1 {"time_micros": 1764017807339733, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.339778) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 1161166, prev total WAL file size 1161166, number of live WAL files 2.
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.340885) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(769KB)], [140(10MB)]
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017807340975, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11966944, "oldest_snapshot_seqno": -1}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 13224 keys, 8922794 bytes, temperature: kUnknown
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017807468018, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 8922794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8852444, "index_size": 36189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33093, "raw_key_size": 365384, "raw_average_key_size": 27, "raw_value_size": 8627509, "raw_average_value_size": 652, "num_data_blocks": 1305, "num_entries": 13224, "num_filter_entries": 13224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.468707) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 8922794 bytes
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.470311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.1 rd, 70.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.7 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(26.5) write-amplify(11.3) OK, records in: 14053, records dropped: 829 output_compression: NoCompression
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.470345) EVENT_LOG_v1 {"time_micros": 1764017807470330, "job": 86, "event": "compaction_finished", "compaction_time_micros": 127120, "compaction_time_cpu_micros": 52789, "output_level": 6, "num_output_files": 1, "total_output_size": 8922794, "num_input_records": 14053, "num_output_records": 13224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017807470775, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017807474930, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.340794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.475053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.475063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.475066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.475069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:56:47 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:56:47.475073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:56:47 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:47.602+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:47.860+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:47 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:48 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:48 compute-0 ceph-mon[75677]: pgmap v2255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:48 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3927 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:48 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:48 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:48.612+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:48.844+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:48 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:49 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:49 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:49.614+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:49.855+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:49 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:50 compute-0 ceph-mon[75677]: pgmap v2256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:50 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:50 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:50.578+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:50.813+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:50 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:51 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:51 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:51.584+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:51.789+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:51 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:52 compute-0 ceph-mon[75677]: pgmap v2257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:52 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:52 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3932 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:52.631+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:52 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:52.791+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:52 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:53 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:53.582+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:53 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:53.747+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:53 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:54 compute-0 ceph-mon[75677]: pgmap v2258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:54 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:56:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:54.561+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:54 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:54.765+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:54 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:55 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:55.585+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:55 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:55.746+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:55 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:56 compute-0 ceph-mon[75677]: pgmap v2259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:56 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:56.563+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:56 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:56.729+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:56 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:56:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3936 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:57 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:57.581+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:57 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:57.735+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:57 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:58 compute-0 ceph-mon[75677]: pgmap v2260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:58 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3936 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:56:58 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:58.575+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:58 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:58.763+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:58 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:56:59 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:56:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:56:59.599+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:59 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:56:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:56:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:56:59.746+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:59 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:56:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:00 compute-0 ceph-mon[75677]: pgmap v2261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:00 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:00.589+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:00 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:00.757+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:00 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:01 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:01 compute-0 sshd-session[308930]: Received disconnect from 51.158.120.121 port 43706:11: Bye Bye [preauth]
Nov 24 20:57:01 compute-0 sshd-session[308930]: Disconnected from authenticating user root 51.158.120.121 port 43706 [preauth]
Nov 24 20:57:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:01.600+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:01 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:01.759+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:01 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:01 compute-0 podman[308932]: 2025-11-24 20:57:01.839570023 +0000 UTC m=+0.071770890 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 20:57:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:02 compute-0 ceph-mon[75677]: pgmap v2262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:02 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:02.629+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:02 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:02.797+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:02 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:03 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:03.661+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:03 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:03.833+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:03 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:04 compute-0 ceph-mon[75677]: pgmap v2263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:04 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:04.691+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:04 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:04.825+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:04 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:05 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:05.700+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:05 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:05.802+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:05 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:06 compute-0 ceph-mon[75677]: pgmap v2264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:06 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:06.742+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:06 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:06.762+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:06 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3941 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:07 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:07 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3941 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:07.721+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:07 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:07.727+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:07 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:08 compute-0 ceph-mon[75677]: pgmap v2265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:08 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:08.708+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:08 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:08.712+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:08 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:57:09.413 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:57:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:57:09.414 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:57:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:57:09.414 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:57:09 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:09.665+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:09 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:09.735+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:09 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:10.673+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:10 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:10.721+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:10 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:10 compute-0 ceph-mon[75677]: pgmap v2266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:10 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:11.695+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:11 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:11.751+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:11 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:12 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:12 compute-0 ceph-mon[75677]: pgmap v2267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3951 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:12.712+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:12 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:12.742+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:12 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:12 compute-0 podman[308951]: 2025-11-24 20:57:12.82878209 +0000 UTC m=+0.068960965 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:57:13 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:13 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3951 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:13 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:13.695+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:13 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:13.699+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:13 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:14 compute-0 ceph-mon[75677]: pgmap v2268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:14 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:14.653+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:14 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:14.748+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:14 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:15 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:15.641+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:15 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:15.714+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:15 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:16 compute-0 ceph-mon[75677]: pgmap v2269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:16 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:57:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2330846219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:57:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:57:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2330846219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:57:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:16.626+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:16 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:16.685+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:16 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3956 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2330846219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:57:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2330846219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:57:17 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:17.596+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:17 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:17.684+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:17 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:17 compute-0 podman[308971]: 2025-11-24 20:57:17.899155239 +0000 UTC m=+0.122224325 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 20:57:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:18.597+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:18 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:18.658+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:18 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:18 compute-0 ceph-mon[75677]: pgmap v2270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:18 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3956 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:18 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:19.632+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:19 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:19.685+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:19 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:19 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:19 compute-0 ceph-mon[75677]: pgmap v2271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:20.620+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:20 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:20.729+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:20 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:20 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:21.615+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:21 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:21.718+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:21 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:22 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:22 compute-0 ceph-mon[75677]: pgmap v2272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:22.609+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:22 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:22.720+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:22 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:23 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:23 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:23.648+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:23 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:23.770+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:23 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:24 compute-0 ceph-mon[75677]: pgmap v2273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:24 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:57:24
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'vms', 'backups', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:57:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:24.645+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:24 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:24.817+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:24 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:25 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:25.652+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:25 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:25.774+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:25 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:26 compute-0 ceph-mon[75677]: pgmap v2274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:26 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:26.643+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:26 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:26.755+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:26 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:27 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:27.657+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:27 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:27.724+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:27 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:28 compute-0 ceph-mon[75677]: pgmap v2275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:28 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:28 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:28.679+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:28 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:28.740+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:28 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:29 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:29.661+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:29 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:29.708+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:29 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:30 compute-0 ceph-mon[75677]: pgmap v2276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:30 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:30.662+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:30 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:30.663+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:30 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:31 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:31.637+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:31 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:31.687+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:31 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3971 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:32 compute-0 ceph-mon[75677]: pgmap v2277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:32 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:32 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3971 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:32.679+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:32 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:32.733+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:32 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:32 compute-0 podman[308999]: 2025-11-24 20:57:32.860890672 +0000 UTC m=+0.087472071 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 20:57:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:33 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:33.637+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:33 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:33.706+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:33 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:34 compute-0 ceph-mon[75677]: pgmap v2278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:34 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:34.666+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:34 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:34.677+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:34 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:35 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:57:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:57:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:35.676+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:35 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:35.710+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:35 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:36 compute-0 ceph-mon[75677]: pgmap v2279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:57:36 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:36.669+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:36 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:36.737+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:36 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3976 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:37 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:37.659+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:37 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:37.774+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:37 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:37 compute-0 sudo[309018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:37 compute-0 sudo[309018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:37 compute-0 sudo[309018]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:37 compute-0 sudo[309043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:57:37 compute-0 sudo[309043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:37 compute-0 sudo[309043]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:38 compute-0 sudo[309068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:38 compute-0 sudo[309068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:38 compute-0 sudo[309068]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:38 compute-0 sudo[309093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:57:38 compute-0 sudo[309093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:38 compute-0 ceph-mon[75677]: pgmap v2280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:38 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3976 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:38 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:38.618+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:38 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:38.756+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:38 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:38 compute-0 sudo[309093]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:57:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:57:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:57:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:57:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:57:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:57:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9bbab2b8-eeaa-402e-9e27-31ed89b5e3d0 does not exist
Nov 24 20:57:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4cbe2821-8ba0-4e8a-985c-491c5ea6469f does not exist
Nov 24 20:57:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7c63c724-f53a-450d-9ae0-4b20e56be56f does not exist
Nov 24 20:57:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:57:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:57:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:57:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:57:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:57:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:57:38 compute-0 sudo[309148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:38 compute-0 sudo[309148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:38 compute-0 sudo[309148]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:39 compute-0 sudo[309173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:57:39 compute-0 sudo[309173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:39 compute-0 sudo[309173]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:39 compute-0 sudo[309198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:39 compute-0 sudo[309198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:39 compute-0 sudo[309198]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:39 compute-0 sudo[309223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:57:39 compute-0 sudo[309223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:39 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:57:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:57:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:57:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:57:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:57:39 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:57:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:39.587+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:39 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:39 compute-0 podman[309287]: 2025-11-24 20:57:39.679685124 +0000 UTC m=+0.069178540 container create ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:57:39 compute-0 podman[309287]: 2025-11-24 20:57:39.651420444 +0000 UTC m=+0.040913850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:57:39 compute-0 systemd[1]: Started libpod-conmon-ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639.scope.
Nov 24 20:57:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:39.785+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:39 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:57:39 compute-0 podman[309287]: 2025-11-24 20:57:39.815979006 +0000 UTC m=+0.205472362 container init ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_napier, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:57:39 compute-0 podman[309287]: 2025-11-24 20:57:39.823672903 +0000 UTC m=+0.213166209 container start ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_napier, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:57:39 compute-0 zealous_napier[309303]: 167 167
Nov 24 20:57:39 compute-0 systemd[1]: libpod-ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639.scope: Deactivated successfully.
Nov 24 20:57:39 compute-0 podman[309287]: 2025-11-24 20:57:39.848767468 +0000 UTC m=+0.238260814 container attach ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:57:39 compute-0 podman[309287]: 2025-11-24 20:57:39.851114761 +0000 UTC m=+0.240608067 container died ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_napier, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dc9e8d7579715621eb3867af2160d4073578645d3c697ae3efc7db7313b160d-merged.mount: Deactivated successfully.
Nov 24 20:57:40 compute-0 podman[309287]: 2025-11-24 20:57:40.15324718 +0000 UTC m=+0.542740526 container remove ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_napier, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:57:40 compute-0 systemd[1]: libpod-conmon-ab1e5ed19e5463c954440f81c6219d0cc14f1b837d06e8dbc506637e52849639.scope: Deactivated successfully.
Nov 24 20:57:40 compute-0 podman[309327]: 2025-11-24 20:57:40.357190229 +0000 UTC m=+0.040393046 container create d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 20:57:40 compute-0 ceph-mon[75677]: pgmap v2281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:40 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:40 compute-0 systemd[1]: Started libpod-conmon-d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a.scope.
Nov 24 20:57:40 compute-0 podman[309327]: 2025-11-24 20:57:40.340369887 +0000 UTC m=+0.023572664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:57:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd31d89790070aa357591e2d83e5371cb42a773d89c01c6db82990352b139a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd31d89790070aa357591e2d83e5371cb42a773d89c01c6db82990352b139a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd31d89790070aa357591e2d83e5371cb42a773d89c01c6db82990352b139a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd31d89790070aa357591e2d83e5371cb42a773d89c01c6db82990352b139a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd31d89790070aa357591e2d83e5371cb42a773d89c01c6db82990352b139a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:40 compute-0 podman[309327]: 2025-11-24 20:57:40.472971661 +0000 UTC m=+0.156174488 container init d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 20:57:40 compute-0 podman[309327]: 2025-11-24 20:57:40.485840556 +0000 UTC m=+0.169043373 container start d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 24 20:57:40 compute-0 podman[309327]: 2025-11-24 20:57:40.490636225 +0000 UTC m=+0.173839042 container attach d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:57:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:40.563+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:40 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:57:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:57:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:40.805+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:40 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:41 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:41 compute-0 silly_stonebraker[309343]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:57:41 compute-0 silly_stonebraker[309343]: --> relative data size: 1.0
Nov 24 20:57:41 compute-0 silly_stonebraker[309343]: --> All data devices are unavailable
Nov 24 20:57:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:41.593+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:41 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:41 compute-0 systemd[1]: libpod-d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a.scope: Deactivated successfully.
Nov 24 20:57:41 compute-0 systemd[1]: libpod-d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a.scope: Consumed 1.064s CPU time.
Nov 24 20:57:41 compute-0 podman[309327]: 2025-11-24 20:57:41.601339042 +0000 UTC m=+1.284541859 container died d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccd31d89790070aa357591e2d83e5371cb42a773d89c01c6db82990352b139a4-merged.mount: Deactivated successfully.
Nov 24 20:57:41 compute-0 podman[309327]: 2025-11-24 20:57:41.72447349 +0000 UTC m=+1.407676287 container remove d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:57:41 compute-0 systemd[1]: libpod-conmon-d1323db4158f82b6ea37e4b4ffb93f89550bcfe54d6380704bedc504de93ad6a.scope: Deactivated successfully.
Nov 24 20:57:41 compute-0 sudo[309223]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:41.842+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:41 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:41 compute-0 sudo[309385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:41 compute-0 sudo[309385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:41 compute-0 sudo[309385]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:41 compute-0 sudo[309410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:57:41 compute-0 sudo[309410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:41 compute-0 sudo[309410]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:42 compute-0 sudo[309435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:42 compute-0 sudo[309435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:42 compute-0 sudo[309435]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:42 compute-0 sudo[309460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:57:42 compute-0 sudo[309460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:42 compute-0 ceph-mon[75677]: pgmap v2282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:42 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:42.623+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:42 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:42 compute-0 podman[309527]: 2025-11-24 20:57:42.610235282 +0000 UTC m=+0.030996664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:57:42 compute-0 podman[309527]: 2025-11-24 20:57:42.782172872 +0000 UTC m=+0.202934224 container create 806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 20:57:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:42.838+0000 7f1a67169640 -1 osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:42 compute-0 ceph-osd[89640]: osd.1 184 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:43 compute-0 systemd[1]: Started libpod-conmon-806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69.scope.
Nov 24 20:57:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:57:43 compute-0 podman[309527]: 2025-11-24 20:57:43.394162136 +0000 UTC m=+0.814923518 container init 806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 20:57:43 compute-0 podman[309527]: 2025-11-24 20:57:43.410594248 +0000 UTC m=+0.831355570 container start 806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 20:57:43 compute-0 crazy_rubin[309543]: 167 167
Nov 24 20:57:43 compute-0 systemd[1]: libpod-806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69.scope: Deactivated successfully.
Nov 24 20:57:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 24 20:57:43 compute-0 podman[309527]: 2025-11-24 20:57:43.534884628 +0000 UTC m=+0.955645970 container attach 806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 20:57:43 compute-0 podman[309527]: 2025-11-24 20:57:43.536116481 +0000 UTC m=+0.956877843 container died 806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 24 20:57:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 24 20:57:43 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 24 20:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-52f14ab2b24844fd4a70868488af924394ae73d6fc5e6fb3691f7a17ba09731a-merged.mount: Deactivated successfully.
Nov 24 20:57:43 compute-0 ceph-osd[88624]: osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:43.608+0000 7f2ca3ee7640 -1 osd.0 184 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:43 compute-0 podman[309527]: 2025-11-24 20:57:43.672560558 +0000 UTC m=+1.093321900 container remove 806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 24 20:57:43 compute-0 systemd[1]: libpod-conmon-806a0cbcf483124d5c29b653f803b5ba4a18d2858c6f19a93aea2bfa9c5d1c69.scope: Deactivated successfully.
Nov 24 20:57:43 compute-0 podman[309544]: 2025-11-24 20:57:43.786695015 +0000 UTC m=+0.728736734 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
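The health_status line above embeds the container's config_data label as a Python literal (single quotes, bare True), not JSON, so json.loads on it fails while ast.literal_eval parses it directly. A short sketch, using an abridged copy of that label (the full volume list is omitted here):

    import ast

    # Abridged from the multipathd config_data label above; quoting and
    # True/False are Python-literal style, hence ast.literal_eval.
    label = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
             "'net': 'host', 'privileged': True, 'restart': 'always'}")
    cfg = ast.literal_eval(label)
    print(cfg["privileged"], cfg["restart"])   # True always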
Nov 24 20:57:43 compute-0 ceph-osd[89640]: osd.1 185 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:43.869+0000 7f1a67169640 -1 osd.1 185 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:43 compute-0 podman[309588]: 2025-11-24 20:57:43.917883859 +0000 UTC m=+0.070097634 container create 035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:57:43 compute-0 systemd[1]: Started libpod-conmon-035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a.scope.
Nov 24 20:57:43 compute-0 podman[309588]: 2025-11-24 20:57:43.89744388 +0000 UTC m=+0.049657695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:57:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f205370c5a66da5c68453aa352011227067cbde834b2b57d5394aea110c6aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f205370c5a66da5c68453aa352011227067cbde834b2b57d5394aea110c6aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f205370c5a66da5c68453aa352011227067cbde834b2b57d5394aea110c6aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f205370c5a66da5c68453aa352011227067cbde834b2b57d5394aea110c6aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
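These xfs messages fire each time podman bind-mounts parts of the container overlay: the filesystem stores 32-bit second counters, so 0x7fffffff is the last representable timestamp. The cutoff date is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 is the largest signed 32-bit time_t.
    limit = 0x7fffffff
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00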
Nov 24 20:57:44 compute-0 podman[309588]: 2025-11-24 20:57:44.152319619 +0000 UTC m=+0.304533464 container init 035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hofstadter, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:57:44 compute-0 podman[309588]: 2025-11-24 20:57:44.167445576 +0000 UTC m=+0.319659391 container start 035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:57:44 compute-0 podman[309588]: 2025-11-24 20:57:44.224860909 +0000 UTC m=+0.377074724 container attach 035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:57:44 compute-0 ceph-mon[75677]: pgmap v2283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 24 20:57:44 compute-0 ceph-mon[75677]: osdmap e185: 3 total, 3 up, 3 in
Nov 24 20:57:44 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:44 compute-0 ceph-osd[88624]: osd.0 185 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:44.616+0000 7f2ca3ee7640 -1 osd.0 185 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 409 B/s wr, 9 op/s
Nov 24 20:57:44 compute-0 ceph-osd[89640]: osd.1 185 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:44.893+0000 7f1a67169640 -1 osd.1 185 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]: {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:     "0": [
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:         {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "devices": [
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "/dev/loop3"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             ],
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_name": "ceph_lv0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_size": "21470642176",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "name": "ceph_lv0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "tags": {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cluster_name": "ceph",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.crush_device_class": "",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.encrypted": "0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osd_id": "0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.type": "block",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.vdo": "0"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             },
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "type": "block",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "vg_name": "ceph_vg0"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:         }
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:     ],
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:     "1": [
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:         {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "devices": [
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "/dev/loop4"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             ],
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_name": "ceph_lv1",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_size": "21470642176",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "name": "ceph_lv1",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "tags": {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cluster_name": "ceph",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.crush_device_class": "",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.encrypted": "0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osd_id": "1",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.type": "block",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.vdo": "0"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             },
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "type": "block",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "vg_name": "ceph_vg1"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:         }
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:     ],
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:     "2": [
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:         {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "devices": [
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "/dev/loop5"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             ],
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_name": "ceph_lv2",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_size": "21470642176",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "name": "ceph_lv2",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "tags": {
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.cluster_name": "ceph",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.crush_device_class": "",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.encrypted": "0",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osd_id": "2",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.type": "block",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:                 "ceph.vdo": "0"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             },
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "type": "block",
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:             "vg_name": "ceph_vg2"
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:         }
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]:     ]
Nov 24 20:57:44 compute-0 optimistic_hofstadter[309605]: }
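This block is the `ceph-volume ... lvm list --format json` output requested by the cephadm call logged at 20:57:42: a JSON object keyed by OSD id, each entry listing the backing LV, its loop-device PV and the ceph.* tags. A sketch of extracting the id-to-device mapping, assuming the JSON above has been captured to a file (lvm_list.json is our placeholder name):

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # One LV per OSD in this cluster: id -> (LV path, osd_fsid, PV).
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"], lv["devices"][0])
    # 0 /dev/ceph_vg0/ceph_lv0 ca6a1aee-... /dev/loop3
    # 1 /dev/ceph_vg1/ceph_lv1 722822cb-... /dev/loop4
    # 2 /dev/ceph_vg2/ceph_lv2 720ccdfc-... /dev/loop5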
Nov 24 20:57:45 compute-0 systemd[1]: libpod-035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a.scope: Deactivated successfully.
Nov 24 20:57:45 compute-0 podman[309588]: 2025-11-24 20:57:45.013180992 +0000 UTC m=+1.165394797 container died 035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 20:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f205370c5a66da5c68453aa352011227067cbde834b2b57d5394aea110c6aa-merged.mount: Deactivated successfully.
Nov 24 20:57:45 compute-0 podman[309588]: 2025-11-24 20:57:45.102744128 +0000 UTC m=+1.254957903 container remove 035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 20:57:45 compute-0 systemd[1]: libpod-conmon-035eabcc97acf1b45365b30965e0942e3e3fd2336a96a0dd5c2ff5d753f73d7a.scope: Deactivated successfully.
Nov 24 20:57:45 compute-0 sudo[309460]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:45 compute-0 sudo[309629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:45 compute-0 sudo[309629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:45 compute-0 sudo[309629]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:45 compute-0 sudo[309654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:57:45 compute-0 sudo[309654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:45 compute-0 sudo[309654]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:45 compute-0 sudo[309679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:45 compute-0 sudo[309679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:45 compute-0 sudo[309679]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:45 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:45 compute-0 sudo[309704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:57:45 compute-0 sudo[309704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:45 compute-0 ceph-osd[88624]: osd.0 185 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:45.611+0000 7f2ca3ee7640 -1 osd.0 185 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:45.846+0000 7f1a67169640 -1 osd.1 185 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:45 compute-0 ceph-osd[89640]: osd.1 185 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.099654707 +0000 UTC m=+0.081905282 container create d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:57:46 compute-0 systemd[1]: Started libpod-conmon-d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56.scope.
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.066799415 +0000 UTC m=+0.049050120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.210914867 +0000 UTC m=+0.193165462 container init d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.224227095 +0000 UTC m=+0.206477700 container start d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.229165867 +0000 UTC m=+0.211416472 container attach d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 20:57:46 compute-0 hardcore_leakey[309787]: 167 167
Nov 24 20:57:46 compute-0 systemd[1]: libpod-d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56.scope: Deactivated successfully.
Nov 24 20:57:46 compute-0 conmon[309787]: conmon d2dec4c360cd5d3d952c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56.scope/container/memory.events
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.237694137 +0000 UTC m=+0.219944752 container died d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 20:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-526e17f45c04d9edc70fc3cdeef3763bdb9c391a3129e2ca841fab8bdc5214aa-merged.mount: Deactivated successfully.
Nov 24 20:57:46 compute-0 podman[309771]: 2025-11-24 20:57:46.302124408 +0000 UTC m=+0.284375023 container remove d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:57:46 compute-0 systemd[1]: libpod-conmon-d2dec4c360cd5d3d952c8cdf6d9c9b5c90e16cb2f061fbd8a2e20bc5b5e42a56.scope: Deactivated successfully.
Nov 24 20:57:46 compute-0 podman[309810]: 2025-11-24 20:57:46.538661034 +0000 UTC m=+0.072896790 container create 0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 20:57:46 compute-0 ceph-mon[75677]: pgmap v2285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 409 B/s wr, 9 op/s
Nov 24 20:57:46 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:46 compute-0 systemd[1]: Started libpod-conmon-0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6.scope.
Nov 24 20:57:46 compute-0 podman[309810]: 2025-11-24 20:57:46.513503208 +0000 UTC m=+0.047738964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:57:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 24 20:57:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 24 20:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c852fc627717f52974f2b909d1361cb015b5c24d3fad24e136f9fb70cabd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c852fc627717f52974f2b909d1361cb015b5c24d3fad24e136f9fb70cabd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c852fc627717f52974f2b909d1361cb015b5c24d3fad24e136f9fb70cabd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c852fc627717f52974f2b909d1361cb015b5c24d3fad24e136f9fb70cabd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:57:46 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 24 20:57:46 compute-0 podman[309810]: 2025-11-24 20:57:46.653839719 +0000 UTC m=+0.188075475 container init 0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 20:57:46 compute-0 ceph-osd[88624]: osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:46.654+0000 7f2ca3ee7640 -1 osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:46 compute-0 podman[309810]: 2025-11-24 20:57:46.662486031 +0000 UTC m=+0.196721777 container start 0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 24 20:57:46 compute-0 podman[309810]: 2025-11-24 20:57:46.672983703 +0000 UTC m=+0.207219439 container attach 0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ellis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:57:46 compute-0 ceph-osd[89640]: osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:46.806+0000 7f1a67169640 -1 osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 20:57:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:47 compute-0 ceph-osd[88624]: osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:47.631+0000 7f2ca3ee7640 -1 osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:47 compute-0 ceph-mon[75677]: osdmap e186: 3 total, 3 up, 3 in
Nov 24 20:57:47 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:47 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
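The SLOW_OPS figure is simply the sum of the two per-OSD counters repeating through this window: 17 delayed requests against pool 'vms' on osd.0 plus 21 against 'default.rgw.log' on osd.1 gives the 38 reported, blocked for ~3982 s. A small sketch of tallying those warnings from a journal excerpt (the regex is ours, written against the exact message format above):

    import re

    pat = re.compile(r"(\d+) slow requests .*?pool \[ '([^']+)' : (\d+) \]")

    lines = [
        "17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])",
        "21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])",
    ]
    total = 0
    for line in lines:
        m = pat.search(line)
        if m:
            print(m.group(2), m.group(3))
            total += int(m.group(1))
    print("total:", total)   # -> 38, matching the health check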
Nov 24 20:57:47 compute-0 charming_ellis[309826]: {
Nov 24 20:57:47 compute-0 charming_ellis[309826]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "osd_id": 2,
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "type": "bluestore"
Nov 24 20:57:47 compute-0 charming_ellis[309826]:     },
Nov 24 20:57:47 compute-0 charming_ellis[309826]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "osd_id": 1,
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "type": "bluestore"
Nov 24 20:57:47 compute-0 charming_ellis[309826]:     },
Nov 24 20:57:47 compute-0 charming_ellis[309826]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "osd_id": 0,
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:57:47 compute-0 charming_ellis[309826]:         "type": "bluestore"
Nov 24 20:57:47 compute-0 charming_ellis[309826]:     }
Nov 24 20:57:47 compute-0 charming_ellis[309826]: }
Nov 24 20:57:47 compute-0 ceph-osd[89640]: osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:47.763+0000 7f1a67169640 -1 osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:47 compute-0 systemd[1]: libpod-0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6.scope: Deactivated successfully.
Nov 24 20:57:47 compute-0 systemd[1]: libpod-0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6.scope: Consumed 1.138s CPU time.
Nov 24 20:57:47 compute-0 podman[309810]: 2025-11-24 20:57:47.795969319 +0000 UTC m=+1.330205085 container died 0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ellis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d2c852fc627717f52974f2b909d1361cb015b5c24d3fad24e136f9fb70cabd4-merged.mount: Deactivated successfully.
Nov 24 20:57:47 compute-0 podman[309810]: 2025-11-24 20:57:47.87675284 +0000 UTC m=+1.410988616 container remove 0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ellis, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:57:47 compute-0 systemd[1]: libpod-conmon-0933db4b6af2bf3866a102ba4e26c1c697c7288421d25ea1c752fd8cae5ff9c6.scope: Deactivated successfully.
Nov 24 20:57:47 compute-0 sudo[309704]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:57:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:57:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:57:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:57:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ca1ea2b1-da4c-4dfe-80bc-755057c668e6 does not exist
Nov 24 20:57:47 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 2b5841b1-a8ee-45b1-b369-e9998d50b85e does not exist
Nov 24 20:57:48 compute-0 sudo[309869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:57:48 compute-0 sudo[309869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:48 compute-0 sudo[309869]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:48 compute-0 sudo[309895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:57:48 compute-0 sudo[309895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:57:48 compute-0 sudo[309895]: pam_unix(sudo:session): session closed for user root
Nov 24 20:57:48 compute-0 podman[309893]: 2025-11-24 20:57:48.293116848 +0000 UTC m=+0.200854097 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:57:48 compute-0 ceph-mon[75677]: pgmap v2287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 20:57:48 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:57:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:57:48 compute-0 ceph-osd[88624]: osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:48.674+0000 7f2ca3ee7640 -1 osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:48.812+0000 7f1a67169640 -1 osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:48 compute-0 ceph-osd[89640]: osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 20:57:49 compute-0 ceph-osd[88624]: osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:49.692+0000 7f2ca3ee7640 -1 osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:49 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:49.861+0000 7f1a67169640 -1 osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:49 compute-0 ceph-osd[89640]: osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:50 compute-0 ceph-osd[88624]: osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:50.645+0000 7f2ca3ee7640 -1 osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:50 compute-0 ceph-mon[75677]: pgmap v2288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 20:57:50 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 24 20:57:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:50.840+0000 7f1a67169640 -1 osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:50 compute-0 ceph-osd[89640]: osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:51 compute-0 ceph-osd[88624]: osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:51.670+0000 7f2ca3ee7640 -1 osd.0 186 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:51 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:51 compute-0 ceph-mon[75677]: pgmap v2289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 24 20:57:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:51.852+0000 7f1a67169640 -1 osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:51 compute-0 ceph-osd[89640]: osd.1 186 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3992 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 24 20:57:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 24 20:57:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 24 20:57:52 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:52.641+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:52 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:52 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3992 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:52 compute-0 ceph-mon[75677]: osdmap e187: 3 total, 3 up, 3 in
Nov 24 20:57:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:52.812+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:52 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Nov 24 20:57:53 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:53.644+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:53 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:53 compute-0 ceph-mon[75677]: pgmap v2291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Nov 24 20:57:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:53.841+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:53 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:57:54 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:54.618+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:54 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Nov 24 20:57:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:54.831+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:54 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:55 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:55.648+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:55 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:55 compute-0 ceph-mon[75677]: pgmap v2292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Nov 24 20:57:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:55.846+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:55 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:56 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:56.655+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:56 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:56.797+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:56 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 24 20:57:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:57:57 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:57.684+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:57.756+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:57 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 3997 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:57 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:57 compute-0 ceph-mon[75677]: pgmap v2293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 24 20:57:58 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:58.713+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:58.739+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:58 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:58 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:58 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 3997 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:57:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 24 20:57:59 compute-0 ceph-osd[88624]: osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:57:59.737+0000 7f2ca3ee7640 -1 osd.0 187 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:57:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:57:59.759+0000 7f1a67169640 -1 osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:59 compute-0 ceph-osd[89640]: osd.1 187 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:57:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 24 20:57:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 24 20:57:59 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 24 20:57:59 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:57:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:57:59 compute-0 ceph-mon[75677]: pgmap v2294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 24 20:58:00 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:00.727+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:00.744+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:00 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:00 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:00 compute-0 ceph-mon[75677]: osdmap e188: 3 total, 3 up, 3 in
Nov 24 20:58:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 24 20:58:01 compute-0 anacron[156334]: Job `cron.monthly' started
Nov 24 20:58:01 compute-0 anacron[156334]: Job `cron.monthly' terminated
Nov 24 20:58:01 compute-0 anacron[156334]: Normal exit (3 jobs run)
Nov 24 20:58:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:01.719+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:01 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:01 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:01.749+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:01 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:01 compute-0 ceph-mon[75677]: pgmap v2296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 24 20:58:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:02 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:02.705+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:02.739+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:02 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:02 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1023 B/s wr, 6 op/s
Nov 24 20:58:03 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:03.674+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:03.778+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:03 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:03 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:03 compute-0 ceph-mon[75677]: pgmap v2297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1023 B/s wr, 6 op/s
Nov 24 20:58:03 compute-0 podman[309945]: 2025-11-24 20:58:03.86207599 +0000 UTC m=+0.086299227 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Nov 24 20:58:04 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:04.685+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:04.762+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:04 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:58:04 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:05 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:05.665+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:05.801+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:05 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:05 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:05 compute-0 ceph-mon[75677]: pgmap v2298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:58:06 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:06.642+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:06.756+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:06 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:58:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4006 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:06 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:07 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:07.595+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:07.708+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:07 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:07 compute-0 sshd-session[309962]: Invalid user server from 51.158.120.121 port 36330
Nov 24 20:58:07 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:07 compute-0 ceph-mon[75677]: pgmap v2299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:58:07 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 4006 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:07 compute-0 sshd-session[309962]: Received disconnect from 51.158.120.121 port 36330:11: Bye Bye [preauth]
Nov 24 20:58:07 compute-0 sshd-session[309962]: Disconnected from invalid user server 51.158.120.121 port 36330 [preauth]
Nov 24 20:58:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:08.570+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:08 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:08.688+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:08 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:58:08 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:58:09.415 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:58:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:58:09.416 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:58:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:58:09.416 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:58:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:09.606+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:09 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:09.661+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:09 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:09 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:09 compute-0 ceph-mon[75677]: pgmap v2300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 24 20:58:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:10.632+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:10 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:10.663+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:10 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 24 20:58:10 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:11.617+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:11 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:11.625+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:11 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:11 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:11 compute-0 ceph-mon[75677]: pgmap v2301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 24 20:58:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 38 slow ops, oldest one blocked for 4011 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:12.569+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:12 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:12.676+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:12 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 20:58:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:12 compute-0 ceph-mon[75677]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 17 ])
Nov 24 20:58:12 compute-0 ceph-mon[75677]: Health check update: 38 slow ops, oldest one blocked for 4011 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:13.534+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:13 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:13.671+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:13 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:13 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:13 compute-0 ceph-mon[75677]: pgmap v2302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 20:58:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:14.525+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:14 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:14.688+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:14 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 20:58:14 compute-0 podman[309964]: 2025-11-24 20:58:14.880982294 +0000 UTC m=+0.102767321 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 20:58:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:14 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:15.507+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:15 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:15.682+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:15 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:15 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:15 compute-0 ceph-mon[75677]: pgmap v2303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 20:58:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:58:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1163572653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:58:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:58:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1163572653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:58:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:16.520+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:16 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:16.721+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:16 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:16 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1163572653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:58:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1163572653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:58:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 4017 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:17.473+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:17 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:17.715+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:17 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:17 compute-0 ceph-mon[75677]: pgmap v2304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:17 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 4017 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:17 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:18.475+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:18 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:18.703+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:18 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:18 compute-0 podman[309984]: 2025-11-24 20:58:18.923196287 +0000 UTC m=+0.138897745 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 20:58:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:18 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:19.452+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:19 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:19.697+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:19 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:20 compute-0 ceph-mon[75677]: pgmap v2305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:20 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:20.461+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:20 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:20.718+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:20 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:21 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:21.444+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:21 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:21.745+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:21 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 4022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:22 compute-0 ceph-mon[75677]: pgmap v2306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:22 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:22 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 4022 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:22.459+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:22 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:22.759+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:22 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:23 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:58:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 8475 writes, 32K keys, 8475 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8475 writes, 2053 syncs, 4.13 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 205 writes, 519 keys, 205 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 205 writes, 92 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:58:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:23.476+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:23 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:23.758+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:23 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:24 compute-0 ceph-mon[75677]: pgmap v2307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:24 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:58:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:24.496+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:24 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:58:24
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'vms']
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:58:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:24.734+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:24 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:25 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:25.510+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:25 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:25.709+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:25 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:26 compute-0 ceph-mon[75677]: pgmap v2308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:26 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:26.476+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:26 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:26.735+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:26 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 4027 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:27 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.252116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017907252141, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1805, "num_deletes": 526, "total_data_size": 1806664, "memory_usage": 1840080, "flush_reason": "Manual Compaction"}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017907262483, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1774172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66906, "largest_seqno": 68710, "table_properties": {"data_size": 1766494, "index_size": 3727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 26992, "raw_average_key_size": 23, "raw_value_size": 1747029, "raw_average_value_size": 1532, "num_data_blocks": 162, "num_entries": 1140, "num_filter_entries": 1140, "num_deletions": 526, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017808, "oldest_key_time": 1764017808, "file_creation_time": 1764017907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 10431 microseconds, and 4344 cpu microseconds.
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.262540) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1774172 bytes OK
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.262568) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.264622) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.264645) EVENT_LOG_v1 {"time_micros": 1764017907264637, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.264667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1797223, prev total WAL file size 1797223, number of live WAL files 2.
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.265730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1732KB)], [143(8713KB)]
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017907265773, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10696966, "oldest_snapshot_seqno": -1}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 13297 keys, 9160999 bytes, temperature: kUnknown
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017907335160, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9160999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9089543, "index_size": 37103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33285, "raw_key_size": 366830, "raw_average_key_size": 27, "raw_value_size": 8862763, "raw_average_value_size": 666, "num_data_blocks": 1341, "num_entries": 13297, "num_filter_entries": 13297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.335532) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9160999 bytes
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.337061) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.0 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.5 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(11.2) write-amplify(5.2) OK, records in: 14364, records dropped: 1067 output_compression: NoCompression
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.337084) EVENT_LOG_v1 {"time_micros": 1764017907337072, "job": 88, "event": "compaction_finished", "compaction_time_micros": 69472, "compaction_time_cpu_micros": 46426, "output_level": 6, "num_output_files": 1, "total_output_size": 9160999, "num_input_records": 14364, "num_output_records": 13297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017907337531, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017907339573, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.265617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.339763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.339771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.339773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.339776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:58:27 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:58:27.339778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:58:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:27.475+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:27 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:27.738+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:27 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:28 compute-0 ceph-mon[75677]: pgmap v2309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:28 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 4027 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:28 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:28.437+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:28 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:28.719+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:28 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:58:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 9604 writes, 37K keys, 9604 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9604 writes, 2438 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 227 writes, 696 keys, 227 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 227 writes, 97 syncs, 2.34 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:58:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:29 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:29.423+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:29 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:29.680+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:29 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:30 compute-0 ceph-mon[75677]: pgmap v2310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:30 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:30.470+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:30 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:30.680+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:30 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:31 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:31.451+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:31 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:31.639+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:31 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:32 compute-0 ceph-mon[75677]: pgmap v2311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:32 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:32.451+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:32 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:32.615+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:32 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:33 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:33.450+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:33 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:33.616+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:33 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:34 compute-0 ceph-mon[75677]: pgmap v2312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:34 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:34.453+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:34 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 20:58:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 8176 writes, 32K keys, 8176 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8176 writes, 1929 syncs, 4.24 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 282 writes, 657 keys, 282 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                           Interval WAL: 282 writes, 124 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 20:58:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:34.651+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:34 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:34 compute-0 podman[310011]: 2025-11-24 20:58:34.833395762 +0000 UTC m=+0.065310783 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 20:58:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:58:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:58:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:35 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:35.406+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:35 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:35.666+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:35 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:36.398+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:36 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:36 compute-0 ceph-mon[75677]: pgmap v2313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:36 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:36.669+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:36 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 4032 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:37.380+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:37 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:37 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:37 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 4032 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:37.652+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:37 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:38.422+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:38 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:38 compute-0 ceph-mon[75677]: pgmap v2314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:38 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:38.655+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:38 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:39 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:39.448+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:39 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 20:58:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:39.673+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:39 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:40.419+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:40 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:40 compute-0 ceph-mon[75677]: pgmap v2315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:40 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:58:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:58:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:40.719+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:40 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:41.397+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:41 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:41 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:41.736+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:41 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14507.0:62 2.11 2:89afe44f:::rbd_data.38abbaecb8b3.000000000000000a:head [set-alloc-hint object_size 4194304 write_size 4194304,writefull 0~4194304 [fadvise_nocache] in=4194304b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 24 20:58:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 22 slow ops, oldest one blocked for 4042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:42 compute-0 sshd-session[310030]: Received disconnect from 182.93.7.194 port 34182:11: Bye Bye [preauth]
Nov 24 20:58:42 compute-0 sshd-session[310030]: Disconnected from authenticating user root 182.93.7.194 port 34182 [preauth]
Nov 24 20:58:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:42.414+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:42 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:42 compute-0 ceph-mon[75677]: pgmap v2316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:42 compute-0 ceph-mon[75677]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Nov 24 20:58:42 compute-0 ceph-mon[75677]: Health check update: 22 slow ops, oldest one blocked for 4042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:42.781+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:42 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:43.427+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:43 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:43.740+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:43 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:44.436+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:44 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:44 compute-0 ceph-mon[75677]: pgmap v2317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:44 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:44.788+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:44 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:45.439+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:45 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:45 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:45.783+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:45 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:45 compute-0 podman[310032]: 2025-11-24 20:58:45.86395236 +0000 UTC m=+0.083787941 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 20:58:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:46.435+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:46 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:46 compute-0 ceph-mon[75677]: pgmap v2318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:46.815+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:46 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
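The `_set_new_cache_sizes` line above is the monitor's periodic memory autotune: roughly 1 GiB (cache_size:1020054731) is being split between the incremental and full osdmap caches and the rocksdb KV cache. Assuming stock autotuning, that figure tracks the mon_memory_target option and can be raised if the mon is cache-thrashing; a sketch (the value is illustrative):

    # Raise the memory target the monitor splits across its caches.
    import subprocess
    subprocess.run(
        ["ceph", "config", "set", "mon", "mon_memory_target", "2147483648"],
        check=True)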
Nov 24 20:58:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:47.415+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:47 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:47 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:47 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4042 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:47.824+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:47 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
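The once-per-second pattern above: osd.0 and osd.1 re-report their oldest stuck ops through get_health_metrics (an omap read of rbd_trash_purge_schedule behind osd.0's 20 ops against the vms pool, a watch ping on data_loggenerations_metadata behind osd.1's 21 against default.rgw.log); the mon folds both into the SLOW_OPS health check (41 ops, oldest blocked ~4042 s), and the two PGs serving them are the ones showing active+clean+laggy in the pgmap. A minimal sketch for pulling the same detail from the CLI, assuming admin credentials on this host:

    # Enumerate SLOW_OPS messages from the structured health report.
    import json, subprocess

    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        for item in slow["detail"]:
            print(item["message"])

For the per-op view, `ceph daemon osd.0 dump_ops_in_flight` (run on the OSD's host) lists each blocked op with its age and last event.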
Nov 24 20:58:48 compute-0 sudo[310052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:48 compute-0 sudo[310052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:48 compute-0 sudo[310052]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:48 compute-0 sudo[310077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:58:48 compute-0 sudo[310077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:48 compute-0 sudo[310077]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:48 compute-0 sudo[310102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:48 compute-0 sudo[310102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:48 compute-0 sudo[310102]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:48.454+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:48 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:48 compute-0 sudo[310127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:58:48 compute-0 sudo[310127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:48 compute-0 ceph-mon[75677]: pgmap v2319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:48.800+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:48 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:49 compute-0 sudo[310127]: pam_unix(sudo:session): session closed for user root
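The sudo triplet at 20:58:48 is cephadm's standard remote-execution pattern: `/bin/true` as a connectivity probe, `which python3` to pick an interpreter, then the checksummed copy of the cephadm binary under /var/lib/ceph/<fsid>/ with `gather-facts`, which emits host facts as JSON on stdout. A sketch of running the same call by hand (paths are the ones visible in the log; the field names read at the end are illustrative):

    # Re-run the gather-facts call from the log and read a few fields.
    import json, subprocess

    CEPHADM = ("/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    facts = json.loads(subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True).stdout)
    print(facts.get("hostname"), facts.get("kernel"))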
Nov 24 20:58:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:58:49 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:58:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:58:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:58:49 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev cffff34b-7c36-44f6-a3b0-e2ad6098ba4a does not exist
Nov 24 20:58:49 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d904d215-48a1-45ab-8a14-6bfa7df72395 does not exist
Nov 24 20:58:49 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3fbd84c7-4d27-4f69-a4f6-2526c3e377d7 does not exist
Nov 24 20:58:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:58:49 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:58:49 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:58:49 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
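This burst of mon_command dispatches is the mgr's cephadm module staging an OSD apply for this host: a minimal ceph.conf plus the client.admin and client.bootstrap-osd keyrings to feed the upcoming ceph-volume container over stdin (the `--config-json -` below), a scan for destroyed OSD ids eligible for id reuse, and a config-key write of its osd_remove_queue. The minimal conf is easy to reproduce:

    # Fetch the same minimal conf the mgr just requested; typically a
    # [global] section containing only fsid and mon_host.
    import subprocess
    print(subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True).stdout)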
Nov 24 20:58:49 compute-0 sudo[310182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:49 compute-0 sudo[310182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:49 compute-0 sudo[310182]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:49 compute-0 sudo[310213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:58:49 compute-0 sudo[310213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:49 compute-0 sudo[310213]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:49 compute-0 podman[310206]: 2025-11-24 20:58:49.402899034 +0000 UTC m=+0.137446587 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 20:58:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:49.433+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:49 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:49 compute-0 sudo[310255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:49 compute-0 sudo[310255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:49 compute-0 sudo[310255]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:49 compute-0 sudo[310283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:58:49 compute-0 sudo[310283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:49 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:58:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:58:49 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:49.809+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:49 compute-0 podman[310347]: 2025-11-24 20:58:49.969750318 +0000 UTC m=+0.056875356 container create cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_franklin, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:58:50 compute-0 systemd[1]: Started libpod-conmon-cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d.scope.
Nov 24 20:58:50 compute-0 podman[310347]: 2025-11-24 20:58:49.950420896 +0000 UTC m=+0.037545924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:58:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:58:50 compute-0 podman[310347]: 2025-11-24 20:58:50.079816415 +0000 UTC m=+0.166941473 container init cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 20:58:50 compute-0 podman[310347]: 2025-11-24 20:58:50.090986326 +0000 UTC m=+0.178111364 container start cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:58:50 compute-0 podman[310347]: 2025-11-24 20:58:50.094513751 +0000 UTC m=+0.181638839 container attach cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 20:58:50 compute-0 hungry_franklin[310365]: 167 167
Nov 24 20:58:50 compute-0 systemd[1]: libpod-cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d.scope: Deactivated successfully.
Nov 24 20:58:50 compute-0 podman[310347]: 2025-11-24 20:58:50.099705001 +0000 UTC m=+0.186830069 container died cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_franklin, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 20:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5c01499f221c4a3db3a848d59913f0b0a1970ee2c36f9920191f5e2c49c500a-merged.mount: Deactivated successfully.
Nov 24 20:58:50 compute-0 podman[310347]: 2025-11-24 20:58:50.151447586 +0000 UTC m=+0.238572624 container remove cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_franklin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:58:50 compute-0 systemd[1]: libpod-conmon-cda64f2dfb388bcc34ceecfc9a06a3df3ae2e9c8704cba0891645354d888a27d.scope: Deactivated successfully.
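The hungry_franklin container lived about 100 ms and printed only `167 167`: that is cephadm probing the uid/gid of the ceph user baked into the image before running ceph-volume as that user. A hedged reproduction; that the probe is a stat of /var/lib/ceph is an assumption about cephadm's internals:

    # Reproduce the uid/gid probe against the same image.
    import subprocess
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # expected: 167 167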
Nov 24 20:58:50 compute-0 podman[310389]: 2025-11-24 20:58:50.385545857 +0000 UTC m=+0.058622291 container create c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 20:58:50 compute-0 systemd[1]: Started libpod-conmon-c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7.scope.
Nov 24 20:58:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:50.442+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:50 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:50 compute-0 podman[310389]: 2025-11-24 20:58:50.362456125 +0000 UTC m=+0.035532619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:58:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d979b7c9a01925007ae890247ceb15e682863c664a6bfea58b90212b6d872/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d979b7c9a01925007ae890247ceb15e682863c664a6bfea58b90212b6d872/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d979b7c9a01925007ae890247ceb15e682863c664a6bfea58b90212b6d872/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d979b7c9a01925007ae890247ceb15e682863c664a6bfea58b90212b6d872/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056d979b7c9a01925007ae890247ceb15e682863c664a6bfea58b90212b6d872/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
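The four kernel lines above are xfs noting, once per bind-mounted path in the new container's overlay, that the backing filesystem was formatted without the bigtime feature and can only represent inode timestamps up to 2038-01-19 (0x7fffffff). Informational here; a quick check of whether a given xfs has bigtime, assuming xfsprogs is installed:

    # xfs_info prints "bigtime=1" in its meta-data section when the
    # feature is enabled; point it at any path on the filesystem.
    import subprocess
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        check=True, capture_output=True, text=True).stdout
    print("bigtime=1" in info)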
Nov 24 20:58:50 compute-0 podman[310389]: 2025-11-24 20:58:50.482977295 +0000 UTC m=+0.156053729 container init c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 20:58:50 compute-0 podman[310389]: 2025-11-24 20:58:50.490257991 +0000 UTC m=+0.163334395 container start c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 20:58:50 compute-0 podman[310389]: 2025-11-24 20:58:50.493124559 +0000 UTC m=+0.166200963 container attach c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:58:50 compute-0 ceph-mon[75677]: pgmap v2320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:50 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:50.778+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:51 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:51.402+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:51 compute-0 elastic_hugle[310404]: --> passed data devices: 0 physical, 3 LVM
Nov 24 20:58:51 compute-0 elastic_hugle[310404]: --> relative data size: 1.0
Nov 24 20:58:51 compute-0 elastic_hugle[310404]: --> All data devices are unavailable
Nov 24 20:58:51 compute-0 systemd[1]: libpod-c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7.scope: Deactivated successfully.
Nov 24 20:58:51 compute-0 systemd[1]: libpod-c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7.scope: Consumed 1.133s CPU time.
Nov 24 20:58:51 compute-0 podman[310433]: 2025-11-24 20:58:51.72977991 +0000 UTC m=+0.037687747 container died c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-056d979b7c9a01925007ae890247ceb15e682863c664a6bfea58b90212b6d872-merged.mount: Deactivated successfully.
Nov 24 20:58:51 compute-0 podman[310433]: 2025-11-24 20:58:51.795426079 +0000 UTC m=+0.103333926 container remove c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 20:58:51 compute-0 systemd[1]: libpod-conmon-c50770e7f4f700b552b17ce3f7c1aff58894ba91a1ac7a8eb2d602aa2eefcde7.scope: Deactivated successfully.
Nov 24 20:58:51 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:51.823+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:51 compute-0 sudo[310283]: pam_unix(sudo:session): session closed for user root
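That closes the `lvm batch` attempt started at 20:58:49: elastic_hugle's output ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") means ceph-volume rejected all three LVs, which already carry ceph.* LVM tags from OSDs 0-2, so nothing was (re)created and cephadm falls through to reconciling what exists. A sketch for seeing per-device rejection reasons without side effects (in a cephadm deployment, run it via `cephadm ceph-volume --`):

    # Same device list in report mode: --report prints the plan or the
    # rejection reasons and touches nothing.
    import subprocess
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", "--report",
         "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False)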
Nov 24 20:58:51 compute-0 sudo[310448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:51 compute-0 sudo[310448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:51 compute-0 sudo[310448]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:52 compute-0 sudo[310473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:58:52 compute-0 sudo[310473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:52 compute-0 sudo[310473]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:52 compute-0 sudo[310498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:52 compute-0 sudo[310498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:52 compute-0 sudo[310498]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:52 compute-0 sudo[310523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 20:58:52 compute-0 sudo[310523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:52 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:52.415+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.577834324 +0000 UTC m=+0.056395691 container create 4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:58:52 compute-0 systemd[1]: Started libpod-conmon-4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40.scope.
Nov 24 20:58:52 compute-0 ceph-mon[75677]: pgmap v2321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:52 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4052 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.549435069 +0000 UTC m=+0.027996496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:58:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.696793041 +0000 UTC m=+0.175354448 container init 4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_boyd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.709360861 +0000 UTC m=+0.187922238 container start 4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.713893262 +0000 UTC m=+0.192454659 container attach 4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_boyd, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 20:58:52 compute-0 amazing_boyd[310605]: 167 167
Nov 24 20:58:52 compute-0 systemd[1]: libpod-4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40.scope: Deactivated successfully.
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.715520486 +0000 UTC m=+0.194081863 container died 4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_boyd, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a5a92fa327720fd55c9838e339e9143ea0ad913116dc381d80c9cddb0c8cc54-merged.mount: Deactivated successfully.
Nov 24 20:58:52 compute-0 podman[310589]: 2025-11-24 20:58:52.757985341 +0000 UTC m=+0.236546678 container remove 4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:58:52 compute-0 systemd[1]: libpod-conmon-4ace9b6029fd20dd091c1c823a440ca459622b2b74aa58e8485a0888f516ee40.scope: Deactivated successfully.
Nov 24 20:58:52 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:52.808+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:52 compute-0 podman[310628]: 2025-11-24 20:58:52.951623082 +0000 UTC m=+0.051464838 container create 5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:58:53 compute-0 systemd[1]: Started libpod-conmon-5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f.scope.
Nov 24 20:58:53 compute-0 podman[310628]: 2025-11-24 20:58:52.932193388 +0000 UTC m=+0.032035134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:58:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e67996b775c49477d5a902e6ec22b5ac80370add7e426c12eaa412d6974c88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e67996b775c49477d5a902e6ec22b5ac80370add7e426c12eaa412d6974c88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e67996b775c49477d5a902e6ec22b5ac80370add7e426c12eaa412d6974c88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e67996b775c49477d5a902e6ec22b5ac80370add7e426c12eaa412d6974c88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:53 compute-0 podman[310628]: 2025-11-24 20:58:53.059373067 +0000 UTC m=+0.159214893 container init 5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 20:58:53 compute-0 podman[310628]: 2025-11-24 20:58:53.072876111 +0000 UTC m=+0.172717867 container start 5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:58:53 compute-0 podman[310628]: 2025-11-24 20:58:53.0769133 +0000 UTC m=+0.176755116 container attach 5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 20:58:53 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:53.417+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:53.803+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:53 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
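The reconcile step: another one-shot container (cranky_nightingale) runs `ceph-volume lvm list --format json`, whose dump follows. It is keyed by OSD id, each entry naming the backing LV and physical device plus the ceph.* tags (cluster_fsid, osd_fsid, osdspec_affinity=default_drive_group) that made the batch above skip these LVs. A sketch reducing the same output to one line per OSD:

    # Map osd id -> (lv_path, physical device) from ceph-volume's JSON.
    import json, subprocess

    lvm = json.loads(subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))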
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]: {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:     "0": [
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:         {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "devices": [
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "/dev/loop3"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             ],
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_name": "ceph_lv0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_size": "21470642176",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "name": "ceph_lv0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "tags": {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cluster_name": "ceph",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.crush_device_class": "",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.encrypted": "0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osd_id": "0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.type": "block",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.vdo": "0"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             },
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "type": "block",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "vg_name": "ceph_vg0"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:         }
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:     ],
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:     "1": [
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:         {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "devices": [
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "/dev/loop4"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             ],
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_name": "ceph_lv1",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_size": "21470642176",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "name": "ceph_lv1",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "tags": {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cluster_name": "ceph",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.crush_device_class": "",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.encrypted": "0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osd_id": "1",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.type": "block",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.vdo": "0"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             },
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "type": "block",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "vg_name": "ceph_vg1"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:         }
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:     ],
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:     "2": [
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:         {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "devices": [
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "/dev/loop5"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             ],
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_name": "ceph_lv2",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_size": "21470642176",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "name": "ceph_lv2",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "tags": {
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.cluster_name": "ceph",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.crush_device_class": "",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.encrypted": "0",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osd_id": "2",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.type": "block",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:                 "ceph.vdo": "0"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             },
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "type": "block",
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:             "vg_name": "ceph_vg2"
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:         }
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]:     ]
Nov 24 20:58:53 compute-0 cranky_nightingale[310644]: }
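[Annotation] The JSON blob above appears to be `ceph-volume lvm list --format json` output relayed through the short-lived cephadm container (cranky_nightingale): keys are OSD ids, values describe the backing logical volumes and their ceph.* tags. A minimal sketch of extracting an osd_id → device map from it, assuming the JSON has been captured to a local file (the filename is illustrative):

```python
import json

# Assumed: the JSON emitted by the container above was saved to lvm_list.json.
with open("lvm_list.json") as fh:
    lvm_list = json.load(fh)

for osd_id, lvs in lvm_list.items():
    for lv in lvs:
        tags = lv["tags"]
        print(
            f"osd.{osd_id}: lv={lv['lv_path']} "
            f"devices={','.join(lv['devices'])} "
            f"osd_fsid={tags['ceph.osd_fsid']} "
            f"encrypted={tags['ceph.encrypted']}"
        )
# For the data above this yields, e.g.:
#   osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=ca6a1aee-... encrypted=0
```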
Nov 24 20:58:53 compute-0 systemd[1]: libpod-5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f.scope: Deactivated successfully.
Nov 24 20:58:53 compute-0 podman[310628]: 2025-11-24 20:58:53.862539811 +0000 UTC m=+0.962381577 container died 5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6e67996b775c49477d5a902e6ec22b5ac80370add7e426c12eaa412d6974c88-merged.mount: Deactivated successfully.
Nov 24 20:58:53 compute-0 podman[310628]: 2025-11-24 20:58:53.962031494 +0000 UTC m=+1.061873250 container remove 5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 20:58:53 compute-0 systemd[1]: libpod-conmon-5ac35a409985f5f8ccb42fd4fe4e35295e7c0e74b677b63548819e4f5ba7444f.scope: Deactivated successfully.
Nov 24 20:58:54 compute-0 sudo[310523]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:54 compute-0 sudo[310667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:54 compute-0 sudo[310667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:54 compute-0 sudo[310667]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:54 compute-0 sudo[310692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:58:54 compute-0 sudo[310692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:54 compute-0 sudo[310692]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:54 compute-0 sudo[310717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:54 compute-0 sudo[310717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:54 compute-0 sudo[310717]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:54 compute-0 sudo[310742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 20:58:54 compute-0 sudo[310742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
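[Annotation] The sudo audit line above shows the full chain cephadm uses on this host: the ceph-admin user escalates to root and runs the host-local copy of the cephadm binary, which launches a one-shot container from the pinned ceph image to execute `ceph-volume raw list`. A minimal sketch of replaying that exact call and capturing its JSON, assuming the same cephadm copy and image digest are still present and that ceph-admin's passwordless sudo (as the audit trail implies) is available:

```python
import json
import subprocess

# Command taken verbatim from the sudo audit line above.
cmd = [
    "sudo", "/bin/python3",
    "/var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/"
    "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
    "--image",
    "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
    "--timeout", "895",
    "ceph-volume", "--fsid", "05e060a3-406b-57f0-89d2-ec35f5b09305",
    "--", "raw", "list", "--format", "json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True)
raw_list = json.loads(out.stdout)
print(sorted(dev["osd_id"] for dev in raw_list.values()))  # e.g. [0, 1, 2]
```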
Nov 24 20:58:54 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:54.444+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:58:54 compute-0 ceph-mon[75677]: pgmap v2322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:54 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:54.761+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:54 compute-0 podman[310808]: 2025-11-24 20:58:54.851145385 +0000 UTC m=+0.067883361 container create 69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 20:58:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:54 compute-0 systemd[1]: Started libpod-conmon-69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb.scope.
Nov 24 20:58:54 compute-0 podman[310808]: 2025-11-24 20:58:54.822640897 +0000 UTC m=+0.039378923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:58:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:58:54 compute-0 podman[310808]: 2025-11-24 20:58:54.971467729 +0000 UTC m=+0.188205775 container init 69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bose, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 20:58:54 compute-0 podman[310808]: 2025-11-24 20:58:54.982671411 +0000 UTC m=+0.199409397 container start 69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 20:58:54 compute-0 podman[310808]: 2025-11-24 20:58:54.987039089 +0000 UTC m=+0.203777155 container attach 69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:58:54 compute-0 vigilant_bose[310823]: 167 167
Nov 24 20:58:54 compute-0 systemd[1]: libpod-69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb.scope: Deactivated successfully.
Nov 24 20:58:54 compute-0 podman[310808]: 2025-11-24 20:58:54.991179901 +0000 UTC m=+0.207917927 container died 69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 20:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec958bcec50d9872a33524b7f0c055800d8e7f93119d1bd4f6f6641c69afb8a1-merged.mount: Deactivated successfully.
Nov 24 20:58:55 compute-0 podman[310808]: 2025-11-24 20:58:55.044447877 +0000 UTC m=+0.261185853 container remove 69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bose, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:58:55 compute-0 systemd[1]: libpod-conmon-69993000f640ac9e8f253bc342251bb23e69e69a4710225072c1294d16d83bcb.scope: Deactivated successfully.
Nov 24 20:58:55 compute-0 podman[310849]: 2025-11-24 20:58:55.284242552 +0000 UTC m=+0.063920834 container create 54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 20:58:55 compute-0 systemd[1]: Started libpod-conmon-54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2.scope.
Nov 24 20:58:55 compute-0 podman[310849]: 2025-11-24 20:58:55.262698791 +0000 UTC m=+0.042377053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:58:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb76b4a354146b0ca607ad40e73157600a7ab82b6da4c4b854177c60d2406e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb76b4a354146b0ca607ad40e73157600a7ab82b6da4c4b854177c60d2406e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb76b4a354146b0ca607ad40e73157600a7ab82b6da4c4b854177c60d2406e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb76b4a354146b0ca607ad40e73157600a7ab82b6da4c4b854177c60d2406e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
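[Annotation] The kernel tags each of these overlay remounts with 0x7fffffff, the largest 32-bit signed timestamp these XFS filesystems (apparently formatted without the bigtime feature) can represent. The cutoff decodes to the classic Y2038 boundary:

```python
from datetime import datetime, timezone

# 0x7fffffff is the max 32-bit signed time_t reported by the kernel above.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```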
Nov 24 20:58:55 compute-0 podman[310849]: 2025-11-24 20:58:55.393666912 +0000 UTC m=+0.173345244 container init 54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 24 20:58:55 compute-0 podman[310849]: 2025-11-24 20:58:55.406376654 +0000 UTC m=+0.186054906 container start 54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 24 20:58:55 compute-0 podman[310849]: 2025-11-24 20:58:55.412097699 +0000 UTC m=+0.191776021 container attach 54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 20:58:55 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:55.442+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:55 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:55.809+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:56 compute-0 modest_bassi[310865]: {
Nov 24 20:58:56 compute-0 modest_bassi[310865]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "osd_id": 2,
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "type": "bluestore"
Nov 24 20:58:56 compute-0 modest_bassi[310865]:     },
Nov 24 20:58:56 compute-0 modest_bassi[310865]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "osd_id": 1,
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "type": "bluestore"
Nov 24 20:58:56 compute-0 modest_bassi[310865]:     },
Nov 24 20:58:56 compute-0 modest_bassi[310865]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "osd_id": 0,
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 20:58:56 compute-0 modest_bassi[310865]:         "type": "bluestore"
Nov 24 20:58:56 compute-0 modest_bassi[310865]:     }
Nov 24 20:58:56 compute-0 modest_bassi[310865]: }
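[Annotation] This second JSON blob is the `raw list` output requested by the sudo command above; unlike the per-OSD `lvm list`, it is keyed by osd_uuid and reports the device-mapper paths. A minimal sketch cross-checking the two listings by OSD fsid, assuming both JSON blobs from this log were saved to the (illustrative) filenames below:

```python
import json

with open("lvm_list.json") as fh:   # keyed by osd_id (earlier blob)
    lvm_list = json.load(fh)
with open("raw_list.json") as fh:   # keyed by osd_uuid (this blob)
    raw_list = json.load(fh)

# Every LVM-managed OSD should appear in the raw list under its osd_fsid.
for osd_id, lvs in lvm_list.items():
    for lv in lvs:
        fsid = lv["tags"]["ceph.osd_fsid"]
        raw = raw_list.get(fsid)
        assert raw is not None, f"osd.{osd_id} ({fsid}) missing from raw list"
        assert raw["osd_id"] == int(osd_id)
        print(f"osd.{osd_id}: {lv['lv_path']} -> {raw['device']} ({raw['type']})")
# For the data above: osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0 (bluestore), etc.
```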
Nov 24 20:58:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:56.484+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:56 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:56 compute-0 systemd[1]: libpod-54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2.scope: Deactivated successfully.
Nov 24 20:58:56 compute-0 podman[310849]: 2025-11-24 20:58:56.501287025 +0000 UTC m=+1.280965277 container died 54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 20:58:56 compute-0 systemd[1]: libpod-54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2.scope: Consumed 1.102s CPU time.
Nov 24 20:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bb76b4a354146b0ca607ad40e73157600a7ab82b6da4c4b854177c60d2406e-merged.mount: Deactivated successfully.
Nov 24 20:58:56 compute-0 podman[310849]: 2025-11-24 20:58:56.584295123 +0000 UTC m=+1.363973405 container remove 54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bassi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 20:58:56 compute-0 systemd[1]: libpod-conmon-54c30068dacee5305fcb0252561b44170d4b39bf3f8be593786b33e70a1444b2.scope: Deactivated successfully.
Nov 24 20:58:56 compute-0 sudo[310742]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 20:58:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:58:56 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 20:58:56 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:58:56 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1ce19e44-8c23-4ee2-98ce-c1c30704202e does not exist
Nov 24 20:58:56 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3498656a-b004-4f5d-9c71-7fcf994af3bf does not exist
Nov 24 20:58:56 compute-0 ceph-mon[75677]: pgmap v2323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:56 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:58:56 compute-0 sudo[310910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:58:56 compute-0 sudo[310910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:56 compute-0 sudo[310910]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:56 compute-0 sudo[310935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 20:58:56 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:56.845+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:56 compute-0 sudo[310935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:58:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:56 compute-0 sudo[310935]: pam_unix(sudo:session): session closed for user root
Nov 24 20:58:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:58:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:57.530+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:57 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:57 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:58:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
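[Annotation] The SLOW_OPS health update gives the age of the oldest blocked op, so its start time can be recovered by subtracting that age from the message timestamp: 4057 s before 20:58:57 is 19:51:20, well before this excerpt, which is why the same two client ops keep being re-reported every second. The arithmetic, with values taken from the health line above:

```python
from datetime import datetime, timedelta, timezone

# Health line at 20:58:57 UTC reports the oldest op blocked for 4057 s.
logged_at = datetime(2025, 11, 24, 20, 58, 57, tzinfo=timezone.utc)
oldest_started = logged_at - timedelta(seconds=4057)
print(oldest_started.time())  # 19:51:20
```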
Nov 24 20:58:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:57.841+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:57 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:58.579+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:58 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:58 compute-0 ceph-mon[75677]: pgmap v2324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:58 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4057 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:58:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:58.797+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:58 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:58:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:58:59.612+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:59 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:58:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:58:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:58:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:58:59.783+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:59 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:58:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:00.569+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:00 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:00 compute-0 ceph-mon[75677]: pgmap v2325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:00.767+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:00 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:01.573+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:01 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:01.783+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:01 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:01 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:01 compute-0 ceph-mon[75677]: pgmap v2326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:02.565+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:02 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:02.803+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:02 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:03.593+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:03 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:03.804+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:03 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:03 compute-0 ceph-mon[75677]: pgmap v2327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:04.551+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:04 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:04.779+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:04 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:05.586+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:05 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:05.750+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:05 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:05 compute-0 podman[310960]: 2025-11-24 20:59:05.874115936 +0000 UTC m=+0.083473632 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
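[Annotation] The health_status event above embeds the container's managed configuration as a Python-style dict literal in its config_data field, which `ast.literal_eval` can parse once extracted from the log line. A minimal sketch, with the dict trimmed to the healthcheck-relevant keys shown above (the trimming itself is illustrative):

```python
import ast

# Trimmed from the config_data=... field of the health_status event above.
config_data = (
    "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent',"
    " 'test': '/openstack/healthcheck'}, 'restart': 'always', 'privileged': True}"
)
cfg = ast.literal_eval(config_data)
print(cfg["healthcheck"]["test"])  # /openstack/healthcheck
print(cfg["restart"])              # always
```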
Nov 24 20:59:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:05 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:05 compute-0 ceph-mon[75677]: pgmap v2328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:06.610+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:06 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:06.720+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:06 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:06 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4067 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:06 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:07.634+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:07 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:07.769+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:07 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:07 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:07 compute-0 ceph-mon[75677]: pgmap v2329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:07 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4067 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:08.590+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:08 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:08.732+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:08 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:08 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:09 compute-0 sshd-session[310980]: Invalid user cashier from 51.158.120.121 port 39002
Nov 24 20:59:09 compute-0 sshd-session[310980]: Received disconnect from 51.158.120.121 port 39002:11: Bye Bye [preauth]
Nov 24 20:59:09 compute-0 sshd-session[310980]: Disconnected from invalid user cashier 51.158.120.121 port 39002 [preauth]
Nov 24 20:59:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:59:09.416 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 20:59:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:59:09.417 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 20:59:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 20:59:09.417 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 20:59:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:09.606+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:09 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:09.684+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:09 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:09 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:09 compute-0 ceph-mon[75677]: pgmap v2330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:10.597+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:10 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:10.729+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:10 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:10 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:11.615+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:11 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:11.712+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:11 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:11 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:11 compute-0 ceph-mon[75677]: pgmap v2331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:11 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:12.573+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:12 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:12.700+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:12 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:12 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4072 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:12 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:13.578+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:13 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:13.737+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:13 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:14 compute-0 ceph-mon[75677]: pgmap v2332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:14 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:14.545+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:14 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:14.769+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:14 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:15 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:15.500+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:15 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:15.722+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:15 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:16 compute-0 ceph-mon[75677]: pgmap v2333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:16 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 20:59:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1820348631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:59:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 20:59:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1820348631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:59:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:16.489+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:16 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:16.734+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:16 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:16 compute-0 podman[310982]: 2025-11-24 20:59:16.849057603 +0000 UTC m=+0.078062816 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 20:59:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1820348631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 20:59:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1820348631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 20:59:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:17 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.143908) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017957143957, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1030, "num_deletes": 390, "total_data_size": 904317, "memory_usage": 923816, "flush_reason": "Manual Compaction"}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017957159689, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 888722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68711, "largest_seqno": 69740, "table_properties": {"data_size": 884008, "index_size": 1853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15569, "raw_average_key_size": 22, "raw_value_size": 872530, "raw_average_value_size": 1244, "num_data_blocks": 80, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 390, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017908, "oldest_key_time": 1764017908, "file_creation_time": 1764017957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 15834 microseconds, and 5155 cpu microseconds.
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.159741) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 888722 bytes OK
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.159764) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.161771) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.161797) EVENT_LOG_v1 {"time_micros": 1764017957161789, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.161818) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 898589, prev total WAL file size 898589, number of live WAL files 2.
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.162479) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323734' seq:72057594037927935, type:22 .. '6C6F676D0033353237' seq:0, type:0; will stop at (end)
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(867KB)], [146(8946KB)]
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017957162554, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 10049721, "oldest_snapshot_seqno": -1}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 13206 keys, 9811971 bytes, temperature: kUnknown
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017957241663, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 9811971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9740134, "index_size": 37740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33029, "raw_key_size": 365507, "raw_average_key_size": 27, "raw_value_size": 9513826, "raw_average_value_size": 720, "num_data_blocks": 1366, "num_entries": 13206, "num_filter_entries": 13206, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764017957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.242396) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 9811971 bytes
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.244135) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.0 rd, 124.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(22.3) write-amplify(11.0) OK, records in: 13998, records dropped: 792 output_compression: NoCompression
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.244169) EVENT_LOG_v1 {"time_micros": 1764017957244153, "job": 90, "event": "compaction_finished", "compaction_time_micros": 79147, "compaction_time_cpu_micros": 49262, "output_level": 6, "num_output_files": 1, "total_output_size": 9811971, "num_input_records": 13998, "num_output_records": 13206, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017957244881, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764017957248179, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.162399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.248285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.248293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.248297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.248301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:59:17 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-20:59:17.248305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 20:59:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:17.454+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:17 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:17.713+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:17 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:18 compute-0 ceph-mon[75677]: pgmap v2334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:18 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4077 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:18 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:18.449+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:18 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:18.694+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:18 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:19 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:19.496+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:19 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:19.705+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:19 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:19 compute-0 podman[311003]: 2025-11-24 20:59:19.923919284 +0000 UTC m=+0.154783004 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 20:59:20 compute-0 ceph-mon[75677]: pgmap v2335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:20 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:20.467+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:20 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:20.711+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:20 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:21 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:21.494+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:21 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:21.722+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:21 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:22 compute-0 ceph-mon[75677]: pgmap v2336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:22 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4082 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:22.460+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:22 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:22.686+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:22 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:23 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4082 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:23 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:23.477+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:23 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:23.701+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:23 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:24 compute-0 ceph-mon[75677]: pgmap v2337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:24 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:59:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:24.474+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:24 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_20:59:24
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 20:59:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:24.681+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:24 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:59:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:25 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:25.493+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:25 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:25.708+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:25 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:26 compute-0 ceph-mon[75677]: pgmap v2338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 20:59:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:26 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:26.493+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:26 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:26.671+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:26 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:27 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4087 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:27.486+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:27 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:27.656+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:27 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:28 compute-0 ceph-mon[75677]: pgmap v2339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:28 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4087 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:28 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:28.534+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:28 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:28.696+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:28 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:29 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:29.558+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:29 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:29.663+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:29 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:30 compute-0 ceph-mon[75677]: pgmap v2340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:30 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:30.517+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:30 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:30 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:30.662+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:31 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:31.501+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:31 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:31 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:31.627+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4092 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:32 compute-0 ceph-mon[75677]: pgmap v2341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:32 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:32 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4092 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:32.469+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:32 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:32 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:32.603+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:33 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:33.517+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:33 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:33 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:33.601+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:34 compute-0 ceph-mon[75677]: pgmap v2342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:34 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:34.499+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:34 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:34 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:34.588+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:35 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 20:59:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 20:59:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:35.459+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:35 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:35 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:35.577+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:36 compute-0 ceph-mon[75677]: pgmap v2343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 20:59:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:36 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:36.475+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:36 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:36.558+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:36 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 20:59:36 compute-0 podman[311029]: 2025-11-24 20:59:36.897034632 +0000 UTC m=+0.115364231 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 20:59:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4097 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:37 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:37.475+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:37 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:37.543+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:37 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:38 compute-0 ceph-mon[75677]: pgmap v2344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 20:59:38 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4097 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:38 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:38.492+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:38 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:38.582+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:38 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:39 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:39.459+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:39 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:39.547+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:39 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:40 compute-0 ceph-mon[75677]: pgmap v2345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:40 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:40.463+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:40 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:40.565+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:40 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 20:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 20:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 20:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 20:59:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 20:59:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:41 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:41.467+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:41 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:41.563+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:41 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:42 compute-0 ceph-mon[75677]: pgmap v2346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:42 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:42.423+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:42 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:42.603+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:42 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:43 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:43.422+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:43 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:43.619+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:43 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:44 compute-0 ceph-mon[75677]: pgmap v2347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:44 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:44.411+0000 7f1a67169640 -1 osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:44 compute-0 ceph-osd[89640]: osd.1 188 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:44.605+0000 7f2ca3ee7640 -1 osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:44 compute-0 ceph-osd[88624]: osd.0 188 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 24 20:59:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:45 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:45 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 24 20:59:45 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 24 20:59:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:45.404+0000 7f1a67169640 -1 osd.1 189 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:45 compute-0 ceph-osd[89640]: osd.1 189 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:45.629+0000 7f2ca3ee7640 -1 osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:45 compute-0 ceph-osd[88624]: osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:46 compute-0 ceph-mon[75677]: pgmap v2348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 20:59:46 compute-0 ceph-mon[75677]: osdmap e189: 3 total, 3 up, 3 in
Nov 24 20:59:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:46 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:46.422+0000 7f1a67169640 -1 osd.1 189 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:46 compute-0 ceph-osd[89640]: osd.1 189 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14257.0:648 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 24 20:59:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:46.613+0000 7f2ca3ee7640 -1 osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:46 compute-0 ceph-osd[88624]: osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 140 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Nov 24 20:59:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4102 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 24 20:59:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 24 20:59:47 compute-0 ceph-mon[75677]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 24 20:59:47 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:47 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4102 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 24 20:59:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:47.413+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:47 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:47.591+0000 7f2ca3ee7640 -1 osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:47 compute-0 ceph-osd[88624]: osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:47 compute-0 podman[311051]: 2025-11-24 20:59:47.864781115 +0000 UTC m=+0.090429129 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
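
Note: this podman event records a periodic healthcheck run: podman executed the configured test ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/multipathd) inside the multipathd container and it exited 0, hence health_status=healthy with a failing streak of 0. A sketch of checking the same thing by hand (hypothetical; on older podman the inspect field path is .State.Healthcheck.Status instead):

    podman healthcheck run multipathd && echo healthy
    podman inspect --format '{{.State.Health.Status}}' multipathd
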
Nov 24 20:59:48 compute-0 ceph-mon[75677]: pgmap v2350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 140 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Nov 24 20:59:48 compute-0 ceph-mon[75677]: osdmap e190: 3 total, 3 up, 3 in
Nov 24 20:59:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:48 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:48.401+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:48 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:48.582+0000 7f2ca3ee7640 -1 osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:48 compute-0 ceph-osd[88624]: osd.0 189 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 140 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Nov 24 20:59:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:49 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:49.401+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:49 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:49.547+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:49 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:50 compute-0 ceph-mon[75677]: pgmap v2352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 140 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Nov 24 20:59:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:50 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:50.398+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:50 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:50.545+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:50 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:50 compute-0 podman[311071]: 2025-11-24 20:59:50.876411212 +0000 UTC m=+0.109399960 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 20:59:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 24 20:59:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:51 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:51.401+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:51 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:51.592+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:51 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4112 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
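
Note: _set_new_cache_sizes is the OSDMonitor's periodic cache retuning, splitting the monitor's memory budget (derived from mon_memory_target, to the best of my reading) between incremental osdmaps, full osdmaps, and the rocksdb KV cache; cache_size:1020054731 is just under 1 GiB. A hypothetical way to read the budget on a live cluster:

    cephadm shell -- ceph config get mon mon_memory_target
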
Nov 24 20:59:52 compute-0 ceph-mon[75677]: pgmap v2353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 24 20:59:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:52 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:52 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4112 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:52.436+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:52 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:52.557+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:52 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 24 20:59:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:53 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:53.411+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:53 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:53.535+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:53 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:54.399+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:54 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:54 compute-0 ceph-mon[75677]: pgmap v2354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 24 20:59:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:54 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 20:59:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:54.524+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:54 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s
Nov 24 20:59:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:55 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:55.429+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:55 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:55.507+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:55 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:56.386+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:56 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:56 compute-0 ceph-mon[75677]: pgmap v2355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s
Nov 24 20:59:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:56 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:56.482+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:56 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 24 op/s
Nov 24 20:59:56 compute-0 sudo[311097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:59:56 compute-0 sudo[311097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:56 compute-0 sudo[311097]: pam_unix(sudo:session): session closed for user root
Nov 24 20:59:56 compute-0 sudo[311122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:59:56 compute-0 sudo[311122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:57 compute-0 sudo[311122]: pam_unix(sudo:session): session closed for user root
Nov 24 20:59:57 compute-0 sudo[311147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:59:57 compute-0 sudo[311147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:57 compute-0 sudo[311147]: pam_unix(sudo:session): session closed for user root
Nov 24 20:59:57 compute-0 sudo[311172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 20:59:57 compute-0 sudo[311172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 20:59:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:57.414+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:57 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4117 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:57 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:57.505+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:57 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:57 compute-0 sudo[311172]: pam_unix(sudo:session): session closed for user root
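
Note: the sudo records above show the mgr's cephadm module driving this host over ssh as ceph-admin: a pair of /bin/true probes, a "which python3", then the cached cephadm binary under /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/ running gather-facts to collect host inventory as JSON. The same call can be made by hand (a sketch; the json.tool pipe is added here only for readability):

    sudo /bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d gather-facts | python3 -m json.tool
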
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:59:57 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 96e0a8b5-01fc-49c4-911e-3dc5e3d54f39 does not exist
Nov 24 20:59:57 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f299ab79-7755-490f-9fc7-b64844cc6c2a does not exist
Nov 24 20:59:57 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bc2076e7-bdf5-45e7-a733-8b8c87b5971c does not exist
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:59:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 20:59:57 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
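
Note: this audit burst is the mgr (mgr.compute-0.ofslrn, cephadm module) preparing material for managed hosts: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and a check for destroyed OSDs in the tree. Hypothetical CLI equivalents, should one need to reproduce them:

    cephadm shell -- ceph config generate-minimal-conf
    cephadm shell -- ceph auth get client.bootstrap-osd
    cephadm shell -- ceph osd tree destroyed --format json
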
Nov 24 20:59:57 compute-0 sudo[311228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:59:57 compute-0 sudo[311228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:57 compute-0 sudo[311228]: pam_unix(sudo:session): session closed for user root
Nov 24 20:59:57 compute-0 sudo[311253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 20:59:57 compute-0 sudo[311253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:57 compute-0 sudo[311253]: pam_unix(sudo:session): session closed for user root
Nov 24 20:59:58 compute-0 sudo[311278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 20:59:58 compute-0 sudo[311278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:58 compute-0 sudo[311278]: pam_unix(sudo:session): session closed for user root
Nov 24 20:59:58 compute-0 sudo[311303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 20:59:58 compute-0 sudo[311303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 20:59:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:58.396+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:58 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:58 compute-0 ceph-mon[75677]: pgmap v2356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 24 op/s
Nov 24 20:59:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:58 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4117 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 20:59:58 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:59:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 20:59:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 20:59:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 20:59:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 20:59:58 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 20:59:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:58.511+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:58 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.58625685 +0000 UTC m=+0.060634986 container create e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:59:58 compute-0 systemd[1]: Started libpod-conmon-e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9.scope.
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.557120404 +0000 UTC m=+0.031498590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:59:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.70197583 +0000 UTC m=+0.176354006 container init e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.710701345 +0000 UTC m=+0.185079801 container start e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.715202747 +0000 UTC m=+0.189580943 container attach e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:59:58 compute-0 keen_williamson[311387]: 167 167
Nov 24 20:59:58 compute-0 systemd[1]: libpod-e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9.scope: Deactivated successfully.
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.720460307 +0000 UTC m=+0.194838443 container died e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 20:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1f98dc407864a2b00db2e7a782b4c7b92a67a0c2ba205aae69dc7a13a476bb5-merged.mount: Deactivated successfully.
Nov 24 20:59:58 compute-0 podman[311370]: 2025-11-24 20:59:58.773150938 +0000 UTC m=+0.247529034 container remove e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 20:59:58 compute-0 systemd[1]: libpod-conmon-e4e4e3c27fbe082c6e21280dc593ba0dffc6a12947411ef71b8c8e44abaacfb9.scope: Deactivated successfully.
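
Note: keen_williamson is a one-shot helper container cephadm spun up from the ceph image (create, start, attach, died, remove, all within about 0.2 s); its only output, "167 167", is consistent with cephadm probing the UID/GID that owns /var/lib/ceph inside the image (167:167 is the ceph user in these builds). A hypothetical reproduction of such a probe, not the literal command cephadm ran:

    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 stat -c '%u %g' /var/lib/ceph
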
Nov 24 20:59:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.5 MiB/s wr, 20 op/s
Nov 24 20:59:59 compute-0 podman[311411]: 2025-11-24 20:59:59.011077313 +0000 UTC m=+0.067696486 container create e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 20:59:59 compute-0 systemd[1]: Started libpod-conmon-e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d.scope.
Nov 24 20:59:59 compute-0 podman[311411]: 2025-11-24 20:59:58.990660803 +0000 UTC m=+0.047279976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 20:59:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 20:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f94862a00bc2422e21cb00bd757fea58976dcaca96c1b3d60bbdca7f355dcda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 20:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f94862a00bc2422e21cb00bd757fea58976dcaca96c1b3d60bbdca7f355dcda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 20:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f94862a00bc2422e21cb00bd757fea58976dcaca96c1b3d60bbdca7f355dcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 20:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f94862a00bc2422e21cb00bd757fea58976dcaca96c1b3d60bbdca7f355dcda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 20:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f94862a00bc2422e21cb00bd757fea58976dcaca96c1b3d60bbdca7f355dcda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
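
Note: the "supports timestamps until 2038" kernel lines are informational, printed whenever a bind mount backed by an xfs filesystem created without the bigtime feature is remounted for a container; nothing here is failing. A hypothetical check of the feature on the backing filesystem:

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
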
Nov 24 20:59:59 compute-0 podman[311411]: 2025-11-24 20:59:59.139556757 +0000 UTC m=+0.196175990 container init e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 20:59:59 compute-0 podman[311411]: 2025-11-24 20:59:59.157040908 +0000 UTC m=+0.213660101 container start e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 20:59:59 compute-0 podman[311411]: 2025-11-24 20:59:59.162000972 +0000 UTC m=+0.218620135 container attach e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 20:59:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T20:59:59.364+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:59 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 20:59:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 20:59:59 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 20:59:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T20:59:59.547+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:59 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 20:59:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:00 compute-0 determined_ellis[311428]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:00:00 compute-0 determined_ellis[311428]: --> relative data size: 1.0
Nov 24 21:00:00 compute-0 determined_ellis[311428]: --> All data devices are unavailable
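
Note: ceph-volume's report above ("passed data devices: 0 physical, 3 LVM", then "All data devices are unavailable") means the batch filter rejected all three logical volumes from the drive-group spec, the usual outcome when they already carry BlueStore OSDs, so there is nothing new to deploy; cephadm immediately follows up with an "lvm list" run (logged below) to reconcile the existing OSDs. Hypothetical manual equivalents:

    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
    cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --report
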
Nov 24 21:00:00 compute-0 systemd[1]: libpod-e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d.scope: Deactivated successfully.
Nov 24 21:00:00 compute-0 systemd[1]: libpod-e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d.scope: Consumed 1.037s CPU time.
Nov 24 21:00:00 compute-0 podman[311411]: 2025-11-24 21:00:00.236228465 +0000 UTC m=+1.292847618 container died e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 21:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f94862a00bc2422e21cb00bd757fea58976dcaca96c1b3d60bbdca7f355dcda-merged.mount: Deactivated successfully.
Nov 24 21:00:00 compute-0 podman[311411]: 2025-11-24 21:00:00.304972938 +0000 UTC m=+1.361592101 container remove e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:00:00 compute-0 systemd[1]: libpod-conmon-e464af2d50e5382d3f41a709d1de0c5b6d0530141bb8c874580f8d5e4c46d91d.scope: Deactivated successfully.
Nov 24 21:00:00 compute-0 sudo[311303]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:00.382+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:00 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:00 compute-0 sudo[311471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:00:00 compute-0 sudo[311471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:00 compute-0 sudo[311471]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:00 compute-0 ceph-mon[75677]: pgmap v2357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.5 MiB/s wr, 20 op/s
Nov 24 21:00:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:00 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:00 compute-0 sudo[311496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:00:00 compute-0 sudo[311496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:00 compute-0 sudo[311496]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:00.544+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:00 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:00 compute-0 sudo[311521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:00:00 compute-0 sudo[311521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:00 compute-0 sudo[311521]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:00 compute-0 sudo[311546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:00:00 compute-0 sudo[311546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.4 MiB/s wr, 20 op/s
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.141963684 +0000 UTC m=+0.069445003 container create 62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_gould, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:00:01 compute-0 systemd[1]: Started libpod-conmon-62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c.scope.
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.111520303 +0000 UTC m=+0.039001672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:00:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.246293607 +0000 UTC m=+0.173774906 container init 62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.253265756 +0000 UTC m=+0.180747065 container start 62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.257782787 +0000 UTC m=+0.185264086 container attach 62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_gould, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:00:01 compute-0 happy_gould[311628]: 167 167
Nov 24 21:00:01 compute-0 systemd[1]: libpod-62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c.scope: Deactivated successfully.
Nov 24 21:00:01 compute-0 conmon[311628]: conmon 62b98e0f32c4cafbc179 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c.scope/container/memory.events
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.26050998 +0000 UTC m=+0.187991269 container died 62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 21:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f09086963691995409861181349bf5e8241dc13284a38b502b2e996852e6f022-merged.mount: Deactivated successfully.
Nov 24 21:00:01 compute-0 podman[311612]: 2025-11-24 21:00:01.29907887 +0000 UTC m=+0.226560149 container remove 62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 24 21:00:01 compute-0 systemd[1]: libpod-conmon-62b98e0f32c4cafbc179a7dbd7902f56dacb38a4406f35815035ae59f689c79c.scope: Deactivated successfully.
Nov 24 21:00:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:01.411+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:01 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:01 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:01 compute-0 podman[311651]: 2025-11-24 21:00:01.512952806 +0000 UTC m=+0.063602845 container create e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lewin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:00:01 compute-0 systemd[1]: Started libpod-conmon-e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d.scope.
Nov 24 21:00:01 compute-0 podman[311651]: 2025-11-24 21:00:01.486013211 +0000 UTC m=+0.036663250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:00:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:01.594+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:01 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f21a9161daa3748a76ab61644b234e9b4daabf055572f368c39054ef6ee1740/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f21a9161daa3748a76ab61644b234e9b4daabf055572f368c39054ef6ee1740/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f21a9161daa3748a76ab61644b234e9b4daabf055572f368c39054ef6ee1740/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f21a9161daa3748a76ab61644b234e9b4daabf055572f368c39054ef6ee1740/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:01 compute-0 podman[311651]: 2025-11-24 21:00:01.628508503 +0000 UTC m=+0.179158522 container init e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 21:00:01 compute-0 podman[311651]: 2025-11-24 21:00:01.640804014 +0000 UTC m=+0.191454053 container start e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:00:01 compute-0 podman[311651]: 2025-11-24 21:00:01.651234485 +0000 UTC m=+0.201884504 container attach e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 24 21:00:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:02.389+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:02 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:02 compute-0 elegant_lewin[311667]: {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:     "0": [
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:         {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "devices": [
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "/dev/loop3"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             ],
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_name": "ceph_lv0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_size": "21470642176",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "name": "ceph_lv0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "tags": {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cluster_name": "ceph",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.crush_device_class": "",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.encrypted": "0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osd_id": "0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.type": "block",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.vdo": "0"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             },
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "type": "block",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "vg_name": "ceph_vg0"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:         }
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:     ],
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:     "1": [
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:         {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "devices": [
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "/dev/loop4"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             ],
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_name": "ceph_lv1",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_size": "21470642176",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "name": "ceph_lv1",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "tags": {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cluster_name": "ceph",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.crush_device_class": "",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.encrypted": "0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osd_id": "1",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.type": "block",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.vdo": "0"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             },
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "type": "block",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "vg_name": "ceph_vg1"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:         }
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:     ],
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:     "2": [
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:         {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "devices": [
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "/dev/loop5"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             ],
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_name": "ceph_lv2",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_size": "21470642176",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "name": "ceph_lv2",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "tags": {
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.cluster_name": "ceph",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.crush_device_class": "",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.encrypted": "0",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osd_id": "2",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.type": "block",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:                 "ceph.vdo": "0"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             },
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "type": "block",
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:             "vg_name": "ceph_vg2"
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:         }
Nov 24 21:00:02 compute-0 elegant_lewin[311667]:     ]
Nov 24 21:00:02 compute-0 elegant_lewin[311667]: }
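The JSON block above is the stdout of the cephadm-driven "ceph-volume --fsid 05e060a3-... -- lvm list --format json" call started under sudo[311546]: top-level keys are OSD ids, each mapping to a list of logical volumes with their backing devices and ceph.* LVM tags. A minimal sketch in Python (an annotation, not part of the log; the filename lvm_list.json is a hypothetical stand-in for wherever that output is captured) of reducing it to a per-OSD device summary:

    #!/usr/bin/env python3
    # Sketch: summarize `ceph-volume lvm list --format json` output.
    # Assumes the JSON printed above was saved to "lvm_list.json".
    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)  # top-level keys are OSD ids: "0", "1", "2"

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")
    # For the listing above this would print, e.g.:
    # osd.0: lv_path=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e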
Nov 24 21:00:02 compute-0 systemd[1]: libpod-e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d.scope: Deactivated successfully.
Nov 24 21:00:02 compute-0 ceph-mon[75677]: pgmap v2358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.4 MiB/s wr, 20 op/s
Nov 24 21:00:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:02 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:02 compute-0 podman[311676]: 2025-11-24 21:00:02.562805212 +0000 UTC m=+0.032666332 container died e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lewin, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 21:00:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:02.563+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:02 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f21a9161daa3748a76ab61644b234e9b4daabf055572f368c39054ef6ee1740-merged.mount: Deactivated successfully.
Nov 24 21:00:02 compute-0 podman[311676]: 2025-11-24 21:00:02.625999886 +0000 UTC m=+0.095860876 container remove e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:00:02 compute-0 systemd[1]: libpod-conmon-e6c455a2d7ddc8382128b5e0ba65209b064b0d1680ab079fd49691228ff1254d.scope: Deactivated successfully.
Nov 24 21:00:02 compute-0 sudo[311546]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:02 compute-0 sudo[311691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:00:02 compute-0 sudo[311691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:02 compute-0 sudo[311691]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:02 compute-0 sudo[311716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:00:02 compute-0 sudo[311716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:02 compute-0 sudo[311716]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:02 compute-0 sudo[311741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:00:02 compute-0 sudo[311741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:02 compute-0 sudo[311741]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:02 compute-0 sudo[311766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:00:02 compute-0 sudo[311766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:03.375+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:03 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.388040861 +0000 UTC m=+0.053643288 container create b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 24 21:00:03 compute-0 systemd[1]: Started libpod-conmon-b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e.scope.
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.366941232 +0000 UTC m=+0.032543679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:00:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.48406182 +0000 UTC m=+0.149664287 container init b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lederberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.495156459 +0000 UTC m=+0.160758886 container start b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.498472868 +0000 UTC m=+0.164075325 container attach b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 21:00:03 compute-0 quizzical_lederberg[311849]: 167 167
Nov 24 21:00:03 compute-0 systemd[1]: libpod-b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e.scope: Deactivated successfully.
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.502125897 +0000 UTC m=+0.167728394 container died b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 21:00:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:03 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-38e3e1a7ae9f9589e355f2327fd9277d816d4e092092571ff6f5be5113d6cb7d-merged.mount: Deactivated successfully.
Nov 24 21:00:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:03.538+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:03 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:03 compute-0 podman[311832]: 2025-11-24 21:00:03.553747389 +0000 UTC m=+0.219349816 container remove b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 24 21:00:03 compute-0 systemd[1]: libpod-conmon-b7a4a56cfadf8b47732278940cbf9811fc88b863bc3b45f43b04fd224094756e.scope: Deactivated successfully.
Nov 24 21:00:03 compute-0 podman[311872]: 2025-11-24 21:00:03.77226303 +0000 UTC m=+0.068250641 container create 922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:00:03 compute-0 systemd[1]: Started libpod-conmon-922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79.scope.
Nov 24 21:00:03 compute-0 podman[311872]: 2025-11-24 21:00:03.747085692 +0000 UTC m=+0.043073283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:00:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c09d63bda801509dfeec81aa8175c3c443c8327fda46bf63012f464fa51f54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c09d63bda801509dfeec81aa8175c3c443c8327fda46bf63012f464fa51f54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c09d63bda801509dfeec81aa8175c3c443c8327fda46bf63012f464fa51f54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c09d63bda801509dfeec81aa8175c3c443c8327fda46bf63012f464fa51f54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:00:03 compute-0 podman[311872]: 2025-11-24 21:00:03.873471519 +0000 UTC m=+0.169459110 container init 922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 21:00:03 compute-0 podman[311872]: 2025-11-24 21:00:03.885690878 +0000 UTC m=+0.181678449 container start 922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 21:00:03 compute-0 podman[311872]: 2025-11-24 21:00:03.889256215 +0000 UTC m=+0.185243786 container attach 922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:00:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:04.336+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:04 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:04.516+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:04 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:04 compute-0 ceph-mon[75677]: pgmap v2359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:04 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:04 compute-0 practical_antonelli[311889]: {
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "osd_id": 2,
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "type": "bluestore"
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:     },
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "osd_id": 1,
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "type": "bluestore"
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:     },
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "osd_id": 0,
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:         "type": "bluestore"
Nov 24 21:00:04 compute-0 practical_antonelli[311889]:     }
Nov 24 21:00:04 compute-0 practical_antonelli[311889]: }
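This second JSON block is the stdout of the follow-up "ceph-volume ... -- raw list --format json" call started under sudo[311766]; here the top-level keys are osd_uuid values and each entry names the bluestore device and osd_id. A minimal sketch (annotation, not part of the log; both filenames are hypothetical) of cross-checking it against the earlier lvm list output, where the same UUIDs appear as ceph.osd_fsid tags:

    #!/usr/bin/env python3
    # Sketch: verify `raw list` and `lvm list` agree on osd_uuid -> osd_id.
    # Assumes the two JSON blocks above were saved to these files.
    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)
    with open("raw_list.json") as fh:
        raw = json.load(fh)  # top-level keys are osd_uuid strings

    # osd_fsid -> osd_id as recorded in the LVM tags
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
                 for osd_id, lvs in lvm.items() for lv in lvs}

    for osd_uuid, dev in raw.items():
        status = "ok" if lvm_fsids.get(osd_uuid) == dev["osd_id"] else "MISMATCH"
        print(f"{status}: osd.{dev['osd_id']} {dev['device']} ({dev['type']})")
    # For the outputs above, all three OSDs (0, 1, 2) should report "ok".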
Nov 24 21:00:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:04 compute-0 podman[311872]: 2025-11-24 21:00:04.920819007 +0000 UTC m=+1.216806618 container died 922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 21:00:04 compute-0 systemd[1]: libpod-922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79.scope: Deactivated successfully.
Nov 24 21:00:04 compute-0 systemd[1]: libpod-922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79.scope: Consumed 1.044s CPU time.
Nov 24 21:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1c09d63bda801509dfeec81aa8175c3c443c8327fda46bf63012f464fa51f54-merged.mount: Deactivated successfully.
Nov 24 21:00:04 compute-0 podman[311872]: 2025-11-24 21:00:04.993089785 +0000 UTC m=+1.289077366 container remove 922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_antonelli, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 21:00:05 compute-0 systemd[1]: libpod-conmon-922d93c16197f09ec4761961461e79b3d3fddecbfe5c6d8622d87b17016dec79.scope: Deactivated successfully.
Nov 24 21:00:05 compute-0 sudo[311766]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:00:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:00:05 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:00:05 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:00:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev c5069b3d-28ed-4dc6-8a91-2de5b8b4dbfd does not exist
Nov 24 21:00:05 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev cf02ba52-f4e3-4f78-8608-e9f79e18576f does not exist
Nov 24 21:00:05 compute-0 sudo[311934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:00:05 compute-0 sudo[311934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:05 compute-0 sudo[311934]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:05 compute-0 sudo[311959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:00:05 compute-0 sudo[311959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:00:05 compute-0 sudo[311959]: pam_unix(sudo:session): session closed for user root
Nov 24 21:00:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:05.345+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:05 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:05.469+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:05 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:05 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:00:05 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:00:05 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:00:05.748 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 21:00:05 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:00:05.750 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 21:00:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:06.304+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:06 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:06.443+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:06 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:06 compute-0 ceph-mon[75677]: pgmap v2360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:06 compute-0 ceph-mon[75677]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 20 ])
Nov 24 21:00:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 41 slow ops, oldest one blocked for 4122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:07.284+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:07 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:07.484+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:07 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:07 compute-0 ceph-mon[75677]: Health check update: 41 slow ops, oldest one blocked for 4122 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:07 compute-0 podman[311984]: 2025-11-24 21:00:07.830643478 +0000 UTC m=+0.062637779 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:00:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:08.262+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:08 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:08.518+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:08 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:08 compute-0 ceph-mon[75677]: pgmap v2361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:08 compute-0 sshd-session[312004]: Invalid user ubuntu from 51.158.120.121 port 33314
Nov 24 21:00:09 compute-0 sshd-session[312004]: Received disconnect from 51.158.120.121 port 33314:11: Bye Bye [preauth]
Nov 24 21:00:09 compute-0 sshd-session[312004]: Disconnected from invalid user ubuntu 51.158.120.121 port 33314 [preauth]
Nov 24 21:00:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:09.309+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:09 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:00:09.418 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:00:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:00:09.418 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:00:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:00:09.418 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:00:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:09.479+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:09 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:10.289+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:10 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:10.480+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:10 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:10 compute-0 ceph-mon[75677]: pgmap v2362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:11.259+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:11 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:11.510+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:11 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:12.270+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:12 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:12.544+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:12 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:12 compute-0 ceph-mon[75677]: pgmap v2363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:12 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4132 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:13.268+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:13 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:13.569+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:13 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:13 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:00:13.753 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 21:00:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:14.290+0000 7f1a67169640 -1 osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:14 compute-0 ceph-osd[89640]: osd.1 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:14.532+0000 7f2ca3ee7640 -1 osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:14 compute-0 ceph-osd[88624]: osd.0 190 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 24 21:00:14 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 24 21:00:14 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 24 21:00:14 compute-0 ceph-mon[75677]: pgmap v2364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 21:00:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:15.264+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:15.577+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:15 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:15 compute-0 ceph-mon[75677]: osdmap e191: 3 total, 3 up, 3 in
Nov 24 21:00:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:16.215+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:16 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:00:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:00:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:00:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:00:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:16.619+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:16 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:16 compute-0 ceph-mon[75677]: pgmap v2366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 21:00:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/646202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:00:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/646202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:00:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1023 B/s wr, 13 op/s
Nov 24 21:00:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:17.235+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:17 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:17.631+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:17 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:18.271+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:18 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:18.613+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:18 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:18 compute-0 ceph-mon[75677]: pgmap v2367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1023 B/s wr, 13 op/s
Nov 24 21:00:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:18 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4137 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:18 compute-0 podman[312006]: 2025-11-24 21:00:18.878184594 +0000 UTC m=+0.090436730 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:00:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1023 B/s wr, 13 op/s
Nov 24 21:00:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:19.222+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:19 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:19.564+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:19 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:20.265+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:20 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:20.596+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:20 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:20 compute-0 ceph-mon[75677]: pgmap v2368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1023 B/s wr, 13 op/s
Nov 24 21:00:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 21:00:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:21.258+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:21 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:21.643+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:21 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:21 compute-0 podman[312026]: 2025-11-24 21:00:21.891806504 +0000 UTC m=+0.124003524 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 21:00:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 24 21:00:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 24 21:00:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 24 21:00:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:22.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:22.614+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:22 compute-0 ceph-mon[75677]: pgmap v2369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 21:00:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:22 compute-0 ceph-mon[75677]: osdmap e192: 3 total, 3 up, 3 in
Nov 24 21:00:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Nov 24 21:00:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:23.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:23.599+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:24.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:00:24
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'vms', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:00:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:24.591+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:24 compute-0 ceph-mon[75677]: pgmap v2371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Nov 24 21:00:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 921 B/s wr, 17 op/s
Nov 24 21:00:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:25.236+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:25.599+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:25 compute-0 ceph-mon[75677]: pgmap v2372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 921 B/s wr, 17 op/s
Nov 24 21:00:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:26.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:26.636+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 21:00:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:27.233+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:27.603+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:27 compute-0 ceph-mon[75677]: pgmap v2373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 21:00:27 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4142 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:28.190+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:28.603+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 21:00:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:29.224+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:29.600+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:29 compute-0 ceph-mon[75677]: pgmap v2374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 409 B/s wr, 11 op/s
Nov 24 21:00:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:30.245+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:30.650+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:31.256+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:31.636+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:31 compute-0 ceph-mon[75677]: pgmap v2375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4152 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:32.251+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:32.624+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:32 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4152 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:33.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:33.628+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:33 compute-0 ceph-mon[75677]: pgmap v2376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:34.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:34.670+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:35.201+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:00:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:00:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:35.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:35 compute-0 ceph-mon[75677]: pgmap v2377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:36.214+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:36.722+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:37.174+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:37.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:37 compute-0 ceph-mon[75677]: pgmap v2378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:38.176+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:38.683+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:38 compute-0 podman[312053]: 2025-11-24 21:00:38.840768438 +0000 UTC m=+0.067110381 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:00:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:38 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4157 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:39.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:39.724+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:39 compute-0 ceph-mon[75677]: pgmap v2379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:40.155+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:40.683+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:00:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:00:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:41.147+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:41.659+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:42 compute-0 ceph-mon[75677]: pgmap v2380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:42.099+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:42.678+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:43.143+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:43.714+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:44 compute-0 ceph-mon[75677]: pgmap v2381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:44.116+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:44.735+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:45.151+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:45.727+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:46 compute-0 ceph-mon[75677]: pgmap v2382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:46.124+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:46.736+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4167 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:47.121+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:47.699+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:48 compute-0 ceph-mon[75677]: pgmap v2383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:48 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4167 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:48.085+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:48.724+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:49.082+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:49.753+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:49 compute-0 podman[312072]: 2025-11-24 21:00:49.811675077 +0000 UTC m=+0.052475724 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 21:00:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:50.035+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:50 compute-0 ceph-mon[75677]: pgmap v2384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:50.777+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
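
Every couple of seconds the mon and mgr both publish a pgmap summary like the ones above; the state breakdown ("2 active+clean+laggy, 303 active+clean") is what flags the two stuck PGs. A small sketch, under the assumption that the summary format stays as shown here, for watching the laggy count over time:

    import re

    PGMAP = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")

    def laggy_pg_counts(lines):
        """Yield (pgmap_version, laggy_pgs) from pgmap summary log lines."""
        for line in lines:
            m = PGMAP.search(line)
            if not m:
                continue
            # group(3) is e.g. "2 active+clean+laggy, 303 active+clean"
            states = {}
            for part in m.group(3).split(","):
                count, name = part.strip().split(" ", 1)
                states[name] = int(count)
            yield int(m.group(1)), states.get("active+clean+laggy", 0)

Against this section, every pgmap from v2384 through v2393 reports the same two laggy PGs.
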
Nov 24 21:00:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:51.012+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:51.786+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:52.038+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:52 compute-0 ceph-mon[75677]: pgmap v2385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4172 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
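
The SLOW_OPS health check above is the headline symptom: 42 ops across osd.0 and osd.1, the oldest blocked for 4172 sec and still growing (4177 sec at 21:00:57, 4187 sec by 21:01:07). A sketch for pulling that trend out of a saved journal; the log path in the usage comment is an assumption about where this journal is persisted:

    import re

    SLOW_OPS = re.compile(
        r"Health check update: (\d+) slow ops, oldest one blocked for (\d+) sec"
    )

    def slow_ops_trend(lines):
        """Yield (total_slow_ops, oldest_blocked_sec) per SLOW_OPS update."""
        for line in lines:
            m = SLOW_OPS.search(line)
            if m:
                yield int(m.group(1)), int(m.group(2))

    # Usage (path is an assumption):
    # with open("/var/log/messages") as f:
    #     for total, oldest in slow_ops_trend(f):
    #         print(total, oldest)
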
Nov 24 21:00:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:52.751+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:52 compute-0 podman[312092]: 2025-11-24 21:00:52.913888838 +0000 UTC m=+0.132983437 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:00:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:53.023+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:53 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4172 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:53.796+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:53.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:54 compute-0 ceph-mon[75677]: pgmap v2386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:00:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:54.800+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:55.014+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:55.762+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:56.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:56 compute-0 ceph-mon[75677]: pgmap v2387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:56.790+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:57.068+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:00:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:57.835+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:58.110+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:58 compute-0 ceph-mon[75677]: pgmap v2388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:58 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4177 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:00:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:58.814+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:00:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:00:59.097+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:00:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:00:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:00:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:00:59.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:00:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:00.052+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:00 compute-0 ceph-mon[75677]: pgmap v2389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:00.748+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:01.009+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:01 compute-0 CROND[312119]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 21:01:01 compute-0 run-parts[312122]: (/etc/cron.hourly) starting 0anacron
Nov 24 21:01:01 compute-0 run-parts[312128]: (/etc/cron.hourly) finished 0anacron
Nov 24 21:01:01 compute-0 CROND[312118]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 21:01:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:01.756+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:02.037+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:02 compute-0 ceph-mon[75677]: pgmap v2390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:02.735+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:03.026+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:03 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4182 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:03.751+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:04.001+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:04 compute-0 ceph-mon[75677]: pgmap v2391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:04.799+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:05.022+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:05 compute-0 sudo[312129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:05 compute-0 sudo[312129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:05 compute-0 sudo[312129]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:05 compute-0 sudo[312154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:01:05 compute-0 sudo[312154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:05 compute-0 sudo[312154]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:05 compute-0 sudo[312179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:05 compute-0 sudo[312179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:05 compute-0 sudo[312179]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:05 compute-0 sudo[312204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 21:01:05 compute-0 sudo[312204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:05.769+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:06 compute-0 sudo[312204]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:01:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:01:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:06.034+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:06 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:06 compute-0 sudo[312248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:06 compute-0 sudo[312248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:06 compute-0 sudo[312248]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:06 compute-0 ceph-mon[75677]: pgmap v2392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:06 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:06 compute-0 sudo[312273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:01:06 compute-0 sudo[312273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:06 compute-0 sudo[312273]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:06 compute-0 sudo[312298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:06 compute-0 sudo[312298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:06 compute-0 sudo[312298]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:06 compute-0 sudo[312323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:01:06 compute-0 sudo[312323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:06.783+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:07 compute-0 sudo[312323]: pam_unix(sudo:session): session closed for user root
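
cephadm shows up in this log as triplets of sudo lines: a COMMAND= record (check-host, gather-facts, /bin/true probes), a pam_unix session-open, and a session-close. A minimal sketch that pairs open and close by the sudo PID to time each call; it assumes the 15-character syslog timestamp prefix seen here, which carries no year:

    import re
    from datetime import datetime

    PID = re.compile(r"sudo\[(\d+)\]")
    TS_FORMAT = "%b %d %H:%M:%S"  # e.g. "Nov 24 21:01:06"; syslog stamps have no year

    def sudo_session_lengths(lines):
        """Pair pam_unix session open/close lines by sudo PID; yield (pid, secs)."""
        opened = {}
        for line in lines:
            m = PID.search(line)
            if not m:
                continue
            pid = m.group(1)
            stamp = datetime.strptime(line[:15], TS_FORMAT)
            if "session opened for user root" in line:
                opened[pid] = stamp
            elif "session closed for user root" in line and pid in opened:
                yield pid, (stamp - opened.pop(pid)).total_seconds()

The check-host call (sudo[312204]) opens at 21:01:05 and closes at 21:01:06, so it held root for roughly a second.
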
Nov 24 21:01:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:07.044+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 02f759ed-5058-4104-b1ae-1a16d7b57311 does not exist
Nov 24 21:01:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ae24762f-b9c4-4f1e-bb40-0fb96516f29f does not exist
Nov 24 21:01:07 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7b5d2c8b-5332-4553-ab6e-e7b081554211 does not exist
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
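
The audit-channel lines above embed each dispatched mon command as a JSON array after cmd=; the config-key set entries omit it (their payloads are redacted, and handle_command prints a map-style form instead), but the dispatch lines parse cleanly. A sketch under that assumption:

    import json
    import re

    CMD = re.compile(r"cmd=(\[.*\]): dispatch")

    def dispatched_commands(lines):
        """Extract dispatched mon commands from audit-channel log lines."""
        for line in lines:
            m = CMD.search(line)
            if m:
                for cmd in json.loads(m.group(1)):
                    yield cmd.get("prefix"), cmd

This section alone dispatches config generate-minimal-conf twice, plus auth get for client.admin and client.bootstrap-osd and an osd tree query for destroyed OSDs.
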
Nov 24 21:01:07 compute-0 sudo[312379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:07 compute-0 sudo[312379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:07 compute-0 sudo[312379]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4187 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.237691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018067237776, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1991, "num_deletes": 549, "total_data_size": 2030281, "memory_usage": 2072768, "flush_reason": "Manual Compaction"}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018067257315, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1982866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69741, "largest_seqno": 71731, "table_properties": {"data_size": 1974566, "index_size": 4093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 29289, "raw_average_key_size": 23, "raw_value_size": 1953444, "raw_average_value_size": 1579, "num_data_blocks": 178, "num_entries": 1237, "num_filter_entries": 1237, "num_deletions": 549, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764017958, "oldest_key_time": 1764017958, "file_creation_time": 1764018067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 20081 microseconds, and 10022 cpu microseconds.
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:01:07 compute-0 sudo[312404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.257398) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1982866 bytes OK
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.257807) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.259862) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.259885) EVENT_LOG_v1 {"time_micros": 1764018067259876, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.259913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2019995, prev total WAL file size 2019995, number of live WAL files 2.
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.261006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1936KB)], [149(9582KB)]
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018067261118, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11794837, "oldest_snapshot_seqno": -1}
Nov 24 21:01:07 compute-0 sudo[312404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:07 compute-0 sudo[312404]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:07 compute-0 sudo[312429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:07 compute-0 sudo[312429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 13329 keys, 10224291 bytes, temperature: kUnknown
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018067347735, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10224291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10150933, "index_size": 38941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33349, "raw_key_size": 367941, "raw_average_key_size": 27, "raw_value_size": 9921899, "raw_average_value_size": 744, "num_data_blocks": 1415, "num_entries": 13329, "num_filter_entries": 13329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:01:07 compute-0 sudo[312429]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.348170) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10224291 bytes
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.372132) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.9 rd, 117.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(11.1) write-amplify(5.2) OK, records in: 14443, records dropped: 1114 output_compression: NoCompression
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.372188) EVENT_LOG_v1 {"time_micros": 1764018067372167, "job": 92, "event": "compaction_finished", "compaction_time_micros": 86819, "compaction_time_cpu_micros": 51162, "output_level": 6, "num_output_files": 1, "total_output_size": 10224291, "num_input_records": 14443, "num_output_records": 13329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018067373366, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018067375739, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.260874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.375872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.375879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.375881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.375883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:01:07 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:01:07.375886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:01:07 compute-0 sudo[312454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:01:07 compute-0 sudo[312454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:07.824+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:07 compute-0 podman[312519]: 2025-11-24 21:01:07.947030689 +0000 UTC m=+0.103970704 container create 0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:01:07 compute-0 podman[312519]: 2025-11-24 21:01:07.886528698 +0000 UTC m=+0.043468733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:01:07 compute-0 systemd[1]: Started libpod-conmon-0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189.scope.
Nov 24 21:01:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:01:08 compute-0 podman[312519]: 2025-11-24 21:01:08.060431076 +0000 UTC m=+0.217371091 container init 0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 21:01:08 compute-0 podman[312519]: 2025-11-24 21:01:08.074084625 +0000 UTC m=+0.231024650 container start 0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:01:08 compute-0 cranky_villani[312535]: 167 167
Nov 24 21:01:08 compute-0 podman[312519]: 2025-11-24 21:01:08.082737558 +0000 UTC m=+0.239677583 container attach 0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:01:08 compute-0 systemd[1]: libpod-0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189.scope: Deactivated successfully.
Nov 24 21:01:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:08.085+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:08 compute-0 podman[312540]: 2025-11-24 21:01:08.137578297 +0000 UTC m=+0.037552743 container died 0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 21:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d96e70effb8b41549e952abc5d569578cc3d2549d26d4ba06651f934f4739984-merged.mount: Deactivated successfully.
Nov 24 21:01:08 compute-0 podman[312540]: 2025-11-24 21:01:08.204200283 +0000 UTC m=+0.104174679 container remove 0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:01:08 compute-0 systemd[1]: libpod-conmon-0e047e1da861b2c49946f50357c5f2a7eed9c34145283b4b8a44a66cdd9b2189.scope: Deactivated successfully.
Nov 24 21:01:08 compute-0 ceph-mon[75677]: pgmap v2393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:08 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4187 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:08 compute-0 podman[312562]: 2025-11-24 21:01:08.482293711 +0000 UTC m=+0.075400434 container create 6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 21:01:08 compute-0 systemd[1]: Started libpod-conmon-6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a.scope.
Nov 24 21:01:08 compute-0 podman[312562]: 2025-11-24 21:01:08.452806066 +0000 UTC m=+0.045912849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:01:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dc9938d4b4320b28c9436501ff4635625ac2b5053efe885e6a886c995b8635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dc9938d4b4320b28c9436501ff4635625ac2b5053efe885e6a886c995b8635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dc9938d4b4320b28c9436501ff4635625ac2b5053efe885e6a886c995b8635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dc9938d4b4320b28c9436501ff4635625ac2b5053efe885e6a886c995b8635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dc9938d4b4320b28c9436501ff4635625ac2b5053efe885e6a886c995b8635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:08 compute-0 podman[312562]: 2025-11-24 21:01:08.606960622 +0000 UTC m=+0.200067345 container init 6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 21:01:08 compute-0 podman[312562]: 2025-11-24 21:01:08.618631737 +0000 UTC m=+0.211738460 container start 6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:01:08 compute-0 podman[312562]: 2025-11-24 21:01:08.632711346 +0000 UTC m=+0.225818069 container attach 6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:01:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:08.850+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:09.067+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:01:09.419 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:01:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:01:09.419 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:01:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:01:09.419 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:01:09 compute-0 peaceful_joliot[312579]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:01:09 compute-0 peaceful_joliot[312579]: --> relative data size: 1.0
Nov 24 21:01:09 compute-0 peaceful_joliot[312579]: --> All data devices are unavailable
Nov 24 21:01:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:09.805+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:09 compute-0 systemd[1]: libpod-6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a.scope: Deactivated successfully.
Nov 24 21:01:09 compute-0 systemd[1]: libpod-6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a.scope: Consumed 1.147s CPU time.
Nov 24 21:01:09 compute-0 podman[312607]: 2025-11-24 21:01:09.854293652 +0000 UTC m=+0.064131890 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:01:09 compute-0 podman[312618]: 2025-11-24 21:01:09.872483322 +0000 UTC m=+0.034086540 container died 6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 21:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9dc9938d4b4320b28c9436501ff4635625ac2b5053efe885e6a886c995b8635-merged.mount: Deactivated successfully.
Nov 24 21:01:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:10.078+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:10 compute-0 podman[312618]: 2025-11-24 21:01:10.106754148 +0000 UTC m=+0.268357316 container remove 6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 21:01:10 compute-0 systemd[1]: libpod-conmon-6e56cd106e5f6e02ff9d89a9bc6f1d7bdf2f428f0504972c2c453e5e21e0b43a.scope: Deactivated successfully.
Nov 24 21:01:10 compute-0 sudo[312454]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:10 compute-0 ceph-mon[75677]: pgmap v2394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:10 compute-0 sudo[312640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:10 compute-0 sudo[312640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:10 compute-0 sudo[312640]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:10 compute-0 sudo[312665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:01:10 compute-0 sudo[312665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:10 compute-0 sudo[312665]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:10 compute-0 sudo[312690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:10 compute-0 sudo[312690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:10 compute-0 sudo[312690]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:10 compute-0 sudo[312715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:01:10 compute-0 sudo[312715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:10.780+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:11.021796339 +0000 UTC m=+0.066719230 container create 0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 21:01:11 compute-0 systemd[1]: Started libpod-conmon-0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8.scope.
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:10.995285384 +0000 UTC m=+0.040208335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:01:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:11.102+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:11.126549513 +0000 UTC m=+0.171472384 container init 0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:11.14165882 +0000 UTC m=+0.186581721 container start 0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:01:11 compute-0 upbeat_bohr[312795]: 167 167
Nov 24 21:01:11 compute-0 systemd[1]: libpod-0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8.scope: Deactivated successfully.
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:11.147823447 +0000 UTC m=+0.192746378 container attach 0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:11.149645705 +0000 UTC m=+0.194568666 container died 0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-822ee38c84da2b63cbe1c00244368a0357cf3645bc8b6ce672cf7effb0d6a75d-merged.mount: Deactivated successfully.
Nov 24 21:01:11 compute-0 podman[312779]: 2025-11-24 21:01:11.211092242 +0000 UTC m=+0.256015143 container remove 0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 21:01:11 compute-0 systemd[1]: libpod-conmon-0e9433e58b4461b96d0ec69707164e18552ffe60aba3f7e94390237361c493a8.scope: Deactivated successfully.
Nov 24 21:01:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:11 compute-0 podman[312819]: 2025-11-24 21:01:11.377788137 +0000 UTC m=+0.044493901 container create ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 21:01:11 compute-0 systemd[1]: Started libpod-conmon-ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442.scope.
Nov 24 21:01:11 compute-0 podman[312819]: 2025-11-24 21:01:11.357979633 +0000 UTC m=+0.024685437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:01:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d857c5bcb8527e1125e06ba6cd9454d61f3c3fceb55461b18722c8cb1660cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d857c5bcb8527e1125e06ba6cd9454d61f3c3fceb55461b18722c8cb1660cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d857c5bcb8527e1125e06ba6cd9454d61f3c3fceb55461b18722c8cb1660cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d857c5bcb8527e1125e06ba6cd9454d61f3c3fceb55461b18722c8cb1660cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:11 compute-0 podman[312819]: 2025-11-24 21:01:11.494798841 +0000 UTC m=+0.161504705 container init ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 21:01:11 compute-0 podman[312819]: 2025-11-24 21:01:11.510270278 +0000 UTC m=+0.176976082 container start ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 21:01:11 compute-0 podman[312819]: 2025-11-24 21:01:11.515280054 +0000 UTC m=+0.181985888 container attach ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:01:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:11.787+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:12.076+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]: {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:     "0": [
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:         {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "devices": [
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "/dev/loop3"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             ],
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_name": "ceph_lv0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_size": "21470642176",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "name": "ceph_lv0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "tags": {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cluster_name": "ceph",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.crush_device_class": "",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.encrypted": "0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osd_id": "0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.type": "block",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.vdo": "0"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             },
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "type": "block",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "vg_name": "ceph_vg0"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:         }
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:     ],
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:     "1": [
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:         {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "devices": [
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "/dev/loop4"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             ],
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_name": "ceph_lv1",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_size": "21470642176",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "name": "ceph_lv1",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "tags": {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cluster_name": "ceph",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.crush_device_class": "",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.encrypted": "0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osd_id": "1",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.type": "block",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.vdo": "0"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             },
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "type": "block",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "vg_name": "ceph_vg1"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:         }
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:     ],
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:     "2": [
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:         {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "devices": [
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "/dev/loop5"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             ],
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_name": "ceph_lv2",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_size": "21470642176",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "name": "ceph_lv2",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "tags": {
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.cluster_name": "ceph",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.crush_device_class": "",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.encrypted": "0",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osd_id": "2",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.type": "block",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:                 "ceph.vdo": "0"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             },
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "type": "block",
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:             "vg_name": "ceph_vg2"
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:         }
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]:     ]
Nov 24 21:01:12 compute-0 eloquent_cartwright[312836]: }
Nov 24 21:01:12 compute-0 systemd[1]: libpod-ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442.scope: Deactivated successfully.
Nov 24 21:01:12 compute-0 conmon[312836]: conmon ba1b386060f8afa0501c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442.scope/container/memory.events
Nov 24 21:01:12 compute-0 podman[312819]: 2025-11-24 21:01:12.258712567 +0000 UTC m=+0.925418361 container died ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:01:12 compute-0 ceph-mon[75677]: pgmap v2395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d857c5bcb8527e1125e06ba6cd9454d61f3c3fceb55461b18722c8cb1660cf-merged.mount: Deactivated successfully.
Nov 24 21:01:12 compute-0 podman[312819]: 2025-11-24 21:01:12.333088913 +0000 UTC m=+0.999794677 container remove ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:01:12 compute-0 systemd[1]: libpod-conmon-ba1b386060f8afa0501c763f3bc590e854e1bb2440f5b8218f618fbf3a505442.scope: Deactivated successfully.
Nov 24 21:01:12 compute-0 sudo[312715]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:12 compute-0 sudo[312859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:12 compute-0 sudo[312859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:12 compute-0 sudo[312859]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:12 compute-0 sudo[312884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:01:12 compute-0 sudo[312884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:12 compute-0 sudo[312884]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:12 compute-0 sudo[312909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:12 compute-0 sudo[312909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:12 compute-0 sudo[312909]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:12 compute-0 sudo[312934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:01:12 compute-0 sudo[312934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:12.761+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:13.061+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.140324127 +0000 UTC m=+0.064743767 container create f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 21:01:13 compute-0 systemd[1]: Started libpod-conmon-f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277.scope.
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.112443165 +0000 UTC m=+0.036862845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:01:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.243347154 +0000 UTC m=+0.167766824 container init f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.25395533 +0000 UTC m=+0.178374960 container start f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.25988919 +0000 UTC m=+0.184308870 container attach f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:01:13 compute-0 infallible_haibt[313016]: 167 167
Nov 24 21:01:13 compute-0 systemd[1]: libpod-f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277.scope: Deactivated successfully.
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.263160549 +0000 UTC m=+0.187580189 container died f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:01:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d284430ce0deb6a3905361ec3d389af68ec2b94e4d7fb76ccf13aa434cd9769c-merged.mount: Deactivated successfully.
Nov 24 21:01:13 compute-0 podman[312999]: 2025-11-24 21:01:13.316457976 +0000 UTC m=+0.240877616 container remove f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 21:01:13 compute-0 systemd[1]: libpod-conmon-f75bb328ffe33e8635c8e651173f8ddff47c257786682c995d40e790183a1277.scope: Deactivated successfully.
Nov 24 21:01:13 compute-0 podman[313042]: 2025-11-24 21:01:13.571366468 +0000 UTC m=+0.065479637 container create 01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 21:01:13 compute-0 systemd[1]: Started libpod-conmon-01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815.scope.
Nov 24 21:01:13 compute-0 podman[313042]: 2025-11-24 21:01:13.550281879 +0000 UTC m=+0.044395058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:01:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98b69e1c34c2fb931a7c7e25eb4e4ff6aad658df16c23764c4e2da4451c3ec8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98b69e1c34c2fb931a7c7e25eb4e4ff6aad658df16c23764c4e2da4451c3ec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98b69e1c34c2fb931a7c7e25eb4e4ff6aad658df16c23764c4e2da4451c3ec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98b69e1c34c2fb931a7c7e25eb4e4ff6aad658df16c23764c4e2da4451c3ec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:01:13 compute-0 podman[313042]: 2025-11-24 21:01:13.682118084 +0000 UTC m=+0.176231283 container init 01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 21:01:13 compute-0 podman[313042]: 2025-11-24 21:01:13.696344277 +0000 UTC m=+0.190457446 container start 01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:01:13 compute-0 podman[313042]: 2025-11-24 21:01:13.702886294 +0000 UTC m=+0.196999473 container attach 01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 21:01:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:13.763+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:14.034+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:14 compute-0 ceph-mon[75677]: pgmap v2396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:14.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:14 compute-0 competent_sutherland[313058]: {
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "osd_id": 2,
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "type": "bluestore"
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:     },
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "osd_id": 1,
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "type": "bluestore"
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:     },
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "osd_id": 0,
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:         "type": "bluestore"
Nov 24 21:01:14 compute-0 competent_sutherland[313058]:     }
Nov 24 21:01:14 compute-0 competent_sutherland[313058]: }
Nov 24 21:01:14 compute-0 systemd[1]: libpod-01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815.scope: Deactivated successfully.
Nov 24 21:01:14 compute-0 podman[313042]: 2025-11-24 21:01:14.844866973 +0000 UTC m=+1.338980142 container died 01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:01:14 compute-0 systemd[1]: libpod-01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815.scope: Consumed 1.161s CPU time.
Nov 24 21:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e98b69e1c34c2fb931a7c7e25eb4e4ff6aad658df16c23764c4e2da4451c3ec8-merged.mount: Deactivated successfully.
Nov 24 21:01:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:14 compute-0 podman[313042]: 2025-11-24 21:01:14.942150866 +0000 UTC m=+1.436264035 container remove 01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 21:01:14 compute-0 systemd[1]: libpod-conmon-01b11c456770440ba6b32d49cda3f0d1a225a79fd070ca752cf4948f80dd4815.scope: Deactivated successfully.
Nov 24 21:01:14 compute-0 sudo[312934]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:01:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:01:15 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev cc270eaf-27f0-46fa-817a-4afd99ae9735 does not exist
Nov 24 21:01:15 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 33e2d3c2-3ad0-4328-9e46-8a8d5cc85fdc does not exist
Nov 24 21:01:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:15.043+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:15 compute-0 sudo[313105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:01:15 compute-0 sudo[313105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:15 compute-0 sudo[313105]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:15 compute-0 sudo[313130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:01:15 compute-0 sudo[313130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:01:15 compute-0 sudo[313130]: pam_unix(sudo:session): session closed for user root
Nov 24 21:01:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:15 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:01:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:15.813+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:16.055+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:16 compute-0 ceph-mon[75677]: pgmap v2397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:01:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131138386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:01:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:01:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131138386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:01:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:16.834+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:17.097+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1131138386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:01:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1131138386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:01:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:17 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4192 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:17.823+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:18.069+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:18 compute-0 ceph-mon[75677]: pgmap v2398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:18.785+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:19.110+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:19.805+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:20.104+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:20 compute-0 ceph-mon[75677]: pgmap v2399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:20.826+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:20 compute-0 podman[313155]: 2025-11-24 21:01:20.902117585 +0000 UTC m=+0.118254310 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:01:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:21.142+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:21.862+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:22.117+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4202 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:22 compute-0 ceph-mon[75677]: pgmap v2400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:22 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4202 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:22.879+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:23.095+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:23 compute-0 podman[313176]: 2025-11-24 21:01:23.916851036 +0000 UTC m=+0.137735464 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 21:01:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:23.927+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:24.053+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:01:24 compute-0 ceph-mon[75677]: pgmap v2401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:01:24
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:01:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:24.936+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:25.055+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:25.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:26.030+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:26 compute-0 ceph-mon[75677]: pgmap v2402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:26.937+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:27.043+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:27.955+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:28.077+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:28 compute-0 ceph-mon[75677]: pgmap v2403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:28 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4207 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:28.913+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:29.059+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:29.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:30.093+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:30 compute-0 ceph-mon[75677]: pgmap v2404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:30.968+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:31.051+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:31.945+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:32.038+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:32 compute-0 ceph-mon[75677]: pgmap v2405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:32.942+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:33.056+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:33.923+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:34.047+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:34 compute-0 ceph-mon[75677]: pgmap v2406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:34 compute-0 sshd-session[313203]: Invalid user ftpuser from 182.93.7.194 port 60704
Nov 24 21:01:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:34.948+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:35 compute-0 sshd-session[313203]: Received disconnect from 182.93.7.194 port 60704:11: Bye Bye [preauth]
Nov 24 21:01:35 compute-0 sshd-session[313203]: Disconnected from invalid user ftpuser 182.93.7.194 port 60704 [preauth]
Nov 24 21:01:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:35.087+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:01:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:01:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:35.998+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:36.125+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:36 compute-0 ceph-mon[75677]: pgmap v2407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:36.955+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:37.089+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:37 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4212 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:37.990+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:38.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:38 compute-0 ceph-mon[75677]: pgmap v2408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:38.951+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:39.103+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:39.957+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:40.134+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:40 compute-0 ceph-mon[75677]: pgmap v2409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:01:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:01:40 compute-0 podman[313207]: 2025-11-24 21:01:40.862474692 +0000 UTC m=+0.082667590 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 21:01:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:40.974+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:41.123+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:41.977+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:42.115+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:42 compute-0 ceph-mon[75677]: pgmap v2410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:42 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4222 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:43.017+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:43.095+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:43.976+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:44.087+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:44 compute-0 ceph-mon[75677]: pgmap v2411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:44.934+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:45.116+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:45 compute-0 sshd-session[313205]: Invalid user gerrit from 14.63.196.175 port 34612
Nov 24 21:01:45 compute-0 sshd-session[313205]: Received disconnect from 14.63.196.175 port 34612:11: Bye Bye [preauth]
Nov 24 21:01:45 compute-0 sshd-session[313205]: Disconnected from invalid user gerrit 14.63.196.175 port 34612 [preauth]
Nov 24 21:01:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:45.915+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:46.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:46 compute-0 ceph-mon[75677]: pgmap v2412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:46.925+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:47.210+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:47 compute-0 ceph-mon[75677]: pgmap v2413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:47.942+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:48.184+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:48 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4227 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:48.912+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:49.182+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:49 compute-0 ceph-mon[75677]: pgmap v2414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:49.950+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:50.160+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:50.946+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:51.137+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:51 compute-0 ceph-mon[75677]: pgmap v2415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:51 compute-0 podman[313227]: 2025-11-24 21:01:51.858492888 +0000 UTC m=+0.086836023 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 24 21:01:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:51.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:52.171+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:52.982+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:53.151+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:53 compute-0 ceph-mon[75677]: pgmap v2416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:53.974+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:54.106+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:01:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:54 compute-0 podman[313248]: 2025-11-24 21:01:54.908929041 +0000 UTC m=+0.135072893 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 21:01:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:54.988+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:55.081+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:55 compute-0 ceph-mon[75677]: pgmap v2417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:55.978+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:56.078+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:57.026+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:57.084+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:01:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:57 compute-0 ceph-mon[75677]: pgmap v2418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:57 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4232 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:01:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:58.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:58.093+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:01:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:01:59.071+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:01:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:01:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:01:59.086+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:01:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:01:59 compute-0 ceph-mon[75677]: pgmap v2419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:00.093+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:00.119+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:01.072+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:01.113+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:01 compute-0 ceph-mon[75677]: pgmap v2420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:02.086+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:02.145+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:02 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4242 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:03.077+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:03.109+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:03 compute-0 ceph-mon[75677]: pgmap v2421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:04.066+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:04.098+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:05.047+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:05.074+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:05 compute-0 ceph-mon[75677]: pgmap v2422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:06.041+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:06.041+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:07.002+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:07.002+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4247 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:07 compute-0 ceph-mon[75677]: pgmap v2423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:08.016+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:08.032+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:08 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4247 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:09.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:09.041+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:02:09.420 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:02:09.421 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:02:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:02:09.422 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:02:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:09 compute-0 ceph-mon[75677]: pgmap v2424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:10.003+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:10.024+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:10.992+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:11.047+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:11 compute-0 podman[313274]: 2025-11-24 21:02:11.861697947 +0000 UTC m=+0.079681809 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 21:02:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:11 compute-0 ceph-mon[75677]: pgmap v2425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:12.017+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:12.073+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:13.027+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:13.057+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:13 compute-0 ceph-mon[75677]: pgmap v2426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:14.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:14.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:15.003+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:15.063+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:15 compute-0 sudo[313294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:15 compute-0 sudo[313294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:15 compute-0 sudo[313294]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:15 compute-0 sudo[313319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:02:15 compute-0 sudo[313319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:15 compute-0 sudo[313319]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:15 compute-0 sudo[313344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:15 compute-0 sudo[313344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:15 compute-0 sudo[313344]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:15 compute-0 sudo[313369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:02:15 compute-0 sudo[313369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:15.988+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:16 compute-0 ceph-mon[75677]: pgmap v2427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:16.023+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:16 compute-0 sudo[313369]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:02:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e9628688-a0cc-41d0-957a-03d466e4a311 does not exist
Nov 24 21:02:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e9d25e28-bc88-435a-869c-e9660dd4a2c7 does not exist
Nov 24 21:02:16 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8de49da4-47e6-4e55-b43c-d69ff62a852a does not exist
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:02:16 compute-0 sudo[313426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:16 compute-0 sudo[313426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:16 compute-0 sudo[313426]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1221398270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:02:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:02:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1221398270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:02:16 compute-0 sudo[313451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:02:16 compute-0 sudo[313451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:16 compute-0 sudo[313451]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:16 compute-0 sudo[313476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:16 compute-0 sudo[313476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:16 compute-0 sudo[313476]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:16 compute-0 sudo[313501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:02:16 compute-0 sudo[313501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1221398270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1221398270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:02:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:17.008+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:17.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.117833699 +0000 UTC m=+0.051159291 container create e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:02:17 compute-0 systemd[1]: Started libpod-conmon-e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84.scope.
Nov 24 21:02:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4252 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.094750036 +0000 UTC m=+0.028075658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:02:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.224905436 +0000 UTC m=+0.158231048 container init e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.239277973 +0000 UTC m=+0.172603585 container start e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.244568086 +0000 UTC m=+0.177893708 container attach e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 21:02:17 compute-0 condescending_banzai[313585]: 167 167
Nov 24 21:02:17 compute-0 systemd[1]: libpod-e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84.scope: Deactivated successfully.
Nov 24 21:02:17 compute-0 conmon[313585]: conmon e6c1c2fdb043925774d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84.scope/container/memory.events
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.249848738 +0000 UTC m=+0.183174360 container died e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba543b35eeadd1f9a8e395598bb2e53bc21ab57292e2ebfc4edeec6b50321bc6-merged.mount: Deactivated successfully.
Nov 24 21:02:17 compute-0 podman[313568]: 2025-11-24 21:02:17.30553644 +0000 UTC m=+0.238862052 container remove e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:02:17 compute-0 systemd[1]: libpod-conmon-e6c1c2fdb043925774d099ad221b4ef70d00dd2027ffc462c2bee8d0de498a84.scope: Deactivated successfully.
Nov 24 21:02:17 compute-0 podman[313609]: 2025-11-24 21:02:17.575755165 +0000 UTC m=+0.082624138 container create b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:02:17 compute-0 systemd[1]: Started libpod-conmon-b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b.scope.
Nov 24 21:02:17 compute-0 podman[313609]: 2025-11-24 21:02:17.54626364 +0000 UTC m=+0.053132653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:02:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7bf7ceb66813012e65c3c4bffe5aeaa756d673f7f173e998b9de004da6a5d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7bf7ceb66813012e65c3c4bffe5aeaa756d673f7f173e998b9de004da6a5d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7bf7ceb66813012e65c3c4bffe5aeaa756d673f7f173e998b9de004da6a5d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7bf7ceb66813012e65c3c4bffe5aeaa756d673f7f173e998b9de004da6a5d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7bf7ceb66813012e65c3c4bffe5aeaa756d673f7f173e998b9de004da6a5d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:17 compute-0 podman[313609]: 2025-11-24 21:02:17.700817067 +0000 UTC m=+0.207686070 container init b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 21:02:17 compute-0 podman[313609]: 2025-11-24 21:02:17.712118552 +0000 UTC m=+0.218987515 container start b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 21:02:17 compute-0 podman[313609]: 2025-11-24 21:02:17.716304884 +0000 UTC m=+0.223173847 container attach b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:02:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:17.997+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:18 compute-0 ceph-mon[75677]: pgmap v2428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:18 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4252 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:18.018+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:18 compute-0 fervent_goldberg[313626]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:02:18 compute-0 fervent_goldberg[313626]: --> relative data size: 1.0
Nov 24 21:02:18 compute-0 fervent_goldberg[313626]: --> All data devices are unavailable
Nov 24 21:02:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:18 compute-0 systemd[1]: libpod-b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b.scope: Deactivated successfully.
Nov 24 21:02:18 compute-0 systemd[1]: libpod-b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b.scope: Consumed 1.218s CPU time.
Nov 24 21:02:18 compute-0 podman[313609]: 2025-11-24 21:02:18.964318242 +0000 UTC m=+1.471187225 container died b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 21:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff7bf7ceb66813012e65c3c4bffe5aeaa756d673f7f173e998b9de004da6a5d9-merged.mount: Deactivated successfully.
Nov 24 21:02:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:19.028+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:19.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:19 compute-0 podman[313609]: 2025-11-24 21:02:19.057184886 +0000 UTC m=+1.564053849 container remove b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 21:02:19 compute-0 systemd[1]: libpod-conmon-b7dd84183300945c7b4bbffda1bc89be6d7d8b82f50f0728e7fd264728e4af9b.scope: Deactivated successfully.
Nov 24 21:02:19 compute-0 sudo[313501]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:19 compute-0 sudo[313670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:19 compute-0 sudo[313670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:19 compute-0 sudo[313670]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:19 compute-0 sudo[313695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:02:19 compute-0 sudo[313695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:19 compute-0 sudo[313695]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:19 compute-0 sudo[313720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:19 compute-0 sudo[313720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:19 compute-0 sudo[313720]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:19 compute-0 sudo[313745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:02:19 compute-0 sudo[313745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:20 compute-0 ceph-mon[75677]: pgmap v2429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.036742216 +0000 UTC m=+0.071081648 container create d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 21:02:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:20.059+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:20 compute-0 systemd[1]: Started libpod-conmon-d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151.scope.
Nov 24 21:02:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:20.086+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.007514238 +0000 UTC m=+0.041853730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:02:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.139937637 +0000 UTC m=+0.174277099 container init d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.151066387 +0000 UTC m=+0.185405819 container start d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.155716773 +0000 UTC m=+0.190056195 container attach d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 21:02:20 compute-0 pedantic_booth[313830]: 167 167
Nov 24 21:02:20 compute-0 systemd[1]: libpod-d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151.scope: Deactivated successfully.
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.15894504 +0000 UTC m=+0.193284442 container died d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-483aff4efaaadd8dd847b7b53271d62a8e3ca74ea968143a93215cef4fcd87fd-merged.mount: Deactivated successfully.
Nov 24 21:02:20 compute-0 podman[313813]: 2025-11-24 21:02:20.210970413 +0000 UTC m=+0.245309845 container remove d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 21:02:20 compute-0 systemd[1]: libpod-conmon-d43b169405950aa796b34046e52aa0f658f264bb21faa908ae8292bf7b239151.scope: Deactivated successfully.
Nov 24 21:02:20 compute-0 podman[313854]: 2025-11-24 21:02:20.479649347 +0000 UTC m=+0.070248065 container create a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:02:20 compute-0 systemd[1]: Started libpod-conmon-a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8.scope.
Nov 24 21:02:20 compute-0 podman[313854]: 2025-11-24 21:02:20.451816906 +0000 UTC m=+0.042415654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:02:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adaec0ec9356c99020c48485762d66eefa57bfbaa0a34ad3dc54b63977342af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adaec0ec9356c99020c48485762d66eefa57bfbaa0a34ad3dc54b63977342af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adaec0ec9356c99020c48485762d66eefa57bfbaa0a34ad3dc54b63977342af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adaec0ec9356c99020c48485762d66eefa57bfbaa0a34ad3dc54b63977342af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:20 compute-0 podman[313854]: 2025-11-24 21:02:20.580184158 +0000 UTC m=+0.170782916 container init a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:02:20 compute-0 podman[313854]: 2025-11-24 21:02:20.59731608 +0000 UTC m=+0.187914798 container start a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 21:02:20 compute-0 podman[313854]: 2025-11-24 21:02:20.600872995 +0000 UTC m=+0.191471713 container attach a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:02:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:21.025+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:21.065+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:21 compute-0 suspicious_turing[313870]: {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:     "0": [
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:         {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "devices": [
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "/dev/loop3"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             ],
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_name": "ceph_lv0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_size": "21470642176",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "name": "ceph_lv0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "tags": {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cluster_name": "ceph",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.crush_device_class": "",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.encrypted": "0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osd_id": "0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.type": "block",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.vdo": "0"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             },
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "type": "block",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "vg_name": "ceph_vg0"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:         }
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:     ],
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:     "1": [
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:         {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "devices": [
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "/dev/loop4"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             ],
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_name": "ceph_lv1",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_size": "21470642176",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "name": "ceph_lv1",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "tags": {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cluster_name": "ceph",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.crush_device_class": "",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.encrypted": "0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osd_id": "1",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.type": "block",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.vdo": "0"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             },
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "type": "block",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "vg_name": "ceph_vg1"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:         }
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:     ],
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:     "2": [
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:         {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "devices": [
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "/dev/loop5"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             ],
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_name": "ceph_lv2",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_size": "21470642176",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "name": "ceph_lv2",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "tags": {
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.cluster_name": "ceph",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.crush_device_class": "",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.encrypted": "0",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osd_id": "2",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.type": "block",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:                 "ceph.vdo": "0"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             },
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "type": "block",
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:             "vg_name": "ceph_vg2"
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:         }
Nov 24 21:02:21 compute-0 suspicious_turing[313870]:     ]
Nov 24 21:02:21 compute-0 suspicious_turing[313870]: }
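The report above, emitted on stdout by the suspicious_turing container, is consistent with a per-OSD `ceph-volume lvm list --format json` listing: each top-level key is an OSD id, each value a list of logical-volume records, and the ceph.* tags carry the cluster fsid, OSD fsid, and backing device. A minimal parsing sketch, assuming Python 3 and the JSON captured to a file (the file name lvm_list.json is hypothetical; in the log the report only appears on the container's stdout):

    import json

    # Load a captured `ceph-volume lvm list --format json` report
    # (hypothetical capture file; the log shows it on stdout only).
    with open("lvm_list.json") as f:
        report = json.load(f)

    # The report is keyed by OSD id; each value is a list of LV records.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(
                f"osd.{osd_id}: lv={lv.get('lv_path')} "
                f"backing={','.join(lv.get('devices', []))} "
                f"osd_fsid={tags.get('ceph.osd_fsid')} "
                f"encrypted={tags.get('ceph.encrypted')}"
            )

Run against the report above, this would print one line per OSD (osd.0 on /dev/loop3-backed ceph_vg0/ceph_lv0, and so on), which is the mapping cephadm itself persists via the config-key updates seen later in this log.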
Nov 24 21:02:21 compute-0 systemd[1]: libpod-a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8.scope: Deactivated successfully.
Nov 24 21:02:21 compute-0 podman[313854]: 2025-11-24 21:02:21.379756225 +0000 UTC m=+0.970354903 container died a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7adaec0ec9356c99020c48485762d66eefa57bfbaa0a34ad3dc54b63977342af-merged.mount: Deactivated successfully.
Nov 24 21:02:21 compute-0 podman[313854]: 2025-11-24 21:02:21.447523092 +0000 UTC m=+1.038121770 container remove a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:02:21 compute-0 systemd[1]: libpod-conmon-a0118b16c16f95021e6c1e46f6e618029c5527f5c7d6cb835d4c539a250dc5f8.scope: Deactivated successfully.
Nov 24 21:02:21 compute-0 sudo[313745]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:21 compute-0 sudo[313892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:21 compute-0 sudo[313892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:21 compute-0 sudo[313892]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:21 compute-0 sudo[313917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:02:21 compute-0 sudo[313917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:21 compute-0 sudo[313917]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:21 compute-0 sudo[313942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:21 compute-0 sudo[313942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:21 compute-0 sudo[313942]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:21 compute-0 sudo[313967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:02:21 compute-0 sudo[313967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:21.982+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:22 compute-0 ceph-mon[75677]: pgmap v2430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:22.065+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
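The SLOW_OPS check here is the monitor's aggregation of the per-OSD get_health_metrics lines that repeat throughout this excerpt: 21 stuck requests on osd.0 (vms pool) plus 21 on osd.1 (default.rgw.log pool), the oldest blocked for over an hour. A hedged sketch of pulling the same summary programmatically, assuming a usable client keyring on this node and the JSON layout of recent `ceph health detail --format json` output (top-level "checks" map; exact fields may vary by release):

    import json
    import subprocess

    # Query cluster health as JSON via the standard `ceph` CLI.
    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)

    # SLOW_OPS, when raised, appears as one entry in the checks map.
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])      # e.g. "42 slow ops, ..."
        for item in slow.get("detail", []):
            print(" -", item.get("message"))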
Nov 24 21:02:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.28938573 +0000 UTC m=+0.062351443 container create 04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:02:22 compute-0 systemd[1]: Started libpod-conmon-04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41.scope.
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.261666032 +0000 UTC m=+0.034631795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:02:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.394349549 +0000 UTC m=+0.167315272 container init 04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.406401774 +0000 UTC m=+0.179367457 container start 04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.410661179 +0000 UTC m=+0.183626862 container attach 04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 21:02:22 compute-0 sharp_haslett[314050]: 167 167
Nov 24 21:02:22 compute-0 systemd[1]: libpod-04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41.scope: Deactivated successfully.
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.41590109 +0000 UTC m=+0.188866773 container died 04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:02:22 compute-0 podman[314047]: 2025-11-24 21:02:22.428801028 +0000 UTC m=+0.093063530 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cd57e822a1ec297058f063d406326c06a8a9540f9cd3ad32fbf9872c97cffc0-merged.mount: Deactivated successfully.
Nov 24 21:02:22 compute-0 podman[314033]: 2025-11-24 21:02:22.469121635 +0000 UTC m=+0.242087348 container remove 04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 21:02:22 compute-0 systemd[1]: libpod-conmon-04744f016a154da3267755159440177dad780a20e1f441bdf8d4baeb6d8d4a41.scope: Deactivated successfully.
Nov 24 21:02:22 compute-0 sshd-session[314086]: Accepted publickey for zuul from 192.168.122.30 port 54018 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 21:02:22 compute-0 systemd-logind[795]: New session 52 of user zuul.
Nov 24 21:02:22 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 24 21:02:22 compute-0 sshd-session[314086]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:02:22 compute-0 podman[314094]: 2025-11-24 21:02:22.706393013 +0000 UTC m=+0.042219821 container create da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:02:22 compute-0 systemd[1]: Started libpod-conmon-da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea.scope.
Nov 24 21:02:22 compute-0 podman[314094]: 2025-11-24 21:02:22.687839112 +0000 UTC m=+0.023665930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:02:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d076754bfd3bc04f6e8bf774bccfb52e386593c3f925a748d57a17f00bdf61b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d076754bfd3bc04f6e8bf774bccfb52e386593c3f925a748d57a17f00bdf61b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d076754bfd3bc04f6e8bf774bccfb52e386593c3f925a748d57a17f00bdf61b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d076754bfd3bc04f6e8bf774bccfb52e386593c3f925a748d57a17f00bdf61b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:02:22 compute-0 podman[314094]: 2025-11-24 21:02:22.845475872 +0000 UTC m=+0.181302740 container init da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:02:22 compute-0 podman[314094]: 2025-11-24 21:02:22.862423789 +0000 UTC m=+0.198250597 container start da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 21:02:22 compute-0 podman[314094]: 2025-11-24 21:02:22.866968472 +0000 UTC m=+0.202795290 container attach da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_spence, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:02:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:22.958+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:23.017+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:23 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4262 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:23.911+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:23 compute-0 musing_spence[314134]: {
Nov 24 21:02:23 compute-0 musing_spence[314134]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "osd_id": 2,
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "type": "bluestore"
Nov 24 21:02:23 compute-0 musing_spence[314134]:     },
Nov 24 21:02:23 compute-0 musing_spence[314134]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "osd_id": 1,
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "type": "bluestore"
Nov 24 21:02:23 compute-0 musing_spence[314134]:     },
Nov 24 21:02:23 compute-0 musing_spence[314134]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "osd_id": 0,
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:02:23 compute-0 musing_spence[314134]:         "type": "bluestore"
Nov 24 21:02:23 compute-0 musing_spence[314134]:     }
Nov 24 21:02:23 compute-0 musing_spence[314134]: }
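This second report, produced by the `ceph-volume ... raw list --format json` invocation logged above (sudo[313967]), is keyed by OSD UUID rather than OSD id. It describes the same three bluestore OSDs as the earlier lvm listing, and the two can be joined on the lvm report's ceph.osd_fsid tag, which matches the raw report's osd_uuid key. A sketch of that join, again with hypothetical capture file names:

    import json

    # Hypothetical captures of the two reports shown in this log:
    # `ceph-volume lvm list --format json` -> keyed by osd_id
    # `ceph-volume raw list --format json` -> keyed by osd_uuid
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # Join on ceph.osd_fsid (lvm tags) == osd_uuid (raw report key).
    for osd_id, lvs in lvm.items():
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            entry = raw.get(fsid, {})
            print(f"osd.{osd_id} {fsid} -> {entry.get('device')} "
                  f"({entry.get('type')})")

For the data above this yields osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0, osd.1 -> /dev/mapper/ceph_vg1-ceph_lv1, osd.2 -> /dev/mapper/ceph_vg2-ceph_lv2, i.e. the dm paths of the same logical volumes the lvm report listed.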
Nov 24 21:02:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:24.008+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:24 compute-0 systemd[1]: libpod-da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea.scope: Deactivated successfully.
Nov 24 21:02:24 compute-0 systemd[1]: libpod-da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea.scope: Consumed 1.185s CPU time.
Nov 24 21:02:24 compute-0 ceph-mon[75677]: pgmap v2431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:24 compute-0 podman[314190]: 2025-11-24 21:02:24.113290164 +0000 UTC m=+0.046841434 container died da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 21:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d076754bfd3bc04f6e8bf774bccfb52e386593c3f925a748d57a17f00bdf61b3-merged.mount: Deactivated successfully.
Nov 24 21:02:24 compute-0 podman[314190]: 2025-11-24 21:02:24.195406748 +0000 UTC m=+0.128957988 container remove da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_spence, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 21:02:24 compute-0 systemd[1]: libpod-conmon-da81e00190c14a83ae8a230d47379f5b279befaa9eff2e9630feccef65e94fea.scope: Deactivated successfully.
Nov 24 21:02:24 compute-0 sudo[313967]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:02:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:02:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:02:24 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9ab6f672-d7c8-48c0-8967-97c1b9333e96 does not exist
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 12bd979b-d98d-4892-8a7e-1e88c4b685b5 does not exist
Nov 24 21:02:24 compute-0 sudo[314204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:02:24 compute-0 sudo[314204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:24 compute-0 sudo[314204]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:02:24 compute-0 sudo[314229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:02:24 compute-0 sudo[314229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:02:24 compute-0 sudo[314229]: pam_unix(sudo:session): session closed for user root
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:02:24
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'vms']
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:02:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:24.925+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:25.048+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:02:25 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:02:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:25 compute-0 ceph-mgr[75975]: client.0 ms_handle_reset on v2:192.168.122.100:6800/103018990
Nov 24 21:02:25 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:02:25.568 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 21:02:25 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:02:25.569 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 21:02:25 compute-0 podman[314254]: 2025-11-24 21:02:25.916193422 +0000 UTC m=+0.141367702 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
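The container health_status events in this log are podman's periodic healthcheck runs; per the embedded config_data, each service container mounts /var/lib/openstack/healthchecks/<name> and runs the /openstack/healthcheck test inside it. A small sketch for invoking the same check on demand (container name assumed from the log; `podman healthcheck run` exits non-zero when the configured test fails; `podman inspect` also exposes the recorded state, though the field name differs across podman versions):

    import subprocess

    def is_healthy(name: str) -> bool:
        # `podman healthcheck run NAME` executes the container's
        # configured healthcheck once and exits 0 iff it passes.
        return subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True,
        ).returncode == 0

    print(is_healthy("multipathd"))  # True while the check passes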
Nov 24 21:02:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:25.953+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:26.071+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:26 compute-0 ceph-mon[75677]: pgmap v2432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:26.977+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:27.115+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4267 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:27.983+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:28.101+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:28 compute-0 ceph-mon[75677]: pgmap v2433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:28 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4267 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:28.995+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:29.084+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:29 compute-0 sshd-session[314096]: Connection closed by 192.168.122.30 port 54018
Nov 24 21:02:29 compute-0 sshd-session[314086]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:02:29 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 24 21:02:29 compute-0 systemd-logind[795]: Session 52 logged out. Waiting for processes to exit.
Nov 24 21:02:29 compute-0 systemd-logind[795]: Removed session 52.
Nov 24 21:02:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:30.020+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:30.112+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:30 compute-0 ceph-mon[75677]: pgmap v2434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:31.040+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:31.096+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:31 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:02:31.571 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 21:02:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:32.076+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:32.120+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:32 compute-0 ceph-mon[75677]: pgmap v2435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:33.080+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:33.157+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:34.030+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:34.173+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:34 compute-0 ceph-mon[75677]: pgmap v2436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:35.003+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:35.217+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:02:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:02:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:35.986+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:36.251+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:36 compute-0 ceph-mon[75677]: pgmap v2437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:36.942+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:36 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4272 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:37.236+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:37 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4272 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:37.978+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:38.245+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:38 compute-0 ceph-mon[75677]: pgmap v2438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:38 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:39.007+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:39.275+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:40.029+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:40.284+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:40 compute-0 ceph-mon[75677]: pgmap v2439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:02:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:02:40 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:40.984+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:41.294+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:42.024+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4282 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:42.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:42 compute-0 ceph-mon[75677]: pgmap v2440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:42 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4282 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:42 compute-0 podman[314489]: 2025-11-24 21:02:42.861079776 +0000 UTC m=+0.084997272 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:02:42 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:42.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:43.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:43.997+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:44.196+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:44 compute-0 ceph-mon[75677]: pgmap v2441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:44 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:44.990+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:45.205+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:46.006+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:46.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:46 compute-0 ceph-mon[75677]: pgmap v2442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:46.968+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:46 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:47.209+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4287 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:47.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:48.247+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:48 compute-0 ceph-mon[75677]: pgmap v2443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:48 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4287 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:48 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:49.023+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:49.287+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:50.029+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:50.327+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:50 compute-0 ceph-mon[75677]: pgmap v2444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:50 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:51.042+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:51.283+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:52.046+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:52.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:52 compute-0 ceph-mon[75677]: pgmap v2445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:52 compute-0 podman[314509]: 2025-11-24 21:02:52.858715706 +0000 UTC m=+0.085791514 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:02:52 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:53.000+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:53.205+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:54.027+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:54.189+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:02:54 compute-0 ceph-mon[75677]: pgmap v2446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:54 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:55.073+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:55.157+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:56.098+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:56.169+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:56 compute-0 ceph-mon[75677]: pgmap v2447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:56 compute-0 podman[314530]: 2025-11-24 21:02:56.89290535 +0000 UTC m=+0.121753623 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 21:02:56 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:57.051+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:57.166+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4292 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:02:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:57 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4292 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:02:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:58.068+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:58.207+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:58 compute-0 ceph-mon[75677]: pgmap v2448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:58 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:02:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:02:59.048+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:02:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:02:59.248+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:02:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:02:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:02:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:00.016+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:00.208+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:00 compute-0 ceph-mon[75677]: pgmap v2449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:00 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:01.053+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:01.182+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:02.008+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:02.170+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4302 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:02 compute-0 ceph-mon[75677]: pgmap v2450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:02 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4302 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:02 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:03.008+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:03.192+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:04.021+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:04.217+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:04 compute-0 ceph-mon[75677]: pgmap v2451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:04 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:05.070+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:05.225+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:06.026+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:06.183+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:06 compute-0 ceph-mon[75677]: pgmap v2452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:06 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:06.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:07.185+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4307 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:07.937+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:08.176+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:08 compute-0 ceph-mon[75677]: pgmap v2453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:08 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4307 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:08.936+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:08 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:09.194+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:03:09.421 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:03:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:03:09.422 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:03:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:03:09.422 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:03:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:09.951+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:10.240+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:10 compute-0 ceph-mon[75677]: pgmap v2454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:10.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:10 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:11.193+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:11.966+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:12.184+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:12 compute-0 ceph-mon[75677]: pgmap v2455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:12 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:12.991+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:13.189+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:13 compute-0 podman[314557]: 2025-11-24 21:03:13.847122335 +0000 UTC m=+0.071631722 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:03:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:13.966+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:14.181+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:14 compute-0 ceph-mon[75677]: pgmap v2456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:14.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:14 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:15.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:16.003+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:16.270+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:03:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3473686710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:03:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:03:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3473686710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:03:16 compute-0 ceph-mon[75677]: pgmap v2457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3473686710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:03:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3473686710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:03:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:16.981+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:16 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4312 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:17.279+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:17 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4312 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:17.989+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:18.290+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:18 compute-0 ceph-mon[75677]: pgmap v2458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:18.949+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:18 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:19.276+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:19 compute-0 ceph-mon[75677]: pgmap v2459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:19.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:20.313+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:20.960+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:20 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:21.351+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:21 compute-0 ceph-mon[75677]: pgmap v2460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:21.998+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:22.361+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:22 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:23.009+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:23 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4322 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:23.381+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:23 compute-0 podman[314578]: 2025-11-24 21:03:23.851637919 +0000 UTC m=+0.078457557 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 21:03:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:24.018+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:24 compute-0 ceph-mon[75677]: pgmap v2461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:24.425+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:03:24 compute-0 sudo[314601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:24 compute-0 sudo[314601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:24 compute-0 sudo[314601]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:03:24
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'images']
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:03:24 compute-0 sudo[314626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:03:24 compute-0 sudo[314626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:24 compute-0 sudo[314626]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:24 compute-0 sudo[314651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:24 compute-0 sudo[314651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:24 compute-0 sudo[314651]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:24 compute-0 sudo[314676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:03:24 compute-0 sudo[314676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:24 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:24.995+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:25.383+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:25 compute-0 sudo[314676]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:03:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:03:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:03:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:03:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:03:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:03:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 536f2d7f-2112-42dc-8b00-bab1bb0b3b44 does not exist
Nov 24 21:03:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8990f33f-b47a-4e6b-ad03-c3ea42a35534 does not exist
Nov 24 21:03:25 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 18a430dc-f0b5-490c-b036-989970c5bf8c does not exist
Nov 24 21:03:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:03:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:03:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:03:25 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:03:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:03:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:03:25 compute-0 sudo[314732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:25 compute-0 sudo[314732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:25 compute-0 sudo[314732]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:25 compute-0 sudo[314757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:03:25 compute-0 sudo[314757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:25 compute-0 sudo[314757]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:25 compute-0 sudo[314782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:25 compute-0 sudo[314782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:25 compute-0 sudo[314782]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:25 compute-0 sudo[314807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:03:25 compute-0 sudo[314807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:25.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:26 compute-0 ceph-mon[75677]: pgmap v2462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:03:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:03:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:03:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:03:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:03:26 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:03:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.230134786 +0000 UTC m=+0.070769619 container create 7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 21:03:26 compute-0 systemd[1]: Started libpod-conmon-7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2.scope.
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.200829156 +0000 UTC m=+0.041464049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:03:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.338415055 +0000 UTC m=+0.179049918 container init 7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.350247464 +0000 UTC m=+0.190882287 container start 7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gates, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.354429167 +0000 UTC m=+0.195064050 container attach 7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 21:03:26 compute-0 peaceful_gates[314889]: 167 167
Nov 24 21:03:26 compute-0 systemd[1]: libpod-7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2.scope: Deactivated successfully.
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.360909522 +0000 UTC m=+0.201544365 container died 7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gates, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:03:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:26.379+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-84031dcdca2990fd53af1d0185b1c5f710e91169ea5fff5829d6b706e517ddd5-merged.mount: Deactivated successfully.
Nov 24 21:03:26 compute-0 podman[314873]: 2025-11-24 21:03:26.416208423 +0000 UTC m=+0.256843256 container remove 7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 21:03:26 compute-0 systemd[1]: libpod-conmon-7d131b51fde08b7304d8b4d04b8633d602ec208cd42c05390cef4e73acc782c2.scope: Deactivated successfully.
Nov 24 21:03:26 compute-0 podman[314915]: 2025-11-24 21:03:26.614561541 +0000 UTC m=+0.052724323 container create e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kowalevski, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:03:26 compute-0 systemd[1]: Started libpod-conmon-e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b.scope.
Nov 24 21:03:26 compute-0 podman[314915]: 2025-11-24 21:03:26.59227436 +0000 UTC m=+0.030437162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:03:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b3556a2a0a6eab3caf3682c9e4587aab20890d381f2646ab9c2e24eb27348a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b3556a2a0a6eab3caf3682c9e4587aab20890d381f2646ab9c2e24eb27348a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b3556a2a0a6eab3caf3682c9e4587aab20890d381f2646ab9c2e24eb27348a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b3556a2a0a6eab3caf3682c9e4587aab20890d381f2646ab9c2e24eb27348a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b3556a2a0a6eab3caf3682c9e4587aab20890d381f2646ab9c2e24eb27348a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:26 compute-0 podman[314915]: 2025-11-24 21:03:26.716424517 +0000 UTC m=+0.154587359 container init e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 21:03:26 compute-0 podman[314915]: 2025-11-24 21:03:26.729754987 +0000 UTC m=+0.167917779 container start e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kowalevski, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:03:26 compute-0 podman[314915]: 2025-11-24 21:03:26.733804915 +0000 UTC m=+0.171967707 container attach e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kowalevski, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 21:03:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:26.983+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:26 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4327 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:27.426+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:27 compute-0 nostalgic_kowalevski[314931]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:03:27 compute-0 nostalgic_kowalevski[314931]: --> relative data size: 1.0
Nov 24 21:03:27 compute-0 nostalgic_kowalevski[314931]: --> All data devices are unavailable
Nov 24 21:03:27 compute-0 systemd[1]: libpod-e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b.scope: Deactivated successfully.
Nov 24 21:03:27 compute-0 systemd[1]: libpod-e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b.scope: Consumed 1.123s CPU time.
Nov 24 21:03:27 compute-0 podman[314915]: 2025-11-24 21:03:27.90380195 +0000 UTC m=+1.341964742 container died e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:03:27 compute-0 podman[314956]: 2025-11-24 21:03:27.941798684 +0000 UTC m=+0.167964340 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 21:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-93b3556a2a0a6eab3caf3682c9e4587aab20890d381f2646ab9c2e24eb27348a-merged.mount: Deactivated successfully.
Nov 24 21:03:27 compute-0 podman[314915]: 2025-11-24 21:03:27.974620029 +0000 UTC m=+1.412782781 container remove e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 24 21:03:27 compute-0 systemd[1]: libpod-conmon-e26ff0324227078eccbe2b09dead7ca84c98d1fde1d9a7d9aa2b3be2abe0098b.scope: Deactivated successfully.
Nov 24 21:03:28 compute-0 sudo[314807]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:28.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:28 compute-0 sudo[314996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:28 compute-0 sudo[314996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:28 compute-0 sudo[314996]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:28 compute-0 sudo[315021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:03:28 compute-0 sudo[315021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:28 compute-0 sudo[315021]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:28 compute-0 ceph-mon[75677]: pgmap v2463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:28 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4327 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:28 compute-0 sudo[315046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:28 compute-0 sudo[315046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:28 compute-0 sudo[315046]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:28 compute-0 sudo[315071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:03:28 compute-0 sudo[315071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:28.385+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:28 compute-0 podman[315138]: 2025-11-24 21:03:28.832065097 +0000 UTC m=+0.067388338 container create ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:03:28 compute-0 systemd[1]: Started libpod-conmon-ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de.scope.
Nov 24 21:03:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:03:28 compute-0 podman[315138]: 2025-11-24 21:03:28.806393915 +0000 UTC m=+0.041717216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:03:28 compute-0 podman[315138]: 2025-11-24 21:03:28.927196442 +0000 UTC m=+0.162519703 container init ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 24 21:03:28 compute-0 podman[315138]: 2025-11-24 21:03:28.939049391 +0000 UTC m=+0.174372632 container start ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:28 compute-0 youthful_gauss[315155]: 167 167
Nov 24 21:03:28 compute-0 systemd[1]: libpod-ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de.scope: Deactivated successfully.
Nov 24 21:03:28 compute-0 podman[315138]: 2025-11-24 21:03:28.979857512 +0000 UTC m=+0.215180813 container attach ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 21:03:28 compute-0 podman[315138]: 2025-11-24 21:03:28.980965861 +0000 UTC m=+0.216289112 container died ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:03:28 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:29.058+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9abbddf7205b7641ab4d7afb60284f17e22f8e648539fdb9794be18181e41d9a-merged.mount: Deactivated successfully.
Nov 24 21:03:29 compute-0 podman[315138]: 2025-11-24 21:03:29.220371486 +0000 UTC m=+0.455694737 container remove ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:03:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:29 compute-0 systemd[1]: libpod-conmon-ae81623a5608ad4d36edb81a5dbdebe7f063a7c8bd98bc55ef0682b3746603de.scope: Deactivated successfully.
Nov 24 21:03:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:29.359+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:29 compute-0 podman[315181]: 2025-11-24 21:03:29.465410062 +0000 UTC m=+0.055856127 container create fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:29 compute-0 systemd[1]: Started libpod-conmon-fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86.scope.
Nov 24 21:03:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:03:29 compute-0 podman[315181]: 2025-11-24 21:03:29.442072164 +0000 UTC m=+0.032518279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3eb20eb741f3dba11c41205644d371dd6fb6071d4f41bf8b03b5954130364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3eb20eb741f3dba11c41205644d371dd6fb6071d4f41bf8b03b5954130364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3eb20eb741f3dba11c41205644d371dd6fb6071d4f41bf8b03b5954130364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3eb20eb741f3dba11c41205644d371dd6fb6071d4f41bf8b03b5954130364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:29 compute-0 podman[315181]: 2025-11-24 21:03:29.559107189 +0000 UTC m=+0.149553284 container init fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:03:29 compute-0 podman[315181]: 2025-11-24 21:03:29.572801879 +0000 UTC m=+0.163247904 container start fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:03:29 compute-0 podman[315181]: 2025-11-24 21:03:29.583791675 +0000 UTC m=+0.174237740 container attach fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:29 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:03:29.981 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 21:03:29 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:03:29.984 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 21:03:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:30.092+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:30 compute-0 ceph-mon[75677]: pgmap v2464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:30.382+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]: {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:     "0": [
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:         {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "devices": [
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "/dev/loop3"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             ],
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_name": "ceph_lv0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_size": "21470642176",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "name": "ceph_lv0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "tags": {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cluster_name": "ceph",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.crush_device_class": "",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.encrypted": "0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osd_id": "0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.type": "block",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.vdo": "0"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             },
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "type": "block",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "vg_name": "ceph_vg0"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:         }
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:     ],
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:     "1": [
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:         {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "devices": [
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "/dev/loop4"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             ],
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_name": "ceph_lv1",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_size": "21470642176",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "name": "ceph_lv1",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "tags": {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cluster_name": "ceph",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.crush_device_class": "",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.encrypted": "0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osd_id": "1",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.type": "block",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.vdo": "0"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             },
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "type": "block",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "vg_name": "ceph_vg1"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:         }
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:     ],
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:     "2": [
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:         {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "devices": [
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "/dev/loop5"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             ],
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_name": "ceph_lv2",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_size": "21470642176",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "name": "ceph_lv2",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "tags": {
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.cluster_name": "ceph",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.crush_device_class": "",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.encrypted": "0",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osd_id": "2",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.type": "block",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:                 "ceph.vdo": "0"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             },
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "type": "block",
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:             "vg_name": "ceph_vg2"
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:         }
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]:     ]
Nov 24 21:03:30 compute-0 awesome_satoshi[315198]: }
Nov 24 21:03:30 compute-0 systemd[1]: libpod-fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86.scope: Deactivated successfully.
Nov 24 21:03:30 compute-0 podman[315181]: 2025-11-24 21:03:30.456820482 +0000 UTC m=+1.047266547 container died fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-60b3eb20eb741f3dba11c41205644d371dd6fb6071d4f41bf8b03b5954130364-merged.mount: Deactivated successfully.
Nov 24 21:03:30 compute-0 podman[315181]: 2025-11-24 21:03:30.5190475 +0000 UTC m=+1.109493525 container remove fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 21:03:30 compute-0 systemd[1]: libpod-conmon-fae6c18eeedc2a73dd86824ca1ba302f2562ac7273d01462a8007e15e7c64b86.scope: Deactivated successfully.
Nov 24 21:03:30 compute-0 sudo[315071]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:30 compute-0 sudo[315219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:30 compute-0 sudo[315219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:30 compute-0 sudo[315219]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:30 compute-0 sudo[315244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:03:30 compute-0 sudo[315244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:30 compute-0 sudo[315244]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:30 compute-0 sudo[315269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:30 compute-0 sudo[315269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:30 compute-0 sudo[315269]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:30 compute-0 sudo[315294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:03:30 compute-0 sudo[315294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:30 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:31.088+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.25120503 +0000 UTC m=+0.043146905 container create 7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:03:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:31 compute-0 systemd[1]: Started libpod-conmon-7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732.scope.
Nov 24 21:03:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.235734453 +0000 UTC m=+0.027676378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.342138641 +0000 UTC m=+0.134080536 container init 7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.34872562 +0000 UTC m=+0.140667495 container start 7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.35395187 +0000 UTC m=+0.145893755 container attach 7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:31 compute-0 recursing_carver[315373]: 167 167
Nov 24 21:03:31 compute-0 systemd[1]: libpod-7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732.scope: Deactivated successfully.
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.356241462 +0000 UTC m=+0.148183337 container died 7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9536193c33861c77f61701dc62b0aeb051fb8e123e121401d53b1c6e503131f4-merged.mount: Deactivated successfully.
Nov 24 21:03:31 compute-0 podman[315357]: 2025-11-24 21:03:31.407209726 +0000 UTC m=+0.199151641 container remove 7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_carver, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 21:03:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:31.414+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:31 compute-0 systemd[1]: libpod-conmon-7f0581028390d4f3bc65a66da83929d66900150d25df1e6df523b9d237606732.scope: Deactivated successfully.
Nov 24 21:03:31 compute-0 podman[315397]: 2025-11-24 21:03:31.608170004 +0000 UTC m=+0.053023731 container create 4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 21:03:31 compute-0 systemd[1]: Started libpod-conmon-4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd.scope.
Nov 24 21:03:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:03:31 compute-0 podman[315397]: 2025-11-24 21:03:31.586773358 +0000 UTC m=+0.031627165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2725ff2c65d51423a1f7733b8f7713dd4747cabf3671e48f048b36afe53363e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2725ff2c65d51423a1f7733b8f7713dd4747cabf3671e48f048b36afe53363e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2725ff2c65d51423a1f7733b8f7713dd4747cabf3671e48f048b36afe53363e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2725ff2c65d51423a1f7733b8f7713dd4747cabf3671e48f048b36afe53363e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:03:31 compute-0 podman[315397]: 2025-11-24 21:03:31.694944433 +0000 UTC m=+0.139798190 container init 4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 21:03:31 compute-0 podman[315397]: 2025-11-24 21:03:31.701704656 +0000 UTC m=+0.146558383 container start 4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 21:03:31 compute-0 podman[315397]: 2025-11-24 21:03:31.704556383 +0000 UTC m=+0.149410100 container attach 4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 21:03:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:32.054+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.204776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018212205116, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2448, "num_deletes": 637, "total_data_size": 2531086, "memory_usage": 2595040, "flush_reason": "Manual Compaction"}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018212224743, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2474512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71732, "largest_seqno": 74179, "table_properties": {"data_size": 2464467, "index_size": 5194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 35446, "raw_average_key_size": 23, "raw_value_size": 2438679, "raw_average_value_size": 1630, "num_data_blocks": 226, "num_entries": 1496, "num_filter_entries": 1496, "num_deletions": 637, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018068, "oldest_key_time": 1764018068, "file_creation_time": 1764018212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 20105 microseconds, and 7937 cpu microseconds.
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.224903) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2474512 bytes OK
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.224963) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.228431) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.228447) EVENT_LOG_v1 {"time_micros": 1764018212228441, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.228465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2518617, prev total WAL file size 2518617, number of live WAL files 2.
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.229712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353236' seq:72057594037927935, type:22 .. '6C6F676D0033373738' seq:0, type:0; will stop at (end)
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2416KB)], [152(9984KB)]
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018212229813, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 12698803, "oldest_snapshot_seqno": -1}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 13539 keys, 12405142 bytes, temperature: kUnknown
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018212313311, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12405142, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12327688, "index_size": 42525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33861, "raw_key_size": 372145, "raw_average_key_size": 27, "raw_value_size": 12092354, "raw_average_value_size": 893, "num_data_blocks": 1570, "num_entries": 13539, "num_filter_entries": 13539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.313676) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12405142 bytes
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.315810) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.9 rd, 148.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.8 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(10.1) write-amplify(5.0) OK, records in: 14825, records dropped: 1286 output_compression: NoCompression
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.315830) EVENT_LOG_v1 {"time_micros": 1764018212315819, "job": 94, "event": "compaction_finished", "compaction_time_micros": 83600, "compaction_time_cpu_micros": 33932, "output_level": 6, "num_output_files": 1, "total_output_size": 12405142, "num_input_records": 14825, "num_output_records": 13539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018212316360, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: pgmap v2465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018212318380, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.229560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.319541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.319550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.319553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.319556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:32 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:32.319560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:32.385+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:32 compute-0 admiring_bell[315411]: {
Nov 24 21:03:32 compute-0 admiring_bell[315411]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "osd_id": 2,
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "type": "bluestore"
Nov 24 21:03:32 compute-0 admiring_bell[315411]:     },
Nov 24 21:03:32 compute-0 admiring_bell[315411]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "osd_id": 1,
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "type": "bluestore"
Nov 24 21:03:32 compute-0 admiring_bell[315411]:     },
Nov 24 21:03:32 compute-0 admiring_bell[315411]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "osd_id": 0,
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:03:32 compute-0 admiring_bell[315411]:         "type": "bluestore"
Nov 24 21:03:32 compute-0 admiring_bell[315411]:     }
Nov 24 21:03:32 compute-0 admiring_bell[315411]: }
Nov 24 21:03:32 compute-0 systemd[1]: libpod-4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd.scope: Deactivated successfully.
Nov 24 21:03:32 compute-0 systemd[1]: libpod-4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd.scope: Consumed 1.049s CPU time.
Nov 24 21:03:32 compute-0 podman[315397]: 2025-11-24 21:03:32.749500186 +0000 UTC m=+1.194353943 container died 4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2725ff2c65d51423a1f7733b8f7713dd4747cabf3671e48f048b36afe53363e3-merged.mount: Deactivated successfully.
Nov 24 21:03:32 compute-0 podman[315397]: 2025-11-24 21:03:32.829540864 +0000 UTC m=+1.274394621 container remove 4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:03:32 compute-0 systemd[1]: libpod-conmon-4407b1a0880fea1bda18d95b59521d7187f4d653ce8c30b638651179bb0b63bd.scope: Deactivated successfully.
Nov 24 21:03:32 compute-0 sudo[315294]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:03:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:03:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:03:32 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:03:32 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 4ea307b4-9947-4f1b-8778-99fb2e342fab does not exist
Nov 24 21:03:32 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5c7365af-c518-4783-9578-e1a73cd84c2a does not exist
Nov 24 21:03:32 compute-0 sudo[315458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:03:32 compute-0 sudo[315458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:32 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:03:32.986 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 21:03:32 compute-0 sudo[315458]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:32 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:33.014+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:33 compute-0 sudo[315483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:03:33 compute-0 sudo[315483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:03:33 compute-0 sudo[315483]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:33.365+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:03:33 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:03:33 compute-0 ceph-mon[75677]: pgmap v2466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:34.050+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:34 compute-0 sshd-session[315508]: Accepted publickey for zuul from 192.168.122.30 port 56840 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 21:03:34 compute-0 systemd-logind[795]: New session 53 of user zuul.
Nov 24 21:03:34 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 24 21:03:34 compute-0 sshd-session[315508]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:03:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:34.321+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:34 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:35.016+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:35 compute-0 sudo[315604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Nov 24 21:03:35 compute-0 sudo[315604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 sudo[315604]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Nov 24 21:03:35 compute-0 sudo[315630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 groupadd[315632]: group added to /etc/group: name=podman, GID=42479
Nov 24 21:03:35 compute-0 groupadd[315632]: group added to /etc/gshadow: name=podman
Nov 24 21:03:35 compute-0 groupadd[315632]: new group: name=podman, GID=42479
Nov 24 21:03:35 compute-0 sudo[315630]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Nov 24 21:03:35 compute-0 sudo[315638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:35.359+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:35 compute-0 usermod[315640]: add 'zuul' to group 'podman'
Nov 24 21:03:35 compute-0 usermod[315640]: add 'zuul' to shadow group 'podman'
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:03:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:03:35 compute-0 sudo[315638]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Nov 24 21:03:35 compute-0 sudo[315647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 sudo[315647]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Nov 24 21:03:35 compute-0 sudo[315650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 sudo[315650]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Nov 24 21:03:35 compute-0 sudo[315653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 sudo[315653]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Nov 24 21:03:35 compute-0 sudo[315656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 sudo[315656]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Nov 24 21:03:35 compute-0 sudo[315659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 sudo[315659]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:35 compute-0 sudo[315662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Nov 24 21:03:35 compute-0 sudo[315662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:35 compute-0 systemd[1]: Reloading.
Nov 24 21:03:35 compute-0 systemd-rc-local-generator[315693]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:03:35 compute-0 systemd-sysv-generator[315696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:03:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:35 compute-0 ceph-mon[75677]: pgmap v2467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:35.999+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:36 compute-0 sudo[315662]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 sudo[315700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Nov 24 21:03:36 compute-0 sudo[315700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 sudo[315700]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 sudo[315703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Nov 24 21:03:36 compute-0 sudo[315703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 systemd[1]: Reloading.
Nov 24 21:03:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:36.401+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:36 compute-0 systemd-sysv-generator[315735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 21:03:36 compute-0 systemd-rc-local-generator[315730]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 21:03:36 compute-0 systemd[1]: Starting Podman API Socket...
Nov 24 21:03:36 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 24 21:03:36 compute-0 sudo[315703]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 sudo[315741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Nov 24 21:03:36 compute-0 sudo[315741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 sudo[315741]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 sudo[315744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Nov 24 21:03:36 compute-0 sudo[315744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 sudo[315744]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 sudo[315747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Nov 24 21:03:36 compute-0 sudo[315747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 sudo[315747]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 sudo[315750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Nov 24 21:03:36 compute-0 sudo[315750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 sudo[315750]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:36 compute-0 sudo[315753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Nov 24 21:03:36 compute-0 sudo[315753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:36.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:36 compute-0 sudo[315753]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:37 compute-0 sudo[315756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Nov 24 21:03:37 compute-0 dbus-broker-launch[775]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 24 21:03:37 compute-0 sudo[315756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:37 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Nov 24 21:03:37 compute-0 systemd[1]: Closed Podman API Socket.
Nov 24 21:03:37 compute-0 systemd[1]: Stopping Podman API Socket...
Nov 24 21:03:37 compute-0 systemd[1]: Starting Podman API Socket...
Nov 24 21:03:37 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 24 21:03:37 compute-0 sudo[315756]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:37 compute-0 sudo[315607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Nov 24 21:03:37 compute-0 sudo[315607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:37 compute-0 sudo[315607]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:37 compute-0 sshd-session[315763]: Accepted publickey for zuul from 192.168.122.30 port 56846 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 21:03:37 compute-0 systemd-logind[795]: New session 54 of user zuul.
Nov 24 21:03:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:37.410+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:37 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 24 21:03:37 compute-0 sshd-session[315763]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:03:37 compute-0 systemd[1]: Starting Podman API Service...
Nov 24 21:03:37 compute-0 systemd[1]: Started Podman API Service.
Nov 24 21:03:37 compute-0 podman[315767]: time="2025-11-24T21:03:37Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 24 21:03:37 compute-0 podman[315767]: time="2025-11-24T21:03:37Z" level=info msg="Setting parallel job count to 25"
Nov 24 21:03:37 compute-0 podman[315767]: time="2025-11-24T21:03:37Z" level=info msg="Using sqlite as database backend"
Nov 24 21:03:37 compute-0 podman[315767]: time="2025-11-24T21:03:37Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 24 21:03:37 compute-0 podman[315767]: time="2025-11-24T21:03:37Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 24 21:03:37 compute-0 podman[315767]: time="2025-11-24T21:03:37Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 24 21:03:37 compute-0 podman[315767]: @ - - [24/Nov/2025:21:03:37 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 24 21:03:37 compute-0 podman[315767]: @ - - [24/Nov/2025:21:03:37 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24894 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 24 21:03:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:37.936+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:37 compute-0 ceph-mon[75677]: pgmap v2468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:37 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4332 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:38.375+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:38 compute-0 sudo[315780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Nov 24 21:03:38 compute-0 sudo[315780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:38 compute-0 sudo[315780]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:38 compute-0 sudo[315805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Nov 24 21:03:38 compute-0 sudo[315805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:03:38 compute-0 sudo[315805]: pam_unix(sudo:session): session closed for user root
Nov 24 21:03:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:38.925+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:39.424+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:39.890+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:39 compute-0 ceph-mon[75677]: pgmap v2469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:40.384+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:03:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:03:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:40.894+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:41.410+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:41.875+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:41 compute-0 ceph-mon[75677]: pgmap v2470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:42.442+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:42.839+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:43 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4342 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:43.469+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:43.878+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:44 compute-0 ceph-mon[75677]: pgmap v2471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:44.421+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:44 compute-0 podman[315830]: 2025-11-24 21:03:44.869671149 +0000 UTC m=+0.095549777 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:03:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:44.870+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:45.410+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:45.914+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:46 compute-0 ceph-mon[75677]: pgmap v2472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:46.433+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:46.914+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4347 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:47.390+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:47.867+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:48 compute-0 ceph-mon[75677]: pgmap v2473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:48 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4347 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:48.345+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:48.826+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:49.326+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:49.788+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:50 compute-0 ceph-mon[75677]: pgmap v2474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.115376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018230115472, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 548, "num_deletes": 302, "total_data_size": 366571, "memory_usage": 377872, "flush_reason": "Manual Compaction"}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018230121179, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 360980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74180, "largest_seqno": 74727, "table_properties": {"data_size": 358020, "index_size": 803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8975, "raw_average_key_size": 21, "raw_value_size": 351388, "raw_average_value_size": 822, "num_data_blocks": 35, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 302, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018212, "oldest_key_time": 1764018212, "file_creation_time": 1764018230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 5853 microseconds, and 3504 cpu microseconds.
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.121240) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 360980 bytes OK
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.121267) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.123172) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.123200) EVENT_LOG_v1 {"time_micros": 1764018230123189, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.123229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 363153, prev total WAL file size 363153, number of live WAL files 2.
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.123952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(352KB)], [155(11MB)]
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018230124001, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 12766122, "oldest_snapshot_seqno": -1}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 13351 keys, 11241078 bytes, temperature: kUnknown
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018230211129, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 11241078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11166012, "index_size": 40585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33413, "raw_key_size": 368786, "raw_average_key_size": 27, "raw_value_size": 10935035, "raw_average_value_size": 819, "num_data_blocks": 1483, "num_entries": 13351, "num_filter_entries": 13351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.211541) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 11241078 bytes
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.213475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.3 rd, 128.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(66.5) write-amplify(31.1) OK, records in: 13966, records dropped: 615 output_compression: NoCompression
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.213510) EVENT_LOG_v1 {"time_micros": 1764018230213493, "job": 96, "event": "compaction_finished", "compaction_time_micros": 87283, "compaction_time_cpu_micros": 38637, "output_level": 6, "num_output_files": 1, "total_output_size": 11241078, "num_input_records": 13966, "num_output_records": 13351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018230214065, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018230218648, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.123818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.218744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.218750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.218753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.218756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:50 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:03:50.218759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:03:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:50.302+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:50.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:51.293+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:51.808+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:52 compute-0 ceph-mon[75677]: pgmap v2475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4352 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:52.294+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:52 compute-0 podman[315767]: time="2025-11-24T21:03:52Z" level=info msg="Received shutdown.Stop(), terminating!" PID=315767
Nov 24 21:03:52 compute-0 systemd[1]: podman.service: Deactivated successfully.
Nov 24 21:03:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:52.818+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:53 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4352 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:53.307+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:53.857+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:54 compute-0 ceph-mon[75677]: pgmap v2476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:54.273+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:03:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:03:54 compute-0 podman[315849]: 2025-11-24 21:03:54.875057207 +0000 UTC m=+0.099086033 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:03:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:54.881+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:55.281+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:55.868+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:56 compute-0 ceph-mon[75677]: pgmap v2477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:56.248+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:56.868+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:03:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:57.287+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:57.886+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:58 compute-0 ceph-mon[75677]: pgmap v2478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:58 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4357 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:03:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:58.295+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:58.857+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:58 compute-0 podman[315868]: 2025-11-24 21:03:58.887676081 +0000 UTC m=+0.120835319 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 21:03:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:03:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:03:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:03:59.337+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:03:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:03:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:03:59.831+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:03:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:00 compute-0 ceph-mon[75677]: pgmap v2479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:00.377+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:00.814+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:01.371+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:01.820+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:02 compute-0 ceph-mon[75677]: pgmap v2480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:02.376+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:02.823+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:03.346+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:03.832+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:04 compute-0 ceph-mon[75677]: pgmap v2481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:04.331+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:04.803+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:05.327+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:05.775+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:06 compute-0 ceph-mon[75677]: pgmap v2482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:06.372+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:06.730+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:07 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4362 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:07.395+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:07.777+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:04:08.043 165944 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '62:e6:4a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4a:e9:db:9e:f2:ee'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 21:04:08 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:04:08.045 165944 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 21:04:08 compute-0 ceph-mon[75677]: pgmap v2483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:08.386+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:08.748+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:09.386+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:04:09.422 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:04:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:04:09.423 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:04:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:04:09.423 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:04:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:09.796+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:10 compute-0 ceph-mon[75677]: pgmap v2484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:10.421+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:10.764+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:11.462+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:11.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:12 compute-0 ceph-mon[75677]: pgmap v2485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:12 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4372 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:12.478+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:12.755+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:13.490+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:13.738+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:14 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:04:14.048 165944 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2981bd26-4511-4552-b2b8-c2a668887f38, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 21:04:14 compute-0 ceph-mon[75677]: pgmap v2486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:14.472+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:14.733+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:15.516+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:15.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:15 compute-0 podman[315895]: 2025-11-24 21:04:15.875323484 +0000 UTC m=+0.096622066 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:04:16 compute-0 ceph-mon[75677]: pgmap v2487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:04:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491606980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:04:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:04:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491606980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:04:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:16.522+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:16.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4376 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1491606980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:04:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1491606980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:04:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:17.546+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:17.674+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:18 compute-0 ceph-mon[75677]: pgmap v2488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:18 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4376 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:18.578+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:18.682+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:19.587+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:19.730+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:20 compute-0 ceph-mon[75677]: pgmap v2489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:20.541+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:20.697+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:21.559+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:21.696+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:22 compute-0 ceph-mon[75677]: pgmap v2490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:22.597+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:22.703+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:23.646+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:23.753+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:04:24 compute-0 ceph-mon[75677]: pgmap v2491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:04:24
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups', 'volumes']
Nov 24 21:04:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:04:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:24.694+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:24.706+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:25.670+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:25.698+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:25 compute-0 podman[315914]: 2025-11-24 21:04:25.887614167 +0000 UTC m=+0.118296090 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 24 21:04:26 compute-0 ceph-mon[75677]: pgmap v2492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:26.643+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:26.734+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4381 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:27 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4381 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:27.619+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:27.731+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:28 compute-0 ceph-mon[75677]: pgmap v2493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:28.625+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:28.727+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:29.660+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:29.771+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:29 compute-0 podman[315936]: 2025-11-24 21:04:29.929751109 +0000 UTC m=+0.148705110 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:04:30 compute-0 sshd-session[315934]: Invalid user admin from 182.93.7.194 port 40932
Nov 24 21:04:30 compute-0 ceph-mon[75677]: pgmap v2494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:30 compute-0 sshd-session[315934]: Received disconnect from 182.93.7.194 port 40932:11: Bye Bye [preauth]
Nov 24 21:04:30 compute-0 sshd-session[315934]: Disconnected from invalid user admin 182.93.7.194 port 40932 [preauth]
Nov 24 21:04:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:30.649+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:30.757+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:31.627+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:31.792+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4391 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:32 compute-0 ceph-mon[75677]: pgmap v2495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:32 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4391 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:32.587+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:32.824+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:33 compute-0 sudo[315962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:33 compute-0 sudo[315962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:33 compute-0 sudo[315962]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:33 compute-0 sudo[315987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:04:33 compute-0 sudo[315987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:33 compute-0 sudo[315987]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:33 compute-0 sudo[316012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:33 compute-0 sudo[316012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:33 compute-0 sudo[316012]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:33 compute-0 sudo[316037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 21:04:33 compute-0 sudo[316037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
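[note] The sudo triplet above is cephadm's standard remote-execution pattern from the mgr: /bin/true probes that passwordless sudo works, /bin/which python3 locates an interpreter, and that interpreter then runs the cephadm binary previously copied to /var/lib/ceph/<fsid>/cephadm.<digest>. The `ls` subcommand enumerates the daemons deployed on this host; the manual equivalent on the node is simply:

    $ sudo cephadm ls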
Nov 24 21:04:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:33.545+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:33.822+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:34 compute-0 podman[316136]: 2025-11-24 21:04:34.098579776 +0000 UTC m=+0.096302007 container exec ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:04:34 compute-0 podman[316136]: 2025-11-24 21:04:34.229158827 +0000 UTC m=+0.226881048 container exec_died ba22cb483d92db41f62976a096e335d9b4d8b58ba9d7bf5b1172672abde7db9e (image=quay.io/ceph/ceph:v18, name=ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:04:34 compute-0 ceph-mon[75677]: pgmap v2496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:34.579+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:34.834+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:35 compute-0 sudo[316037]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:04:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:04:35 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:35 compute-0 sudo[316295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:35 compute-0 sudo[316295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:35 compute-0 sudo[316295]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:35 compute-0 sudo[316320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:04:35 compute-0 sudo[316320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:04:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
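[note] The autoscaler's pg target is the pool's capacity ratio x bias x (OSD count x mon_target_pg_per_osd), then quantized to a power of two within the configured bounds. Assuming this host's three 20 GiB OSDs and the default mon_target_pg_per_osd = 100, the multiplier is 300, which reproduces the logged numbers:

    'vms':                0.0008637525843263658 x 1.0 x 300 = 0.2591257753  -> 32
    '.mgr':               7.185749983720779e-06 x 1.0 x 300 = 0.0021557250  -> 1
    'cephfs.cephfs.meta': 5.087256625643029e-07 x 4.0 x 300 = 0.0006104708  -> 16

The trailing 64411926528 on each effective_target_ratio line is the subtree capacity in bytes, about 60 GiB, matching the pgmap totals. The tiny ideal values still "quantize" to the current 32 or 16 because the autoscaler only changes pg_num when the ideal differs from the current value by more than its threshold (a factor of 3 by default).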
Nov 24 21:04:35 compute-0 sudo[316320]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:35 compute-0 sudo[316345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:35 compute-0 sudo[316345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:35 compute-0 sudo[316345]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:35.533+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:35 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:35 compute-0 sudo[316370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:04:35 compute-0 sudo[316370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:35.818+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:36 compute-0 sudo[316370]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:04:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:04:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:04:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 74482af7-8e88-482f-aa94-282325b7fde2 does not exist
Nov 24 21:04:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 587b278d-7bf5-4d6f-822b-72f6bffa3c16 does not exist
Nov 24 21:04:36 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 879d9637-39b9-4840-adb5-310301200393 does not exist
Nov 24 21:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:04:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:04:36 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:04:36 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
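[note] This burst of mon_commands (config generate-minimal-conf, auth get client.admin, auth get client.bootstrap-osd, osd tree destroyed) is what the cephadm mgr module runs when it rewrites the host's minimal /etc/ceph/ceph.conf and refreshes keyrings before applying an OSD spec. The same minimal conf can be produced by hand:

    $ sudo cephadm shell -- ceph config generate-minimal-conf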
Nov 24 21:04:36 compute-0 sudo[316426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:36 compute-0 sudo[316426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:36 compute-0 sudo[316426]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:36 compute-0 sudo[316451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:04:36 compute-0 sudo[316451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:36 compute-0 sudo[316451]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:36.525+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:36 compute-0 sudo[316476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:36 compute-0 sudo[316476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:36 compute-0 sudo[316476]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:36 compute-0 ceph-mon[75677]: pgmap v2497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:04:36 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:04:36 compute-0 sudo[316501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:04:36 compute-0 sudo[316501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:36.780+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.149444011 +0000 UTC m=+0.068603280 container create 16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cori, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:04:37 compute-0 systemd[1]: Started libpod-conmon-16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6.scope.
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.120340267 +0000 UTC m=+0.039499576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:04:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.241733819 +0000 UTC m=+0.160893118 container init 16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.254628537 +0000 UTC m=+0.173787786 container start 16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cori, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.25807017 +0000 UTC m=+0.177229419 container attach 16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cori, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:04:37 compute-0 magical_cori[316582]: 167 167
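[note] The "167 167" printed by this short-lived, auto-named container is the uid/gid of the ceph user inside the quay.io/ceph/ceph image; cephadm launches a throw-away container to learn which ids to chown daemon directories to before deploying. Illustrative only, since the exact probe cephadm runs may differ:

    $ sudo podman run --rm quay.io/ceph/ceph:v18 stat -c '%u %g' /var/lib/ceph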
Nov 24 21:04:37 compute-0 systemd[1]: libpod-16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6.scope: Deactivated successfully.
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.260636229 +0000 UTC m=+0.179795468 container died 16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-edab808400734209c70737246d9e68e39be247c70bed1d13e95a3725e37b6e2b-merged.mount: Deactivated successfully.
Nov 24 21:04:37 compute-0 podman[316566]: 2025-11-24 21:04:37.314731317 +0000 UTC m=+0.233890576 container remove 16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:04:37 compute-0 systemd[1]: libpod-conmon-16518646c930ff55fcf101a8af2f782cc472809bd9989034635da5ffc2714ba6.scope: Deactivated successfully.
Nov 24 21:04:37 compute-0 podman[316608]: 2025-11-24 21:04:37.512612943 +0000 UTC m=+0.048408096 container create e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:04:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:37.537+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:37 compute-0 systemd[1]: Started libpod-conmon-e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe.scope.
Nov 24 21:04:37 compute-0 podman[316608]: 2025-11-24 21:04:37.48949312 +0000 UTC m=+0.025288303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:04:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:04:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c87efb71aed716c72d3f185b870abf1cebbbc1b3340e45ddadf7b7f549e9250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c87efb71aed716c72d3f185b870abf1cebbbc1b3340e45ddadf7b7f549e9250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c87efb71aed716c72d3f185b870abf1cebbbc1b3340e45ddadf7b7f549e9250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c87efb71aed716c72d3f185b870abf1cebbbc1b3340e45ddadf7b7f549e9250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c87efb71aed716c72d3f185b870abf1cebbbc1b3340e45ddadf7b7f549e9250/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
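[note] These "supports timestamps until 2038" kernel notices appear to be emitted once per bind-mount when a container starts on an xfs filesystem created without the bigtime feature; they are informational, not errors. Assuming the overlay store lives on the root filesystem, the feature flag can be checked with:

    $ xfs_info / | grep -o 'bigtime=.'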
Nov 24 21:04:37 compute-0 podman[316608]: 2025-11-24 21:04:37.623668046 +0000 UTC m=+0.159463249 container init e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:04:37 compute-0 podman[316608]: 2025-11-24 21:04:37.637331195 +0000 UTC m=+0.173126378 container start e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 21:04:37 compute-0 podman[316608]: 2025-11-24 21:04:37.642052852 +0000 UTC m=+0.177848025 container attach e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 21:04:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:37.799+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:38.554+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:38 compute-0 ceph-mon[75677]: pgmap v2498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:38 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4397 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:38 compute-0 ecstatic_blackwell[316624]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:04:38 compute-0 ecstatic_blackwell[316624]: --> relative data size: 1.0
Nov 24 21:04:38 compute-0 ecstatic_blackwell[316624]: --> All data devices are unavailable
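[note] ceph-volume was handed the three logical volumes from the `lvm batch` invocation at 21:04:36 and rejected all of them as unavailable, which here most likely means they already carry prepared OSDs (osd.0 and osd.1 are running from this host's LVs). That is the benign steady-state outcome of cephadm re-applying an OSD spec. The orchestrator's own next step, visible at 21:04:39 below, has this manual equivalent:

    $ sudo cephadm ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json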
Nov 24 21:04:38 compute-0 systemd[1]: libpod-e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe.scope: Deactivated successfully.
Nov 24 21:04:38 compute-0 podman[316608]: 2025-11-24 21:04:38.765323307 +0000 UTC m=+1.301118470 container died e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:04:38 compute-0 systemd[1]: libpod-e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe.scope: Consumed 1.080s CPU time.
Nov 24 21:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c87efb71aed716c72d3f185b870abf1cebbbc1b3340e45ddadf7b7f549e9250-merged.mount: Deactivated successfully.
Nov 24 21:04:38 compute-0 podman[316608]: 2025-11-24 21:04:38.836029513 +0000 UTC m=+1.371824676 container remove e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 21:04:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:38.840+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:38 compute-0 systemd[1]: libpod-conmon-e452febc9acd75f7b8d8f95283c34717c11835168f19a21c0e98b5eac91a8ffe.scope: Deactivated successfully.
Nov 24 21:04:38 compute-0 sudo[316501]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:38 compute-0 sudo[316665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:38 compute-0 sudo[316665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:38 compute-0 sudo[316665]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:39 compute-0 sudo[316690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:04:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:39 compute-0 sudo[316690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:39 compute-0 sudo[316690]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:39 compute-0 sudo[316715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:39 compute-0 sudo[316715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:39 compute-0 sudo[316715]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:39 compute-0 sudo[316740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:04:39 compute-0 sudo[316740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:39.578+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.718889256 +0000 UTC m=+0.084380256 container create c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 21:04:39 compute-0 systemd[1]: Started libpod-conmon-c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28.scope.
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.676242926 +0000 UTC m=+0.041734006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:04:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.811669188 +0000 UTC m=+0.177160268 container init c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.822072348 +0000 UTC m=+0.187563378 container start c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.828866892 +0000 UTC m=+0.194357972 container attach c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 21:04:39 compute-0 objective_germain[316821]: 167 167
Nov 24 21:04:39 compute-0 systemd[1]: libpod-c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28.scope: Deactivated successfully.
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.831649567 +0000 UTC m=+0.197140597 container died c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 21:04:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:39.843+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d39397025139a3661fe1a65cf9ce6ccf24922a7f5dba0c7042eb953211e9fe08-merged.mount: Deactivated successfully.
Nov 24 21:04:39 compute-0 podman[316805]: 2025-11-24 21:04:39.891177472 +0000 UTC m=+0.256668512 container remove c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:04:39 compute-0 systemd[1]: libpod-conmon-c3f4ef033128b69df592cea798475feedd6b898754ae71f98050cd0bcd000a28.scope: Deactivated successfully.
Nov 24 21:04:40 compute-0 podman[316844]: 2025-11-24 21:04:40.113907296 +0000 UTC m=+0.056497684 container create b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:04:40 compute-0 systemd[1]: Started libpod-conmon-b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81.scope.
Nov 24 21:04:40 compute-0 podman[316844]: 2025-11-24 21:04:40.088983075 +0000 UTC m=+0.031573483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:04:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f83884acd4f58493f5fe2fb76a635728e66182d783ce759d4ea3b351bfb16d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f83884acd4f58493f5fe2fb76a635728e66182d783ce759d4ea3b351bfb16d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f83884acd4f58493f5fe2fb76a635728e66182d783ce759d4ea3b351bfb16d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f83884acd4f58493f5fe2fb76a635728e66182d783ce759d4ea3b351bfb16d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:40 compute-0 podman[316844]: 2025-11-24 21:04:40.203808981 +0000 UTC m=+0.146399449 container init b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 24 21:04:40 compute-0 podman[316844]: 2025-11-24 21:04:40.216468872 +0000 UTC m=+0.159059270 container start b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:04:40 compute-0 podman[316844]: 2025-11-24 21:04:40.221685552 +0000 UTC m=+0.164275950 container attach b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:04:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:40.558+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:40 compute-0 ceph-mon[75677]: pgmap v2499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:04:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:04:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:40.875+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:41 compute-0 blissful_williams[316861]: {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:     "0": [
Nov 24 21:04:41 compute-0 blissful_williams[316861]:         {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "devices": [
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "/dev/loop3"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             ],
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_name": "ceph_lv0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_size": "21470642176",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "name": "ceph_lv0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "tags": {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cluster_name": "ceph",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.crush_device_class": "",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.encrypted": "0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osd_id": "0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.type": "block",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.vdo": "0"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             },
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "type": "block",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "vg_name": "ceph_vg0"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:         }
Nov 24 21:04:41 compute-0 blissful_williams[316861]:     ],
Nov 24 21:04:41 compute-0 blissful_williams[316861]:     "1": [
Nov 24 21:04:41 compute-0 blissful_williams[316861]:         {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "devices": [
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "/dev/loop4"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             ],
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_name": "ceph_lv1",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_size": "21470642176",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "name": "ceph_lv1",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "tags": {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cluster_name": "ceph",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.crush_device_class": "",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.encrypted": "0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osd_id": "1",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.type": "block",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.vdo": "0"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             },
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "type": "block",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "vg_name": "ceph_vg1"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:         }
Nov 24 21:04:41 compute-0 blissful_williams[316861]:     ],
Nov 24 21:04:41 compute-0 blissful_williams[316861]:     "2": [
Nov 24 21:04:41 compute-0 blissful_williams[316861]:         {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "devices": [
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "/dev/loop5"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             ],
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_name": "ceph_lv2",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_size": "21470642176",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "name": "ceph_lv2",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "tags": {
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.cluster_name": "ceph",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.crush_device_class": "",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.encrypted": "0",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osd_id": "2",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.type": "block",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:                 "ceph.vdo": "0"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             },
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "type": "block",
Nov 24 21:04:41 compute-0 blissful_williams[316861]:             "vg_name": "ceph_vg2"
Nov 24 21:04:41 compute-0 blissful_williams[316861]:         }
Nov 24 21:04:41 compute-0 blissful_williams[316861]:     ]
Nov 24 21:04:41 compute-0 blissful_williams[316861]: }
Nov 24 21:04:41 compute-0 systemd[1]: libpod-b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81.scope: Deactivated successfully.
Nov 24 21:04:41 compute-0 podman[316844]: 2025-11-24 21:04:41.108638655 +0000 UTC m=+1.051229103 container died b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 21:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9f83884acd4f58493f5fe2fb76a635728e66182d783ce759d4ea3b351bfb16d-merged.mount: Deactivated successfully.
Nov 24 21:04:41 compute-0 podman[316844]: 2025-11-24 21:04:41.183167675 +0000 UTC m=+1.125758033 container remove b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:04:41 compute-0 systemd[1]: libpod-conmon-b3e8cf6903189cd4b3af1bf3a9a568e6eea6ad1e7aa04b0a15925ef22b7bfb81.scope: Deactivated successfully.
Nov 24 21:04:41 compute-0 sudo[316740]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:41 compute-0 sudo[316882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:41 compute-0 sudo[316882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:41 compute-0 sudo[316882]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:41 compute-0 sudo[316907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:04:41 compute-0 sudo[316907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:41 compute-0 sudo[316907]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:41 compute-0 sudo[316932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:41.543+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:41 compute-0 sudo[316932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:41 compute-0 sudo[316932]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:41 compute-0 sudo[316957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:04:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:41 compute-0 sudo[316957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:41.900+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.091864304 +0000 UTC m=+0.063114942 container create 8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:04:42 compute-0 systemd[1]: Started libpod-conmon-8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc.scope.
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.063570321 +0000 UTC m=+0.034821009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:04:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.21592921 +0000 UTC m=+0.187179898 container init 8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 21:04:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.229316191 +0000 UTC m=+0.200566819 container start 8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.234817619 +0000 UTC m=+0.206068307 container attach 8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:04:42 compute-0 awesome_antonelli[317039]: 167 167
Nov 24 21:04:42 compute-0 systemd[1]: libpod-8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc.scope: Deactivated successfully.
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.238847207 +0000 UTC m=+0.210097845 container died 8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-91fa4f9615a40316999b96ee8d83822fdd080deb81305ea4623c65dfa896ba40-merged.mount: Deactivated successfully.
Nov 24 21:04:42 compute-0 podman[317023]: 2025-11-24 21:04:42.299351489 +0000 UTC m=+0.270602117 container remove 8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 21:04:42 compute-0 systemd[1]: libpod-conmon-8efb76f7ff44b3cfb42ed63d37662bfc8ff380a150731cb1469cea315209c4dc.scope: Deactivated successfully.
Nov 24 21:04:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:42.503+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:42 compute-0 podman[317064]: 2025-11-24 21:04:42.538205328 +0000 UTC m=+0.074430438 container create 14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 21:04:42 compute-0 systemd[1]: Started libpod-conmon-14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05.scope.
Nov 24 21:04:42 compute-0 podman[317064]: 2025-11-24 21:04:42.503141813 +0000 UTC m=+0.039366993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:04:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edd32ea8bd862616eb3d3d68752136f856624d476762cf14ac3a6c869538ef7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edd32ea8bd862616eb3d3d68752136f856624d476762cf14ac3a6c869538ef7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edd32ea8bd862616eb3d3d68752136f856624d476762cf14ac3a6c869538ef7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edd32ea8bd862616eb3d3d68752136f856624d476762cf14ac3a6c869538ef7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:04:42 compute-0 podman[317064]: 2025-11-24 21:04:42.632615964 +0000 UTC m=+0.168841084 container init 14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 21:04:42 compute-0 ceph-mon[75677]: pgmap v2500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:42 compute-0 podman[317064]: 2025-11-24 21:04:42.650247599 +0000 UTC m=+0.186472719 container start 14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 21:04:42 compute-0 podman[317064]: 2025-11-24 21:04:42.653958029 +0000 UTC m=+0.190183139 container attach 14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 21:04:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:42.866+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:43.463+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:43 compute-0 nice_wright[317080]: {
Nov 24 21:04:43 compute-0 nice_wright[317080]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "osd_id": 2,
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "type": "bluestore"
Nov 24 21:04:43 compute-0 nice_wright[317080]:     },
Nov 24 21:04:43 compute-0 nice_wright[317080]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "osd_id": 1,
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "type": "bluestore"
Nov 24 21:04:43 compute-0 nice_wright[317080]:     },
Nov 24 21:04:43 compute-0 nice_wright[317080]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "osd_id": 0,
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:04:43 compute-0 nice_wright[317080]:         "type": "bluestore"
Nov 24 21:04:43 compute-0 nice_wright[317080]:     }
Nov 24 21:04:43 compute-0 nice_wright[317080]: }
Nov 24 21:04:43 compute-0 systemd[1]: libpod-14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05.scope: Deactivated successfully.
Nov 24 21:04:43 compute-0 podman[317064]: 2025-11-24 21:04:43.632233734 +0000 UTC m=+1.168458824 container died 14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 21:04:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1edd32ea8bd862616eb3d3d68752136f856624d476762cf14ac3a6c869538ef7-merged.mount: Deactivated successfully.
Nov 24 21:04:43 compute-0 podman[317064]: 2025-11-24 21:04:43.718382237 +0000 UTC m=+1.254607307 container remove 14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:04:43 compute-0 systemd[1]: libpod-conmon-14d69a4a5fc2cf91238b487f75ccd0ebb7e01b9c5552963f38e49fc2e8421b05.scope: Deactivated successfully.
Nov 24 21:04:43 compute-0 sudo[316957]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:04:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:04:43 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5d377d6b-20c7-4d2c-abb0-ef63ab517b2d does not exist
Nov 24 21:04:43 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1ba7c7ac-3203-468c-8dfd-763fb83a4e5a does not exist
Nov 24 21:04:43 compute-0 sudo[317127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:04:43 compute-0 sudo[317127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:43 compute-0 sudo[317127]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:43.872+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:43 compute-0 sudo[317152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:04:43 compute-0 sudo[317152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:04:43 compute-0 sudo[317152]: pam_unix(sudo:session): session closed for user root
Nov 24 21:04:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:44.505+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:44 compute-0 ceph-mon[75677]: pgmap v2501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:44 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:04:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:44.920+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:45.540+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:45.896+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:46.500+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:46 compute-0 ceph-mon[75677]: pgmap v2502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:46 compute-0 podman[317177]: 2025-11-24 21:04:46.847155222 +0000 UTC m=+0.075339682 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 21:04:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:46.937+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:47.460+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:47 compute-0 ceph-mon[75677]: pgmap v2503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:47 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4402 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:47.971+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:48.432+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:48.960+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:49.397+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:49 compute-0 ceph-mon[75677]: pgmap v2504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:49.970+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:50.357+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:50.949+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:51.310+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:51 compute-0 ceph-mon[75677]: pgmap v2505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:51.937+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4412 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:52.356+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:52 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4412 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:52.985+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:53.371+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:53 compute-0 ceph-mon[75677]: pgmap v2506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:54.015+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:54.413+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:04:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:04:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:55.036+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:55.389+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:55 compute-0 ceph-mon[75677]: pgmap v2507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:56.026+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:56.363+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:56 compute-0 podman[317196]: 2025-11-24 21:04:56.844889573 +0000 UTC m=+0.075939269 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 24 21:04:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:57.007+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:04:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:57.376+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:57 compute-0 ceph-mon[75677]: pgmap v2508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:57.960+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:58.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:58 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4417 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:04:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:58.967+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:04:59.392+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:04:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:04:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:04:59 compute-0 ceph-mon[75677]: pgmap v2509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:04:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:04:59.991+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:04:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:00.342+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:00 compute-0 podman[317218]: 2025-11-24 21:05:00.924892853 +0000 UTC m=+0.140962351 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 21:05:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:01.031+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:01.348+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:01 compute-0 ceph-mon[75677]: pgmap v2510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:02.051+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:02.323+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:03.015+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:03.288+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:03 compute-0 ceph-mon[75677]: pgmap v2511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:03.999+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:04.327+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:04.965+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:05.304+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:05.919+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:05 compute-0 ceph-mon[75677]: pgmap v2512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:06.340+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:06.947+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4422 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:07.304+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:07.905+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:07 compute-0 ceph-mon[75677]: pgmap v2513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:07 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4422 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:08.267+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:08.943+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:09.295+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:05:09.424 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:05:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:05:09.424 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:05:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:05:09.425 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:05:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:09.947+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:09 compute-0 ceph-mon[75677]: pgmap v2514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:10.306+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:10.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:11.307+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:11.940+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:12 compute-0 ceph-mon[75677]: pgmap v2515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.231740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018312231810, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1509, "num_deletes": 467, "total_data_size": 1489970, "memory_usage": 1526424, "flush_reason": "Manual Compaction"}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018312245357, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 982674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74728, "largest_seqno": 76236, "table_properties": {"data_size": 977098, "index_size": 2138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 22120, "raw_average_key_size": 23, "raw_value_size": 962184, "raw_average_value_size": 1037, "num_data_blocks": 92, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 467, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018231, "oldest_key_time": 1764018231, "file_creation_time": 1764018312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 13707 microseconds, and 8001 cpu microseconds.
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.245443) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 982674 bytes OK
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.245477) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.248643) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.248663) EVENT_LOG_v1 {"time_micros": 1764018312248656, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.248688) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1482012, prev total WAL file size 1482012, number of live WAL files 2.
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.249676) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323536' seq:0, type:0; will stop at (end)
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(959KB)], [158(10MB)]
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018312249712, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12223752, "oldest_snapshot_seqno": -1}
Nov 24 21:05:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:12.346+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 13367 keys, 9259842 bytes, temperature: kUnknown
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018312355762, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 9259842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9188191, "index_size": 37106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 368663, "raw_average_key_size": 27, "raw_value_size": 8960547, "raw_average_value_size": 670, "num_data_blocks": 1342, "num_entries": 13367, "num_filter_entries": 13367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018312, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.356542) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 9259842 bytes
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.370835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.8 rd, 87.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(21.9) write-amplify(9.4) OK, records in: 14278, records dropped: 911 output_compression: NoCompression
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.370885) EVENT_LOG_v1 {"time_micros": 1764018312370864, "job": 98, "event": "compaction_finished", "compaction_time_micros": 106461, "compaction_time_cpu_micros": 33004, "output_level": 6, "num_output_files": 1, "total_output_size": 9259842, "num_input_records": 14278, "num_output_records": 13367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018312372415, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018312376831, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.249627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.377102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.377111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.377113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.377115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:05:12 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:05:12.377118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:05:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:12.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:13 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4432 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:13.306+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:13.966+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:14 compute-0 ceph-mon[75677]: pgmap v2516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:14.349+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:14.963+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:15.374+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:15.940+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:16 compute-0 ceph-mon[75677]: pgmap v2517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:16.372+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:05:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1915734586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:05:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:05:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1915734586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:05:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:16.941+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1915734586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:05:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1915734586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:05:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4437 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:17.324+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:17 compute-0 podman[317244]: 2025-11-24 21:05:17.83604851 +0000 UTC m=+0.063938865 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 21:05:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:17.961+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:18 compute-0 ceph-mon[75677]: pgmap v2518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:18 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4437 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:18.303+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:18.950+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:19.278+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:19.979+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:20 compute-0 ceph-mon[75677]: pgmap v2519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:20.275+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:21.013+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:21.271+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:22.024+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:22 compute-0 ceph-mon[75677]: pgmap v2520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:22.316+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:22.991+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:23 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4442 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:23.280+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:23.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:24 compute-0 ceph-mon[75677]: pgmap v2521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:24.319+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:05:24
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log']
Nov 24 21:05:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:05:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:24.980+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:25.280+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:25.983+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:26 compute-0 ceph-mon[75677]: pgmap v2522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:26.301+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:27.005+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4447 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:27.297+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:27 compute-0 podman[317262]: 2025-11-24 21:05:27.866500152 +0000 UTC m=+0.077097429 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:05:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:28.012+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:28 compute-0 ceph-mon[75677]: pgmap v2523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:28 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4447 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:28.315+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:29.015+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:29.339+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:29.980+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:30 compute-0 ceph-mon[75677]: pgmap v2524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:30.314+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:31.003+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:31.323+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:31 compute-0 podman[317282]: 2025-11-24 21:05:31.944397109 +0000 UTC m=+0.168480724 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 21:05:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:31.990+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4452 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:32 compute-0 ceph-mon[75677]: pgmap v2525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:32.312+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:33.032+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:33 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4452 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:33.280+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:34.003+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:34 compute-0 ceph-mon[75677]: pgmap v2526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:34.296+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:34.967+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:35.269+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:05:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
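
The pg_autoscaler block above logs, for each pool, its share of raw capacity, its bias, and the resulting "pg target". The logged numbers are consistent with target = capacity_ratio x bias x (OSD count x mon_target_pg_per_osd): with 3 OSDs and the default of 100 PGs per OSD, the 'vms' line gives 0.0008637525843263658 x 1.0 x 300 = 0.2591..., and 'cephfs.cephfs.meta' gives 5.087256625643029e-07 x 4.0 x 300 = 0.0006104..., both matching the log. A minimal sketch of that arithmetic, with the OSD count and per-OSD target stated as assumptions:

    # Reproduce the pg_autoscaler "pg target" numbers from the log.
    def ideal_pg_target(capacity_ratio: float, bias: float,
                        num_osds: int = 3, pg_per_osd: int = 100) -> float:
        return capacity_ratio * bias * num_osds * pg_per_osd

    def quantize(target: float) -> int:
        # Round up to a power of two, never below 1. The real module then
        # clamps to pg_num_min/pg_num_max and only resizes a pool when the
        # result differs from the current pg_num by its change threshold.
        n = 1
        while n < target:
            n *= 2
        return n

    print(ideal_pg_target(0.0008637525843263658, 1.0))   # 'vms' pool
    print(ideal_pg_target(5.087256625643029e-07, 4.0))   # cephfs metadata
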
Nov 24 21:05:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:35.964+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:36.227+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:36 compute-0 ceph-mon[75677]: pgmap v2527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:36.943+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:37.220+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:37.928+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:38.172+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:38 compute-0 ceph-mon[75677]: pgmap v2528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:38 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4457 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:38.923+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
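
The pgmap digest the mgr emits every couple of seconds packs the PG state census and cluster usage into one line; here 2 of the 305 PGs are active+clean+laggy, which is what keeps the SLOW_OPS warning alive. A minimal parsing sketch for that recurring line:

    # Parse a pgmap digest line into version, per-state counts, and usage.
    import re

    LINE = ("pgmap v2529: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail")

    m = re.match(r"pgmap v(\d+): (\d+) pgs: (.*?); (.*)", LINE)
    version, total, states, usage = m.groups()
    counts = {desc: int(n) for n, desc in
              (s.split(" ", 1) for s in states.split(", "))}
    assert sum(counts.values()) == int(total)  # 2 + 303 == 305
    print(version, counts, usage)
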
Nov 24 21:05:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:39.187+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:39.884+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:40.225+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:40 compute-0 ceph-mon[75677]: pgmap v2529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:05:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
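
The rbd_support lines show the mgr's MirrorSnapshotScheduleHandler periodically reloading per-pool snapshot schedules for the four RBD pools (all empty here, hence the blank start_after values). A minimal sketch of inspecting the same schedules from the CLI side, assuming client.admin credentials are available:

    # List mirror-snapshot schedules per RBD pool.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive"],
            capture_output=True, text=True,
        ).stdout
        print(pool, "->", out.strip() or "<no schedules>")
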
Nov 24 21:05:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:40.839+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:41.243+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:41.861+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:42.232+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:42 compute-0 ceph-mon[75677]: pgmap v2530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:42.831+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:43.230+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:43.876+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:43 compute-0 sudo[317309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:43 compute-0 sudo[317309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:43 compute-0 sudo[317309]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:44 compute-0 sudo[317334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:05:44 compute-0 sudo[317334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:44 compute-0 sudo[317334]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:44 compute-0 sudo[317359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:44 compute-0 sudo[317359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:44 compute-0 sudo[317359]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:44.187+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:44 compute-0 sudo[317384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:05:44 compute-0 sudo[317384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:44 compute-0 ceph-mon[75677]: pgmap v2531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:44 compute-0 sudo[317384]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:05:44 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:05:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:05:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:05:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:05:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:05:44 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a205fc3f-d251-4bb6-9841-bf6463e2aabf does not exist
Nov 24 21:05:44 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 55497ba7-0e03-4481-a293-615947d1e3dc does not exist
Nov 24 21:05:44 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 98435eee-74da-493c-83aa-a82f902ddb86 does not exist
Nov 24 21:05:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:05:44 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:05:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:05:44 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:05:44 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:05:44 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
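
The audit lines above are the cephadm mgr module collecting what an OSD deployment pass needs before it drives ceph-volume: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and the list of destroyed OSDs eligible for replacement. A minimal sketch of the same queries via the ceph CLI, assuming admin credentials:

    # Issue the same mon commands the audit log shows the mgr dispatching.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")
    bootstrap_key = ceph("auth", "get", "client.bootstrap-osd")
    destroyed = ceph("osd", "tree", "destroyed", "--format", "json")
    print(minimal_conf)
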
Nov 24 21:05:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:44.895+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:44 compute-0 sudo[317439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:44 compute-0 sudo[317439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:44 compute-0 sudo[317439]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:45 compute-0 sudo[317464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:05:45 compute-0 sudo[317464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:45 compute-0 sudo[317464]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:45 compute-0 sudo[317489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:45 compute-0 sudo[317489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:45 compute-0 sudo[317489]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:45 compute-0 sudo[317514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:05:45 compute-0 sudo[317514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:45.179+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:05:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:05:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:05:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:05:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:05:45 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:05:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.469570834 +0000 UTC m=+0.046718400 container create 30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 21:05:45 compute-0 systemd[1]: Started libpod-conmon-30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90.scope.
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.449308738 +0000 UTC m=+0.026456354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:05:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.574724879 +0000 UTC m=+0.151872465 container init 30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.583568798 +0000 UTC m=+0.160716364 container start 30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.586654631 +0000 UTC m=+0.163802197 container attach 30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:05:45 compute-0 blissful_burnell[317597]: 167 167
Nov 24 21:05:45 compute-0 systemd[1]: libpod-30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90.scope: Deactivated successfully.
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.5929509 +0000 UTC m=+0.170098526 container died 30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_burnell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-55fd6b9d70e194095711260847f43746b17c805b48ed86fbfbbf59b2c3cd45af-merged.mount: Deactivated successfully.
Nov 24 21:05:45 compute-0 podman[317581]: 2025-11-24 21:05:45.636098784 +0000 UTC m=+0.213246380 container remove 30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 24 21:05:45 compute-0 systemd[1]: libpod-conmon-30df008a23761aba2506ee815ac0dc83c5ce51a3ef702266452bed13d1065b90.scope: Deactivated successfully.
Nov 24 21:05:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:45.854+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:45 compute-0 podman[317623]: 2025-11-24 21:05:45.865159919 +0000 UTC m=+0.052052724 container create ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_meninsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:05:45 compute-0 systemd[1]: Started libpod-conmon-ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1.scope.
Nov 24 21:05:45 compute-0 podman[317623]: 2025-11-24 21:05:45.8432743 +0000 UTC m=+0.030167125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:05:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59db9feb6f866a4c5ecd446171812957aca1136d86b1d1fec9166057931111f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59db9feb6f866a4c5ecd446171812957aca1136d86b1d1fec9166057931111f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59db9feb6f866a4c5ecd446171812957aca1136d86b1d1fec9166057931111f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59db9feb6f866a4c5ecd446171812957aca1136d86b1d1fec9166057931111f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59db9feb6f866a4c5ecd446171812957aca1136d86b1d1fec9166057931111f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:45 compute-0 podman[317623]: 2025-11-24 21:05:45.963535812 +0000 UTC m=+0.150428687 container init ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_meninsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 21:05:45 compute-0 podman[317623]: 2025-11-24 21:05:45.972503014 +0000 UTC m=+0.159395859 container start ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_meninsky, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 21:05:45 compute-0 podman[317623]: 2025-11-24 21:05:45.976228244 +0000 UTC m=+0.163121059 container attach ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_meninsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:05:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:46.198+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:46 compute-0 ceph-mon[75677]: pgmap v2532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:46.880+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:47 compute-0 wonderful_meninsky[317639]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:05:47 compute-0 wonderful_meninsky[317639]: --> relative data size: 1.0
Nov 24 21:05:47 compute-0 wonderful_meninsky[317639]: --> All data devices are unavailable
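
The batch run above ends without creating anything: ceph-volume saw 3 LVM data devices and rejected them all as unavailable, typically because the LVs are already prepared as OSDs (cephadm re-applies the default_drive_group spec on every pass, so this outcome is expected on a converged node). A minimal sketch of a dry run over the same LVs to get the per-device rejection reasons, assuming it is executed where ceph-volume runs in this deployment, i.e. inside the cephadm container:

    # Dry-run the same ceph-volume batch and print the device report.
    import subprocess

    devices = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
               "/dev/ceph_vg2/ceph_lv2"]
    result = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "--no-auto", *devices],
        capture_output=True, text=True,
    )
    # --report never touches the devices; stdout carries the JSON plan or
    # the rejection summary seen in the log.
    print(result.stdout or result.stderr)
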
Nov 24 21:05:47 compute-0 systemd[1]: libpod-ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1.scope: Deactivated successfully.
Nov 24 21:05:47 compute-0 systemd[1]: libpod-ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1.scope: Consumed 1.121s CPU time.
Nov 24 21:05:47 compute-0 podman[317623]: 2025-11-24 21:05:47.140513565 +0000 UTC m=+1.327406410 container died ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 24 21:05:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:47.159+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59db9feb6f866a4c5ecd446171812957aca1136d86b1d1fec9166057931111f-merged.mount: Deactivated successfully.
Nov 24 21:05:47 compute-0 podman[317623]: 2025-11-24 21:05:47.218113376 +0000 UTC m=+1.405006211 container remove ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:05:47 compute-0 systemd[1]: libpod-conmon-ac87b4bb8c8ce86f45ccd628790e348a5bb0857b76f86be4061326a428e248c1.scope: Deactivated successfully.
Nov 24 21:05:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:47 compute-0 sudo[317514]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:47 compute-0 sudo[317681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:47 compute-0 sudo[317681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:47 compute-0 sudo[317681]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:47 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4462 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
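The SLOW_OPS health check above says osd.0 and osd.1 together hold 42 slow ops, the oldest blocked for 4462 s (about 74 minutes), which matches the per-OSD get_health_metrics lines. A hedged sketch of pulling the same check programmatically; the JSON shape ("checks" keyed by check name, each with a summary message) matches recent Ceph releases but should be verified against your version:

    import json, subprocess

    health = json.loads(subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"], text=True))
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "42 slow ops, oldest one blocked for 4462 sec ..."
        print(slow["summary"]["message"])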
Nov 24 21:05:47 compute-0 sudo[317706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:05:47 compute-0 sudo[317706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:47 compute-0 sudo[317706]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:47 compute-0 sudo[317731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:47 compute-0 sudo[317731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:47 compute-0 sudo[317731]: pam_unix(sudo:session): session closed for user root
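The sudo bursts above (/bin/true, which python3, /bin/true) are the cephadm mgr module's per-host preflight over SSH, run just before the real cephadm invocation that follows. A rough sketch of that probe sequence under the assumption it is driven over ssh as plain commands; "compute-0" and the ssh invocation are illustrative, not cephadm's exact code:

    import subprocess

    # Confirm sudo works, locate python3, confirm sudo again.
    host = "compute-0"
    for probe in (["sudo", "/bin/true"],
                  ["sudo", "/bin/which", "python3"],
                  ["sudo", "/bin/true"]):
        subprocess.run(["ssh", host] + probe, check=True)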
Nov 24 21:05:47 compute-0 sudo[317756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:05:47 compute-0 sudo[317756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:47.833+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:47 compute-0 podman[317821]: 2025-11-24 21:05:47.975725363 +0000 UTC m=+0.049729492 container create 0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 21:05:48 compute-0 systemd[1]: Started libpod-conmon-0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8.scope.
Nov 24 21:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:05:48 compute-0 podman[317821]: 2025-11-24 21:05:47.951823688 +0000 UTC m=+0.025827837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:05:48 compute-0 podman[317821]: 2025-11-24 21:05:48.056928332 +0000 UTC m=+0.130932481 container init 0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 21:05:48 compute-0 podman[317821]: 2025-11-24 21:05:48.065476753 +0000 UTC m=+0.139480902 container start 0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:05:48 compute-0 podman[317821]: 2025-11-24 21:05:48.068890864 +0000 UTC m=+0.142895013 container attach 0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:05:48 compute-0 systemd[1]: libpod-0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8.scope: Deactivated successfully.
Nov 24 21:05:48 compute-0 laughing_goldstine[317839]: 167 167
Nov 24 21:05:48 compute-0 conmon[317839]: conmon 0858039434c603921bfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8.scope/container/memory.events
Nov 24 21:05:48 compute-0 podman[317835]: 2025-11-24 21:05:48.070677533 +0000 UTC m=+0.055566739 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
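Interleaved with the ceph-volume probes, podman's periodic healthcheck for ovn_metadata_agent reports healthy; per the config_data in the line above, the check command is the mounted /openstack/healthcheck script. A sketch of firing the same check manually, assuming `podman healthcheck run` (exit status 0 when the configured test passes):

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")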
Nov 24 21:05:48 compute-0 podman[317821]: 2025-11-24 21:05:48.071117765 +0000 UTC m=+0.145121894 container died 0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b54cfc97ea01d64633bb0321fb3854a6d0fdb3d1a80a997c179da3ca8ef183-merged.mount: Deactivated successfully.
Nov 24 21:05:48 compute-0 podman[317821]: 2025-11-24 21:05:48.107715581 +0000 UTC m=+0.181719730 container remove 0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 21:05:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:48.113+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:48 compute-0 systemd[1]: libpod-conmon-0858039434c603921bfc672bd8fe3a4f4074d0e8e50713ef94a2b3b8d5a008e8.scope: Deactivated successfully.
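The short-lived laughing_goldstine container above printed "167 167": the uid and gid of the ceph user inside the image, which cephadm discovers with a throwaway container so it can chown daemon directories on the host. The conmon "Failed to open cgroups file ... memory.events" warning is harmless fallout of the container exiting before conmon reads the file. A hedged sketch of the probe, assuming cephadm stats a ceph-owned path such as /var/lib/ceph (the exact path it uses may differ):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    print(out.strip())  # expected: "167 167"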
Nov 24 21:05:48 compute-0 podman[317880]: 2025-11-24 21:05:48.261898698 +0000 UTC m=+0.038358785 container create 4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 21:05:48 compute-0 systemd[1]: Started libpod-conmon-4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481.scope.
Nov 24 21:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92403a4e92066c1e7248b75cc4345c9cbfa4d76776c47b14f3581686b73b6c76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92403a4e92066c1e7248b75cc4345c9cbfa4d76776c47b14f3581686b73b6c76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92403a4e92066c1e7248b75cc4345c9cbfa4d76776c47b14f3581686b73b6c76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92403a4e92066c1e7248b75cc4345c9cbfa4d76776c47b14f3581686b73b6c76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:48 compute-0 podman[317880]: 2025-11-24 21:05:48.332725978 +0000 UTC m=+0.109186075 container init 4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:05:48 compute-0 podman[317880]: 2025-11-24 21:05:48.245904527 +0000 UTC m=+0.022364644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:05:48 compute-0 podman[317880]: 2025-11-24 21:05:48.342099461 +0000 UTC m=+0.118559558 container start 4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:05:48 compute-0 podman[317880]: 2025-11-24 21:05:48.344877316 +0000 UTC m=+0.121337423 container attach 4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 21:05:48 compute-0 ceph-mon[75677]: pgmap v2533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:48.802+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]: {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:     "0": [
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:         {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "devices": [
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "/dev/loop3"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             ],
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_name": "ceph_lv0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_size": "21470642176",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "name": "ceph_lv0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "tags": {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cluster_name": "ceph",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.crush_device_class": "",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.encrypted": "0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osd_id": "0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.type": "block",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.vdo": "0"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             },
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "type": "block",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "vg_name": "ceph_vg0"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:         }
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:     ],
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:     "1": [
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:         {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "devices": [
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "/dev/loop4"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             ],
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_name": "ceph_lv1",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_size": "21470642176",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "name": "ceph_lv1",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "tags": {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cluster_name": "ceph",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.crush_device_class": "",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.encrypted": "0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osd_id": "1",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.type": "block",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.vdo": "0"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             },
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "type": "block",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "vg_name": "ceph_vg1"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:         }
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:     ],
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:     "2": [
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:         {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "devices": [
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "/dev/loop5"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             ],
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_name": "ceph_lv2",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_size": "21470642176",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "name": "ceph_lv2",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "tags": {
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.cluster_name": "ceph",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.crush_device_class": "",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.encrypted": "0",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osd_id": "2",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.type": "block",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:                 "ceph.vdo": "0"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             },
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "type": "block",
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:             "vg_name": "ceph_vg2"
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:         }
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]:     ]
Nov 24 21:05:49 compute-0 reverent_bardeen[317897]: }
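The JSON above is the result of the `ceph-volume lvm list --format json` call issued at sudo[317756]: a map from OSD id to its logical volumes, each with ceph.* tags and a ~20 GiB LV (21470642176 bytes) backed by a loop device. A sketch that reruns the listing and flattens it to one line per OSD; field names follow the payload captured above:

    import json, subprocess

    listing = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True))
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} ({size_gib:.1f} GiB) "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")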
Nov 24 21:05:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:49.120+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:49 compute-0 systemd[1]: libpod-4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481.scope: Deactivated successfully.
Nov 24 21:05:49 compute-0 podman[317880]: 2025-11-24 21:05:49.136829117 +0000 UTC m=+0.913289214 container died 4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-92403a4e92066c1e7248b75cc4345c9cbfa4d76776c47b14f3581686b73b6c76-merged.mount: Deactivated successfully.
Nov 24 21:05:49 compute-0 podman[317880]: 2025-11-24 21:05:49.210408382 +0000 UTC m=+0.986868499 container remove 4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 21:05:49 compute-0 systemd[1]: libpod-conmon-4e01decc544d7ddd375ba2b8a335a03c52b2a7fed205d6ca4e27868f307d2481.scope: Deactivated successfully.
Nov 24 21:05:49 compute-0 sudo[317756]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:49 compute-0 sudo[317918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:49 compute-0 sudo[317918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:49 compute-0 sudo[317918]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:49 compute-0 sudo[317943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:05:49 compute-0 sudo[317943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:49 compute-0 sudo[317943]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:49 compute-0 sudo[317968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:49 compute-0 sudo[317968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:49 compute-0 sudo[317968]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:49 compute-0 sudo[317993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:05:49 compute-0 sudo[317993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:49.793+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:49 compute-0 podman[318060]: 2025-11-24 21:05:49.970790762 +0000 UTC m=+0.054730387 container create 1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hugle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 21:05:50 compute-0 systemd[1]: Started libpod-conmon-1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb.scope.
Nov 24 21:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:05:50 compute-0 podman[318060]: 2025-11-24 21:05:49.947499964 +0000 UTC m=+0.031439589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:05:50 compute-0 podman[318060]: 2025-11-24 21:05:50.052019152 +0000 UTC m=+0.135958787 container init 1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 21:05:50 compute-0 podman[318060]: 2025-11-24 21:05:50.059400801 +0000 UTC m=+0.143340396 container start 1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:05:50 compute-0 podman[318060]: 2025-11-24 21:05:50.063295346 +0000 UTC m=+0.147234971 container attach 1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 21:05:50 compute-0 wizardly_hugle[318076]: 167 167
Nov 24 21:05:50 compute-0 systemd[1]: libpod-1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb.scope: Deactivated successfully.
Nov 24 21:05:50 compute-0 podman[318060]: 2025-11-24 21:05:50.068582209 +0000 UTC m=+0.152521844 container died 1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-83a0e80f260353c8a72a5454e6c909bdb9c1d84d72c8d3bd88c3c52328b034f3-merged.mount: Deactivated successfully.
Nov 24 21:05:50 compute-0 podman[318060]: 2025-11-24 21:05:50.108516485 +0000 UTC m=+0.192456070 container remove 1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 24 21:05:50 compute-0 systemd[1]: libpod-conmon-1cdd38119123c00ee2348d9eb5bc32ce8f905baa40e0232d06d5621e5e94c3fb.scope: Deactivated successfully.
Nov 24 21:05:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:50.123+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:50 compute-0 podman[318100]: 2025-11-24 21:05:50.327634472 +0000 UTC m=+0.051171470 container create ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 21:05:50 compute-0 ceph-mon[75677]: pgmap v2534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:50 compute-0 systemd[1]: Started libpod-conmon-ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e.scope.
Nov 24 21:05:50 compute-0 podman[318100]: 2025-11-24 21:05:50.306618246 +0000 UTC m=+0.030155224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449cbe7104569dcb966831167a7a2b785ddf0eb1231c326c12a899d1871217f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449cbe7104569dcb966831167a7a2b785ddf0eb1231c326c12a899d1871217f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449cbe7104569dcb966831167a7a2b785ddf0eb1231c326c12a899d1871217f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449cbe7104569dcb966831167a7a2b785ddf0eb1231c326c12a899d1871217f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:05:50 compute-0 podman[318100]: 2025-11-24 21:05:50.445575432 +0000 UTC m=+0.169112460 container init ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:05:50 compute-0 podman[318100]: 2025-11-24 21:05:50.457627878 +0000 UTC m=+0.181164866 container start ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:05:50 compute-0 podman[318100]: 2025-11-24 21:05:50.462055387 +0000 UTC m=+0.185592435 container attach ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 21:05:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:50.833+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:51.144+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:51 compute-0 kind_swanson[318116]: {
Nov 24 21:05:51 compute-0 kind_swanson[318116]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "osd_id": 2,
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "type": "bluestore"
Nov 24 21:05:51 compute-0 kind_swanson[318116]:     },
Nov 24 21:05:51 compute-0 kind_swanson[318116]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "osd_id": 1,
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "type": "bluestore"
Nov 24 21:05:51 compute-0 kind_swanson[318116]:     },
Nov 24 21:05:51 compute-0 kind_swanson[318116]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "osd_id": 0,
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:05:51 compute-0 kind_swanson[318116]:         "type": "bluestore"
Nov 24 21:05:51 compute-0 kind_swanson[318116]:     }
Nov 24 21:05:51 compute-0 kind_swanson[318116]: }
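kind_swanson carried the companion `ceph-volume raw list --format json` issued at sudo[317993]; it is keyed by osd_uuid rather than osd_id and reports each device as bluestore. Joining the two listings by osd_uuid/ceph.osd_fsid confirms every raw device maps back to the same OSD the LVM view reported. A sketch of that cross-check, with field names taken from the two JSON payloads above:

    import json, subprocess

    raw = json.loads(subprocess.check_output(
        ["ceph-volume", "raw", "list", "--format", "json"], text=True))
    lvm = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True))
    by_fsid = {lv["tags"]["ceph.osd_fsid"]: osd_id
               for osd_id, lvs in lvm.items() for lv in lvs}
    for osd_uuid, info in sorted(raw.items()):
        lvm_id = by_fsid.get(osd_uuid)
        match = ("ok" if str(info["osd_id"]) == str(lvm_id)
                 else f"MISMATCH (lvm says {lvm_id})")
        print(f"osd.{info['osd_id']} {info['type']} on {info['device']} [{match}]")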
Nov 24 21:05:51 compute-0 systemd[1]: libpod-ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e.scope: Deactivated successfully.
Nov 24 21:05:51 compute-0 podman[318100]: 2025-11-24 21:05:51.471657777 +0000 UTC m=+1.195194735 container died ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:05:51 compute-0 systemd[1]: libpod-ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e.scope: Consumed 1.023s CPU time.
Nov 24 21:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2449cbe7104569dcb966831167a7a2b785ddf0eb1231c326c12a899d1871217f-merged.mount: Deactivated successfully.
Nov 24 21:05:51 compute-0 podman[318100]: 2025-11-24 21:05:51.531442969 +0000 UTC m=+1.254979957 container remove ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 21:05:51 compute-0 systemd[1]: libpod-conmon-ce6852e7c91af70082ff7d1fd750d7b88f748c4b3a64787721076340fbfbb73e.scope: Deactivated successfully.
Nov 24 21:05:51 compute-0 sudo[317993]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:05:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:05:51 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:05:51 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
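With the device scan finished, the two handle_command/audit pairs above show the mgr's cephadm module writing the refreshed host inventory into the monitors' config-key store under mgr/cephadm/host.compute-0.devices.0. A sketch of reading that cache back, assuming (as cephadm does) that the stored value is JSON; the key name is taken verbatim from the handle_command line:

    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.check_output(
        ["ceph", "config-key", "get", key], text=True)
    print(json.dumps(json.loads(blob), indent=2)[:500])  # first 500 chars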
Nov 24 21:05:51 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ea93f34a-e247-4d9e-922c-694a8c62c340 does not exist
Nov 24 21:05:51 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9f8224a1-c1ed-4f64-a0b0-59b97637864c does not exist
Nov 24 21:05:51 compute-0 sudo[318160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:05:51 compute-0 sudo[318160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:51 compute-0 sudo[318160]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:51 compute-0 sudo[318185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:05:51 compute-0 sudo[318185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:05:51 compute-0 sudo[318185]: pam_unix(sudo:session): session closed for user root
Nov 24 21:05:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:51.823+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:52.101+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4472 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:52 compute-0 ceph-mon[75677]: pgmap v2535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:05:52 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:05:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:52 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4472 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:52.810+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:53.104+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:53.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:54.119+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:05:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:05:54 compute-0 ceph-mon[75677]: pgmap v2536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:54.849+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:55.115+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:55.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:56.119+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:56 compute-0 ceph-mon[75677]: pgmap v2537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:56.829+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:57.155+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:05:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4477 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:57.818+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:58.175+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:58 compute-0 ceph-mon[75677]: pgmap v2538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:58 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4477 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:05:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:58.787+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:05:58 compute-0 podman[318210]: 2025-11-24 21:05:58.874989439 +0000 UTC m=+0.093060000 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 21:05:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:05:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:05:59.222+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:05:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:05:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:05:59.819+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:05:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:00.273+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:00.845+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:01.242+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:01.857+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:02.229+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:02.808+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:02 compute-0 podman[318232]: 2025-11-24 21:06:02.958524485 +0000 UTC m=+0.176229252 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller)
Nov 24 21:06:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:03.220+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:03.786+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: pgmap v2539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: pgmap v2540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:04 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4482 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:04.252+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:04.774+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:05.213+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:05 compute-0 ceph-mon[75677]: pgmap v2541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:05.737+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:06.231+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:06 compute-0 ceph-mon[75677]: pgmap v2542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:06.762+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:07.235+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:07.744+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:08.263+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:08.788+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:09 compute-0 ceph-mon[75677]: pgmap v2543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:09.241+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:06:09.425 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:06:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:06:09.426 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:06:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:06:09.426 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:06:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:09.748+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:10.248+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:10.778+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:11 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4491 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:11.294+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:11 compute-0 ceph-mon[75677]: pgmap v2544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:11.785+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:12.336+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:12 compute-0 ceph-mon[75677]: pgmap v2545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:12 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4491 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:12.748+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:13.313+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:13.713+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:14.336+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:14.731+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:14 compute-0 ceph-mon[75677]: pgmap v2546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:15.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:15.780+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:16 compute-0 ceph-mon[75677]: pgmap v2547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:16.313+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:16.787+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4496 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:17.288+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:17.802+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:18.325+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:18 compute-0 ceph-mon[75677]: pgmap v2548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:18 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4496 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:18.753+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:18 compute-0 podman[318258]: 2025-11-24 21:06:18.860694936 +0000 UTC m=+0.086485893 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 21:06:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:19.311+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:19.729+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:20.267+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:20 compute-0 ceph-mon[75677]: pgmap v2549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:20.687+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:21.258+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:21.688+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:22.346+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4501 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:22 compute-0 ceph-mon[75677]: pgmap v2550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:22 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4501 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:22.730+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:23.318+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:23.769+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:24.361+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:06:24
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'backups']
Nov 24 21:06:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:06:24 compute-0 ceph-mon[75677]: pgmap v2551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:24.793+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:25.403+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:25.841+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:26.443+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:26 compute-0 ceph-mon[75677]: pgmap v2552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:26.835+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:27.478+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:27.805+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:28.482+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:28 compute-0 ceph-mon[75677]: pgmap v2553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:28 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4507 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:28.783+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:29.439+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:29.796+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:29 compute-0 podman[318277]: 2025-11-24 21:06:29.868684903 +0000 UTC m=+0.089491014 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 21:06:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:30.417+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:30.767+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:30 compute-0 ceph-mon[75677]: pgmap v2554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:31.385+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:31.798+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:32.427+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:32 compute-0 sshd-session[318297]: Invalid user ftpuser from 80.94.95.115 port 43356
Nov 24 21:06:32 compute-0 sshd-session[318297]: Connection closed by invalid user ftpuser 80.94.95.115 port 43356 [preauth]
Nov 24 21:06:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:32.779+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:32 compute-0 ceph-mon[75677]: pgmap v2555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:33.474+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:33.736+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:33 compute-0 podman[318299]: 2025-11-24 21:06:33.927989107 +0000 UTC m=+0.158949737 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 21:06:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:33 compute-0 ceph-mon[75677]: pgmap v2556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:34.465+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:34.727+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:06:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:06:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:35.454+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:35.684+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:35 compute-0 ceph-mon[75677]: pgmap v2557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:36.459+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:36.645+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:37.472+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:37.616+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:37 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 24 21:06:37 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:37.998403) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:06:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 24 21:06:37 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018397998469, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1508, "num_deletes": 480, "total_data_size": 1481227, "memory_usage": 1509264, "flush_reason": "Manual Compaction"}
Nov 24 21:06:37 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 24 21:06:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:38 compute-0 ceph-mon[75677]: pgmap v2558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:38 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4512 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018398011061, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1455213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76237, "largest_seqno": 77744, "table_properties": {"data_size": 1448683, "index_size": 2963, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22956, "raw_average_key_size": 23, "raw_value_size": 1432227, "raw_average_value_size": 1461, "num_data_blocks": 128, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 480, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018313, "oldest_key_time": 1764018313, "file_creation_time": 1764018397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 12805 microseconds, and 8424 cpu microseconds.
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.011207) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1455213 bytes OK
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.011283) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.013464) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.013485) EVENT_LOG_v1 {"time_micros": 1764018398013477, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.013509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1473143, prev total WAL file size 1473143, number of live WAL files 2.
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.014706) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1421KB)], [161(9042KB)]
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018398014803, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 10715055, "oldest_snapshot_seqno": -1}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 13375 keys, 9164786 bytes, temperature: kUnknown
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018398095516, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9164786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9093061, "index_size": 37180, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 368910, "raw_average_key_size": 27, "raw_value_size": 8865154, "raw_average_value_size": 662, "num_data_blocks": 1343, "num_entries": 13375, "num_filter_entries": 13375, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.096190) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9164786 bytes
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.098041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 113.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.8 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(13.7) write-amplify(6.3) OK, records in: 14347, records dropped: 972 output_compression: NoCompression
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.098074) EVENT_LOG_v1 {"time_micros": 1764018398098058, "job": 100, "event": "compaction_finished", "compaction_time_micros": 80909, "compaction_time_cpu_micros": 52989, "output_level": 6, "num_output_files": 1, "total_output_size": 9164786, "num_input_records": 14347, "num_output_records": 13375, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018398098731, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018398102138, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.014522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.102285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.102295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.102298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.102301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:06:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:06:38.102304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:06:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:38.480+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:38.580+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:39.477+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:39.534+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:40 compute-0 ceph-mon[75677]: pgmap v2559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:40.492+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:40.578+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:06:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:06:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:41.446+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:41.567+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:41 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:06:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:06:41 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 77K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1850 writes, 11K keys, 1850 commit groups, 1.0 writes per commit group, ingest: 10.23 MB, 0.02 MB/s
                                           Interval WAL: 1850 writes, 1850 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     46.5      1.62              0.35        50    0.032       0      0       0.0       0.0
                                             L6      1/0    8.74 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.8    104.0     90.2      4.82              1.84        49    0.098    523K    31K       0.0       0.0
                                            Sum      1/0    8.74 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.8     77.8     79.2      6.44              2.19        99    0.065    523K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5     91.5     89.6      0.97              0.41        16    0.060    114K   7586       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    104.0     90.2      4.82              1.84        49    0.098    523K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     46.5      1.62              0.35        49    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.50 GB write, 0.11 MB/s write, 0.49 GB read, 0.10 MB/s read, 6.4 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b73e09f1f0#2 capacity: 304.00 MB usage: 50.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.0005 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3310,47.03 MB,15.4691%) FilterBlock(100,1.44 MB,0.474463%) IndexBlock(100,1.80 MB,0.591348%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 21:06:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:42 compute-0 ceph-mon[75677]: pgmap v2560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4521 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:42.475+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:42.573+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:43 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4521 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:43.513+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:43.592+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:44 compute-0 ceph-mon[75677]: pgmap v2561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:44.530+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:44.572+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:45.553+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:45.559+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:46 compute-0 ceph-mon[75677]: pgmap v2562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:46.583+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:46.588+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:47.621+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:47.623+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 42 slow ops, oldest one blocked for 4526 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:48 compute-0 ceph-mon[75677]: pgmap v2563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:48.606+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:48.656+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:49 compute-0 ceph-mon[75677]: Health check update: 42 slow ops, oldest one blocked for 4526 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:49.584+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:49.623+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:49 compute-0 podman[318326]: 2025-11-24 21:06:49.880214246 +0000 UTC m=+0.110402425 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 21:06:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:50 compute-0 ceph-mon[75677]: pgmap v2564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:50.584+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:50.628+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:51.597+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:51.622+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:51 compute-0 sudo[318344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:51 compute-0 sudo[318344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:51 compute-0 sudo[318344]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:51 compute-0 sudo[318369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:06:51 compute-0 sudo[318369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:51 compute-0 sudo[318369]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:52 compute-0 sudo[318394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:52 compute-0 sudo[318394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:52 compute-0 sudo[318394]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:06:52 compute-0 ceph-mon[75677]: pgmap v2565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:52 compute-0 sudo[318419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:06:52 compute-0 sudo[318419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:52.589+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:52.627+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:52 compute-0 sudo[318419]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:06:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:06:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:06:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:06:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 1ab53750-d9ff-4138-bead-ff2875f82bfa does not exist
Nov 24 21:06:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev de6045ab-9eb2-4930-9ef5-f6dd3003d3a3 does not exist
Nov 24 21:06:52 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 7a703a3c-3086-4778-810f-5b15f61f49ed does not exist
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:06:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:06:52 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:06:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:06:52 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:06:52 compute-0 sudo[318476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:52 compute-0 sudo[318476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:52 compute-0 sudo[318476]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:53 compute-0 sudo[318501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:06:53 compute-0 sudo[318501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:53 compute-0 sudo[318501]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:53 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:06:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:06:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:06:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:06:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:06:53 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:06:53 compute-0 sudo[318526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:53 compute-0 sudo[318526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:53 compute-0 sudo[318526]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:53 compute-0 sudo[318551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:06:53 compute-0 sudo[318551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:53.559+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:53.580+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:53 compute-0 podman[318617]: 2025-11-24 21:06:53.829412155 +0000 UTC m=+0.072009210 container create 9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:06:53 compute-0 systemd[1]: Started libpod-conmon-9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79.scope.
Nov 24 21:06:53 compute-0 podman[318617]: 2025-11-24 21:06:53.799563132 +0000 UTC m=+0.042160227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:06:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:06:53 compute-0 podman[318617]: 2025-11-24 21:06:53.953780025 +0000 UTC m=+0.196377120 container init 9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 21:06:53 compute-0 podman[318617]: 2025-11-24 21:06:53.965928691 +0000 UTC m=+0.208525746 container start 9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:06:53 compute-0 podman[318617]: 2025-11-24 21:06:53.970348471 +0000 UTC m=+0.212945586 container attach 9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 24 21:06:53 compute-0 laughing_chatterjee[318633]: 167 167
Nov 24 21:06:53 compute-0 systemd[1]: libpod-9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79.scope: Deactivated successfully.
Nov 24 21:06:53 compute-0 conmon[318633]: conmon 9e3be6ffef457bdee34c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79.scope/container/memory.events
Nov 24 21:06:53 compute-0 podman[318617]: 2025-11-24 21:06:53.97775415 +0000 UTC m=+0.220351225 container died 9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 21:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1de41afcb5f8a3244b858f3cb4341c3dfe87d049598e15d47642294565aced3a-merged.mount: Deactivated successfully.
Nov 24 21:06:54 compute-0 podman[318617]: 2025-11-24 21:06:54.037118738 +0000 UTC m=+0.279715783 container remove 9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 21:06:54 compute-0 systemd[1]: libpod-conmon-9e3be6ffef457bdee34c2be9e5571a63ef37c421df732d90722167987bc1dd79.scope: Deactivated successfully.
Nov 24 21:06:54 compute-0 ceph-mon[75677]: pgmap v2566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:54 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:54 compute-0 podman[318657]: 2025-11-24 21:06:54.269515246 +0000 UTC m=+0.057449637 container create c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lichterman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:06:54 compute-0 systemd[1]: Started libpod-conmon-c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6.scope.
Nov 24 21:06:54 compute-0 podman[318657]: 2025-11-24 21:06:54.242363136 +0000 UTC m=+0.030297587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:06:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cf1997a8d134d008276230d737b685a6188fc882da912de7e985cae2bc2fd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cf1997a8d134d008276230d737b685a6188fc882da912de7e985cae2bc2fd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cf1997a8d134d008276230d737b685a6188fc882da912de7e985cae2bc2fd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cf1997a8d134d008276230d737b685a6188fc882da912de7e985cae2bc2fd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cf1997a8d134d008276230d737b685a6188fc882da912de7e985cae2bc2fd6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:54 compute-0 podman[318657]: 2025-11-24 21:06:54.396073165 +0000 UTC m=+0.184007606 container init c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 21:06:54 compute-0 podman[318657]: 2025-11-24 21:06:54.410885264 +0000 UTC m=+0.198819665 container start c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 21:06:54 compute-0 podman[318657]: 2025-11-24 21:06:54.41484761 +0000 UTC m=+0.202782071 container attach c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lichterman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 21:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:06:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:06:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:54.563+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:54.602+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:55 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 4531 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:55 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:55 compute-0 pedantic_lichterman[318673]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:06:55 compute-0 pedantic_lichterman[318673]: --> relative data size: 1.0
Nov 24 21:06:55 compute-0 pedantic_lichterman[318673]: --> All data devices are unavailable
Nov 24 21:06:55 compute-0 systemd[1]: libpod-c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6.scope: Deactivated successfully.
Nov 24 21:06:55 compute-0 podman[318657]: 2025-11-24 21:06:55.543012342 +0000 UTC m=+1.330946743 container died c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 21:06:55 compute-0 systemd[1]: libpod-c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6.scope: Consumed 1.088s CPU time.
Nov 24 21:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-53cf1997a8d134d008276230d737b685a6188fc882da912de7e985cae2bc2fd6-merged.mount: Deactivated successfully.
Nov 24 21:06:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:55.583+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:55.614+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:55 compute-0 podman[318657]: 2025-11-24 21:06:55.616129131 +0000 UTC m=+1.404063532 container remove c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 21:06:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:55 compute-0 systemd[1]: libpod-conmon-c7d80d209bfea518ab9d2fd8831f3f8badf936d3c90a39bf0c3f6fe0dc35ecc6.scope: Deactivated successfully.
Nov 24 21:06:55 compute-0 sudo[318551]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:55 compute-0 sudo[318714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:55 compute-0 sudo[318714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:55 compute-0 sudo[318714]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:55 compute-0 sudo[318739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:06:55 compute-0 sudo[318739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:55 compute-0 sudo[318739]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:55 compute-0 sudo[318764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:55 compute-0 sudo[318764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:55 compute-0 sudo[318764]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:56 compute-0 sudo[318789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:06:56 compute-0 sudo[318789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:56 compute-0 ceph-mon[75677]: pgmap v2567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:56 compute-0 ceph-mon[75677]: Health check update: 37 slow ops, oldest one blocked for 4531 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:06:56 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.448760063 +0000 UTC m=+0.057957191 container create 8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 21:06:56 compute-0 systemd[1]: Started libpod-conmon-8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e.scope.
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.427418829 +0000 UTC m=+0.036616007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:06:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.550463382 +0000 UTC m=+0.159660520 container init 8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.561559261 +0000 UTC m=+0.170756429 container start 8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.565430675 +0000 UTC m=+0.174627823 container attach 8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 21:06:56 compute-0 reverent_gould[318869]: 167 167
Nov 24 21:06:56 compute-0 systemd[1]: libpod-8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e.scope: Deactivated successfully.
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.570927423 +0000 UTC m=+0.180124581 container died 8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:06:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:56.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-91a6b0ada8a8ff874a01308dfd2620988de70bdeaeb15bc7dba031d76e739cde-merged.mount: Deactivated successfully.
Nov 24 21:06:56 compute-0 podman[318853]: 2025-11-24 21:06:56.625619495 +0000 UTC m=+0.234816663 container remove 8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 21:06:56 compute-0 systemd[1]: libpod-conmon-8686998d88d49b563c366be33147b0aa399e58a3ed4b8eb0200a88962b35919e.scope: Deactivated successfully.
Nov 24 21:06:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:56.657+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:56 compute-0 podman[318893]: 2025-11-24 21:06:56.87269055 +0000 UTC m=+0.066818611 container create 483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 21:06:56 compute-0 systemd[1]: Started libpod-conmon-483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677.scope.
Nov 24 21:06:56 compute-0 podman[318893]: 2025-11-24 21:06:56.845473026 +0000 UTC m=+0.039601127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:06:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee4e7166f11909921fcb059acced5b025847e8b0d51599adfa2e5f7dd00a468/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee4e7166f11909921fcb059acced5b025847e8b0d51599adfa2e5f7dd00a468/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee4e7166f11909921fcb059acced5b025847e8b0d51599adfa2e5f7dd00a468/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee4e7166f11909921fcb059acced5b025847e8b0d51599adfa2e5f7dd00a468/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:56 compute-0 podman[318893]: 2025-11-24 21:06:56.981109099 +0000 UTC m=+0.175237200 container init 483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 21:06:56 compute-0 podman[318893]: 2025-11-24 21:06:56.996096243 +0000 UTC m=+0.190224304 container start 483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 21:06:57 compute-0 podman[318893]: 2025-11-24 21:06:57.001214101 +0000 UTC m=+0.195342132 container attach 483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:06:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:57 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:06:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:57.565+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:57.661+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:57 compute-0 romantic_hoover[318910]: {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:     "0": [
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:         {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "devices": [
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "/dev/loop3"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             ],
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_name": "ceph_lv0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_size": "21470642176",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "name": "ceph_lv0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "tags": {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cluster_name": "ceph",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.crush_device_class": "",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.encrypted": "0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osd_id": "0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.type": "block",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.vdo": "0"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             },
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "type": "block",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "vg_name": "ceph_vg0"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:         }
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:     ],
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:     "1": [
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:         {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "devices": [
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "/dev/loop4"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             ],
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_name": "ceph_lv1",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_size": "21470642176",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "name": "ceph_lv1",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "tags": {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cluster_name": "ceph",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.crush_device_class": "",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.encrypted": "0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osd_id": "1",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.type": "block",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.vdo": "0"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             },
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "type": "block",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "vg_name": "ceph_vg1"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:         }
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:     ],
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:     "2": [
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:         {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "devices": [
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "/dev/loop5"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             ],
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_name": "ceph_lv2",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_size": "21470642176",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "name": "ceph_lv2",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "tags": {
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.cluster_name": "ceph",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.crush_device_class": "",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.encrypted": "0",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osd_id": "2",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.type": "block",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:                 "ceph.vdo": "0"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             },
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "type": "block",
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:             "vg_name": "ceph_vg2"
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:         }
Nov 24 21:06:57 compute-0 romantic_hoover[318910]:     ]
Nov 24 21:06:57 compute-0 romantic_hoover[318910]: }
Nov 24 21:06:57 compute-0 systemd[1]: libpod-483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677.scope: Deactivated successfully.
Nov 24 21:06:57 compute-0 podman[318893]: 2025-11-24 21:06:57.800373201 +0000 UTC m=+0.994501272 container died 483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cee4e7166f11909921fcb059acced5b025847e8b0d51599adfa2e5f7dd00a468-merged.mount: Deactivated successfully.
Nov 24 21:06:57 compute-0 podman[318893]: 2025-11-24 21:06:57.869671718 +0000 UTC m=+1.063799739 container remove 483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:06:57 compute-0 systemd[1]: libpod-conmon-483e8113746a33ec879c8d6d1309024dee8e2c51be053a7444f998d205c9c677.scope: Deactivated successfully.
Nov 24 21:06:57 compute-0 sudo[318789]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:58 compute-0 sudo[318929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:58 compute-0 sudo[318929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:58 compute-0 sudo[318929]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:58 compute-0 sudo[318954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:06:58 compute-0 sudo[318954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:58 compute-0 sudo[318954]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:58 compute-0 ceph-mon[75677]: pgmap v2568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:58 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:58 compute-0 sudo[318979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:06:58 compute-0 sudo[318979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:58 compute-0 sudo[318979]: pam_unix(sudo:session): session closed for user root
Nov 24 21:06:58 compute-0 sudo[319004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:06:58 compute-0 sudo[319004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:06:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:58.566+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:58.670+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.747699622 +0000 UTC m=+0.066683956 container create c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 21:06:58 compute-0 systemd[1]: Started libpod-conmon-c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0.scope.
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.723432199 +0000 UTC m=+0.042416523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:06:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.848801245 +0000 UTC m=+0.167785569 container init c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.860684875 +0000 UTC m=+0.179669199 container start c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_solomon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.864995211 +0000 UTC m=+0.183979555 container attach c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 24 21:06:58 compute-0 recursing_solomon[319085]: 167 167
Nov 24 21:06:58 compute-0 systemd[1]: libpod-c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0.scope: Deactivated successfully.
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.866578843 +0000 UTC m=+0.185588658 container died c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_solomon, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-33424a30eb204127d7e307ecc26c35b34bc8bba517826ab113e714314e4284a7-merged.mount: Deactivated successfully.
Nov 24 21:06:58 compute-0 podman[319069]: 2025-11-24 21:06:58.913401385 +0000 UTC m=+0.232385719 container remove c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_solomon, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 21:06:58 compute-0 systemd[1]: libpod-conmon-c40c1583258c4bc839753c2f50795f7a9428f22c600a20bbf4d7aaa263c774b0.scope: Deactivated successfully.
Nov 24 21:06:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:06:59 compute-0 podman[319109]: 2025-11-24 21:06:59.164102836 +0000 UTC m=+0.069689107 container create e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_darwin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 21:06:59 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:06:59 compute-0 podman[319109]: 2025-11-24 21:06:59.130649215 +0000 UTC m=+0.036235556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:06:59 compute-0 systemd[1]: Started libpod-conmon-e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9.scope.
Nov 24 21:06:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a014fbd66c8123ea9cb028d410fa3848f286d7dfa021c4865ff86831e494a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a014fbd66c8123ea9cb028d410fa3848f286d7dfa021c4865ff86831e494a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a014fbd66c8123ea9cb028d410fa3848f286d7dfa021c4865ff86831e494a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a014fbd66c8123ea9cb028d410fa3848f286d7dfa021c4865ff86831e494a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:06:59 compute-0 podman[319109]: 2025-11-24 21:06:59.274016836 +0000 UTC m=+0.179603137 container init e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_darwin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:06:59 compute-0 podman[319109]: 2025-11-24 21:06:59.292400371 +0000 UTC m=+0.197986672 container start e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 21:06:59 compute-0 podman[319109]: 2025-11-24 21:06:59.297066047 +0000 UTC m=+0.202652338 container attach e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_darwin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:06:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:06:59.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:06:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:06:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:06:59.684+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:06:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:00 compute-0 ceph-mon[75677]: pgmap v2569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:00 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]: {
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "osd_id": 2,
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "type": "bluestore"
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:     },
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "osd_id": 1,
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "type": "bluestore"
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:     },
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "osd_id": 0,
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:         "type": "bluestore"
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]:     }
Nov 24 21:07:00 compute-0 ecstatic_darwin[319126]: }
Nov 24 21:07:00 compute-0 systemd[1]: libpod-e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9.scope: Deactivated successfully.
Nov 24 21:07:00 compute-0 systemd[1]: libpod-e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9.scope: Consumed 1.237s CPU time.
Nov 24 21:07:00 compute-0 podman[319109]: 2025-11-24 21:07:00.520685338 +0000 UTC m=+1.426271649 container died e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3a014fbd66c8123ea9cb028d410fa3848f286d7dfa021c4865ff86831e494a2-merged.mount: Deactivated successfully.
Nov 24 21:07:00 compute-0 podman[319109]: 2025-11-24 21:07:00.581446415 +0000 UTC m=+1.487032686 container remove e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:07:00 compute-0 systemd[1]: libpod-conmon-e1794a3cd2e936a2418de86a091d6e35133c8e2c3a636ed04706b338e772c6d9.scope: Deactivated successfully.
Nov 24 21:07:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:00.603+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:00 compute-0 sudo[319004]: pam_unix(sudo:session): session closed for user root
Nov 24 21:07:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:07:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:07:00 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:07:00 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:07:00 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev d264c089-46cf-4295-bba9-ba067a7ee6f8 does not exist
Nov 24 21:07:00 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 41401076-7f45-4a09-be97-f94428c23249 does not exist
Nov 24 21:07:00 compute-0 podman[319160]: 2025-11-24 21:07:00.698054615 +0000 UTC m=+0.131153164 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 24 21:07:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:00.715+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:00 compute-0 sudo[319194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:07:00 compute-0 sudo[319194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:07:00 compute-0 sudo[319194]: pam_unix(sudo:session): session closed for user root
Nov 24 21:07:00 compute-0 sudo[319219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:07:00 compute-0 sudo[319219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:07:00 compute-0 sudo[319219]: pam_unix(sudo:session): session closed for user root
Nov 24 21:07:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:01.645+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:01 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:07:01 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:07:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:01 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 4541 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:01.737+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:02.616+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:02 compute-0 ceph-mon[75677]: pgmap v2570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:02 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:02 compute-0 ceph-mon[75677]: Health check update: 37 slow ops, oldest one blocked for 4541 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:02.754+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:03.630+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:03 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:03.749+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:04.645+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:04 compute-0 ceph-mon[75677]: pgmap v2571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:04 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:04.749+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:04 compute-0 podman[319244]: 2025-11-24 21:07:04.898626224 +0000 UTC m=+0.130167337 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 21:07:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:05.647+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:05 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:05.769+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:06.612+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:06 compute-0 ceph-mon[75677]: pgmap v2572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:06 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:06.725+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:07.585+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:07.705+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 4546 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:07 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:08.629+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:08 compute-0 ceph-mon[75677]: pgmap v2573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:08 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:08 compute-0 ceph-mon[75677]: Health check update: 37 slow ops, oldest one blocked for 4546 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:08.744+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:07:09.427 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:07:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:07:09.427 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:07:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:07:09.427 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:07:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:09.600+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:09.732+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:09 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:10.602+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:10.755+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:10 compute-0 ceph-mon[75677]: pgmap v2574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:10 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:11.584+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:11.719+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:11 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:12.540+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:12.704+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:12 compute-0 ceph-mon[75677]: pgmap v2575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:12 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:13.520+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:13.703+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:13 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:14.546+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:14.675+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:14 compute-0 ceph-mon[75677]: pgmap v2576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:14 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:15.541+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:15.676+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:15 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:07:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359177200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:07:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:07:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/359177200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:07:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:16.523+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:16.704+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:16 compute-0 ceph-mon[75677]: pgmap v2577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:16 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/359177200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:07:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/359177200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:07:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 4551 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:17.563+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:17.751+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:17 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:17 compute-0 ceph-mon[75677]: pgmap v2578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:17 compute-0 ceph-mon[75677]: Health check update: 37 slow ops, oldest one blocked for 4551 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:18.593+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:18.754+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:18 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:19.598+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:19.796+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:19 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:19 compute-0 ceph-mon[75677]: pgmap v2579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:20.574+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:20.764+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:20 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:20 compute-0 podman[319270]: 2025-11-24 21:07:20.884531518 +0000 UTC m=+0.111810513 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 21:07:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:21.553+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:21.784+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:21 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:21 compute-0 ceph-mon[75677]: pgmap v2580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 37 slow ops, oldest one blocked for 4562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:22.525+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:22.798+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:22 compute-0 ceph-mon[75677]: 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:07:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:22 compute-0 ceph-mon[75677]: Health check update: 37 slow ops, oldest one blocked for 4562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:23.520+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:23.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:23 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:23 compute-0 ceph-mon[75677]: pgmap v2581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:07:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:24.535+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:07:24
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 24 21:07:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:07:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:24.850+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:24 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:25.530+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:25.896+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:25 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:25 compute-0 ceph-mon[75677]: pgmap v2582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:26.551+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:26.888+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:26 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 43 slow ops, oldest one blocked for 4562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:27.589+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:27 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:27 compute-0 ceph-mon[75677]: pgmap v2583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:27 compute-0 ceph-mon[75677]: Health check update: 43 slow ops, oldest one blocked for 4562 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:27.937+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:28 compute-0 sshd-session[319289]: Invalid user sopuser from 182.93.7.194 port 56292
Nov 24 21:07:28 compute-0 sshd-session[319289]: Received disconnect from 182.93.7.194 port 56292:11: Bye Bye [preauth]
Nov 24 21:07:28 compute-0 sshd-session[319289]: Disconnected from invalid user sopuser 182.93.7.194 port 56292 [preauth]
Nov 24 21:07:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:28.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:28 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:28.935+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:29.645+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:29.959+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:29 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:29 compute-0 ceph-mon[75677]: pgmap v2584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:30.648+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:30 compute-0 podman[319291]: 2025-11-24 21:07:30.886873026 +0000 UTC m=+0.105067391 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:07:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:31.006+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:31 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:31.611+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:32.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:32 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:32 compute-0 ceph-mon[75677]: pgmap v2585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 43 slow ops, oldest one blocked for 4572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:32.651+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:33.006+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:33 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:33 compute-0 ceph-mon[75677]: Health check update: 43 slow ops, oldest one blocked for 4572 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:33.700+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:33.990+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:34 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:34 compute-0 ceph-mon[75677]: pgmap v2586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:34.737+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:34.998+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:35 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:07:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:07:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:35.723+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:35 compute-0 podman[319311]: 2025-11-24 21:07:35.918855415 +0000 UTC m=+0.138681666 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:07:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:35.950+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:36 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:36 compute-0 ceph-mon[75677]: pgmap v2587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:36.676+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:36.921+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:37 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 43 slow ops, oldest one blocked for 4577 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:37.633+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:37.930+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:38 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:38 compute-0 ceph-mon[75677]: pgmap v2588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:38 compute-0 ceph-mon[75677]: Health check update: 43 slow ops, oldest one blocked for 4577 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:38 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:38.667+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:38.929+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:39 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:39.683+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:39.959+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:40 compute-0 ceph-mon[75677]: pgmap v2589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:40 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:40.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:07:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:07:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:40.920+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:41 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:41.689+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:41.902+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:42 compute-0 ceph-mon[75677]: pgmap v2590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:42 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 43 slow ops, oldest one blocked for 4582 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:42.671+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:42.923+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:43 compute-0 ceph-mon[75677]: Health check update: 43 slow ops, oldest one blocked for 4582 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:43 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:43.703+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:43.965+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:44 compute-0 ceph-mon[75677]: pgmap v2591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:44 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:44.725+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:44.999+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:45 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:45.722+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:46.007+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:46 compute-0 ceph-mon[75677]: pgmap v2592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:46 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:46.728+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:47.049+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 43 slow ops, oldest one blocked for 4587 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:47 compute-0 ceph-mon[75677]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:07:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:47.695+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:48.057+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:48 compute-0 ceph-mon[75677]: pgmap v2593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:48 compute-0 ceph-mon[75677]: Health check update: 43 slow ops, oldest one blocked for 4587 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:48 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:48.700+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:49.052+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:49 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:49.745+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:50.023+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:50 compute-0 ceph-mon[75677]: pgmap v2594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:50 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:50.724+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:51.023+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:51 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:51.687+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:51 compute-0 podman[319337]: 2025-11-24 21:07:51.856394995 +0000 UTC m=+0.086599783 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 21:07:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:51.982+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:52 compute-0 ceph-mon[75677]: pgmap v2595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:52 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:52.670+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:52.943+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:53 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:53.656+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:53.968+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:07:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:07:54 compute-0 ceph-mon[75677]: pgmap v2596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:54 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:54.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:54.973+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:55 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:55.649+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:56.004+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:56 compute-0 ceph-mon[75677]: pgmap v2597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:56 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:56.685+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:56.966+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 4592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.375785) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018477375877, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1451, "num_deletes": 469, "total_data_size": 1373117, "memory_usage": 1405264, "flush_reason": "Manual Compaction"}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018477394111, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1348506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77745, "largest_seqno": 79195, "table_properties": {"data_size": 1342285, "index_size": 2783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 21552, "raw_average_key_size": 22, "raw_value_size": 1326589, "raw_average_value_size": 1411, "num_data_blocks": 121, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 469, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018398, "oldest_key_time": 1764018398, "file_creation_time": 1764018477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 18390 microseconds, and 8228 cpu microseconds.
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.394177) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1348506 bytes OK
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.394208) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.396835) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.396859) EVENT_LOG_v1 {"time_micros": 1764018477396851, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.396890) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1365371, prev total WAL file size 1365371, number of live WAL files 2.
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.397837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373737' seq:72057594037927935, type:22 .. '6C6F676D0034303332' seq:0, type:0; will stop at (end)
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1316KB)], [164(8949KB)]
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018477397905, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10513292, "oldest_snapshot_seqno": -1}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 13365 keys, 10256337 bytes, temperature: kUnknown
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018477501135, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10256337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10183207, "index_size": 38585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 369036, "raw_average_key_size": 27, "raw_value_size": 9954151, "raw_average_value_size": 744, "num_data_blocks": 1401, "num_entries": 13365, "num_filter_entries": 13365, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.501552) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10256337 bytes
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.507694) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.7 rd, 99.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.7 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(15.4) write-amplify(7.6) OK, records in: 14315, records dropped: 950 output_compression: NoCompression
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.507738) EVENT_LOG_v1 {"time_micros": 1764018477507716, "job": 102, "event": "compaction_finished", "compaction_time_micros": 103334, "compaction_time_cpu_micros": 58730, "output_level": 6, "num_output_files": 1, "total_output_size": 10256337, "num_input_records": 14315, "num_output_records": 13365, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018477508545, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018477512549, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.397668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.512650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.512658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.512661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.512664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:07:57 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:07:57.512667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:07:57 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:57 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 4592 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:07:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:57.719+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:57.977+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:58 compute-0 ceph-mon[75677]: pgmap v2598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:58 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:58.752+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:58.972+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:07:59 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:07:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:07:59.744+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:07:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:07:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:07:59.967+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:07:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:00 compute-0 ceph-mon[75677]: pgmap v2599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:00 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:00.714+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:00.938+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:00 compute-0 sudo[319356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:00 compute-0 sudo[319356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:00 compute-0 sudo[319356]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:01 compute-0 podman[319380]: 2025-11-24 21:08:01.05756724 +0000 UTC m=+0.077164500 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 21:08:01 compute-0 sudo[319387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:08:01 compute-0 sudo[319387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:01 compute-0 sudo[319387]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:01 compute-0 sudo[319425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:01 compute-0 sudo[319425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:01 compute-0 sudo[319425]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:01 compute-0 sudo[319450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:08:01 compute-0 sudo[319450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:01 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:01.674+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:01 compute-0 sudo[319450]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:08:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:08:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:08:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:08:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6409b001-2811-4f32-ad4c-a787024c16b9 does not exist
Nov 24 21:08:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f8961889-0894-4abf-88d9-5e940da216d4 does not exist
Nov 24 21:08:01 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 43a8cadb-3a41-4706-9741-75dfee1966ed does not exist
Nov 24 21:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:08:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:08:01 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:08:01 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:08:01 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:08:01 compute-0 sudo[319508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:01.965+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:01 compute-0 sudo[319508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:01 compute-0 sudo[319508]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:02 compute-0 sudo[319533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:08:02 compute-0 sudo[319533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:02 compute-0 sudo[319533]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:02 compute-0 sudo[319558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:02 compute-0 sudo[319558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:02 compute-0 sudo[319558]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:02 compute-0 sudo[319583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:08:02 compute-0 sudo[319583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 4602 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:02 compute-0 ceph-mon[75677]: pgmap v2600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:02 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:08:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:08:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:08:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:08:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:08:02 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:08:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:02 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 4602 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:02.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.752662668 +0000 UTC m=+0.084098346 container create ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 21:08:02 compute-0 systemd[1]: Started libpod-conmon-ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d.scope.
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.724305573 +0000 UTC m=+0.055741351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:08:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.855545758 +0000 UTC m=+0.186981466 container init ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.872121594 +0000 UTC m=+0.203557312 container start ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.877012096 +0000 UTC m=+0.208447804 container attach ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 21:08:02 compute-0 happy_dubinsky[319665]: 167 167
Nov 24 21:08:02 compute-0 systemd[1]: libpod-ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d.scope: Deactivated successfully.
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.883052199 +0000 UTC m=+0.214487897 container died ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 21:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cebb0b240548dadae48c0fc12445ab16ee52081cd1ba39f406dd08fc39a97afa-merged.mount: Deactivated successfully.
Nov 24 21:08:02 compute-0 podman[319649]: 2025-11-24 21:08:02.932971423 +0000 UTC m=+0.264407121 container remove ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 24 21:08:02 compute-0 systemd[1]: libpod-conmon-ee6f6dff73a79e85b9e037e689ae4c6132c36cf18837d9fbcc8be1f951ba708d.scope: Deactivated successfully.
Nov 24 21:08:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:02.979+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:03 compute-0 podman[319690]: 2025-11-24 21:08:03.19467262 +0000 UTC m=+0.068980878 container create b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 21:08:03 compute-0 systemd[1]: Started libpod-conmon-b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f.scope.
Nov 24 21:08:03 compute-0 podman[319690]: 2025-11-24 21:08:03.16681521 +0000 UTC m=+0.041123498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:08:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fd1307f4b17c611b2a6b233f6c184432e2e17d72b770e8917a167001378d83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fd1307f4b17c611b2a6b233f6c184432e2e17d72b770e8917a167001378d83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fd1307f4b17c611b2a6b233f6c184432e2e17d72b770e8917a167001378d83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fd1307f4b17c611b2a6b233f6c184432e2e17d72b770e8917a167001378d83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fd1307f4b17c611b2a6b233f6c184432e2e17d72b770e8917a167001378d83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:03 compute-0 podman[319690]: 2025-11-24 21:08:03.310066948 +0000 UTC m=+0.184375246 container init b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 21:08:03 compute-0 podman[319690]: 2025-11-24 21:08:03.323666354 +0000 UTC m=+0.197974582 container start b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:08:03 compute-0 podman[319690]: 2025-11-24 21:08:03.32758177 +0000 UTC m=+0.201890078 container attach b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 21:08:03 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:03.718+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:03.954+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:04 compute-0 pensive_heyrovsky[319707]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:08:04 compute-0 pensive_heyrovsky[319707]: --> relative data size: 1.0
Nov 24 21:08:04 compute-0 pensive_heyrovsky[319707]: --> All data devices are unavailable
Nov 24 21:08:04 compute-0 systemd[1]: libpod-b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f.scope: Deactivated successfully.
Nov 24 21:08:04 compute-0 systemd[1]: libpod-b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f.scope: Consumed 1.193s CPU time.
Nov 24 21:08:04 compute-0 podman[319690]: 2025-11-24 21:08:04.567577212 +0000 UTC m=+1.441885470 container died b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:08:04 compute-0 ceph-mon[75677]: pgmap v2601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:04 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fd1307f4b17c611b2a6b233f6c184432e2e17d72b770e8917a167001378d83-merged.mount: Deactivated successfully.
Nov 24 21:08:04 compute-0 podman[319690]: 2025-11-24 21:08:04.643366444 +0000 UTC m=+1.517674662 container remove b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 21:08:04 compute-0 systemd[1]: libpod-conmon-b822764f9ae0277cb21d29e609729f052f972925d6453c588790035386ad293f.scope: Deactivated successfully.
Nov 24 21:08:04 compute-0 sudo[319583]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:04.732+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:04 compute-0 sudo[319751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:04 compute-0 sudo[319751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:04 compute-0 sudo[319751]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:04 compute-0 sudo[319776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:08:04 compute-0 sudo[319776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:04 compute-0 sudo[319776]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:04.934+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:04 compute-0 sudo[319801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:04 compute-0 sudo[319801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:04 compute-0 sudo[319801]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:05 compute-0 sudo[319826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:08:05 compute-0 sudo[319826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.542567789 +0000 UTC m=+0.047102800 container create c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:08:05 compute-0 systemd[1]: Started libpod-conmon-c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee.scope.
Nov 24 21:08:05 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.520101254 +0000 UTC m=+0.024636265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:08:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.653570158 +0000 UTC m=+0.158105159 container init c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.663225898 +0000 UTC m=+0.167760889 container start c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.669162328 +0000 UTC m=+0.173697389 container attach c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:08:05 compute-0 nervous_margulis[319907]: 167 167
Nov 24 21:08:05 compute-0 systemd[1]: libpod-c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee.scope: Deactivated successfully.
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.672365424 +0000 UTC m=+0.176900435 container died c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 21:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c4be921256fa32a7d50889ecfb74ca88de8111c5ddcaf6593a9006d8633444-merged.mount: Deactivated successfully.
Nov 24 21:08:05 compute-0 podman[319891]: 2025-11-24 21:08:05.730074147 +0000 UTC m=+0.234609158 container remove c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:08:05 compute-0 systemd[1]: libpod-conmon-c79a733ca9d28df37c3b50dcd6a7b8016c9e84bba02cc2c65a6751272c155dee.scope: Deactivated successfully.
Nov 24 21:08:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:05.775+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:05.892+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:05 compute-0 podman[319930]: 2025-11-24 21:08:05.92808565 +0000 UTC m=+0.039867354 container create 1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 21:08:05 compute-0 systemd[1]: Started libpod-conmon-1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f.scope.
Nov 24 21:08:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aede94900eabdb4013e33627211197f3d05ee75e74ea8e8e68b9a6df9a75a8a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aede94900eabdb4013e33627211197f3d05ee75e74ea8e8e68b9a6df9a75a8a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aede94900eabdb4013e33627211197f3d05ee75e74ea8e8e68b9a6df9a75a8a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aede94900eabdb4013e33627211197f3d05ee75e74ea8e8e68b9a6df9a75a8a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:05 compute-0 podman[319930]: 2025-11-24 21:08:05.998449615 +0000 UTC m=+0.110231359 container init 1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 21:08:06 compute-0 podman[319930]: 2025-11-24 21:08:06.007655203 +0000 UTC m=+0.119436907 container start 1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:08:06 compute-0 podman[319930]: 2025-11-24 21:08:05.912342437 +0000 UTC m=+0.024124171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:08:06 compute-0 podman[319930]: 2025-11-24 21:08:06.012037171 +0000 UTC m=+0.123818915 container attach 1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 21:08:06 compute-0 podman[319944]: 2025-11-24 21:08:06.081347068 +0000 UTC m=+0.113287903 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 24 21:08:06 compute-0 ceph-mon[75677]: pgmap v2602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:06 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]: {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:     "0": [
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:         {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "devices": [
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "/dev/loop3"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             ],
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_name": "ceph_lv0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_size": "21470642176",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "name": "ceph_lv0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "tags": {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cluster_name": "ceph",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.crush_device_class": "",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.encrypted": "0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osd_id": "0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.type": "block",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.vdo": "0"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             },
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "type": "block",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "vg_name": "ceph_vg0"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:         }
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:     ],
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:     "1": [
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:         {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "devices": [
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "/dev/loop4"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             ],
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_name": "ceph_lv1",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_size": "21470642176",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "name": "ceph_lv1",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "tags": {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cluster_name": "ceph",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.crush_device_class": "",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.encrypted": "0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osd_id": "1",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.type": "block",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.vdo": "0"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             },
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "type": "block",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "vg_name": "ceph_vg1"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:         }
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:     ],
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:     "2": [
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:         {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "devices": [
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "/dev/loop5"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             ],
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_name": "ceph_lv2",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_size": "21470642176",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "name": "ceph_lv2",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "tags": {
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.cluster_name": "ceph",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.crush_device_class": "",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.encrypted": "0",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osd_id": "2",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.type": "block",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:                 "ceph.vdo": "0"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             },
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "type": "block",
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:             "vg_name": "ceph_vg2"
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:         }
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]:     ]
Nov 24 21:08:06 compute-0 xenodochial_colden[319948]: }
Nov 24 21:08:06 compute-0 systemd[1]: libpod-1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f.scope: Deactivated successfully.
Nov 24 21:08:06 compute-0 conmon[319948]: conmon 1d03ea691a4f61f9f3ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f.scope/container/memory.events
Nov 24 21:08:06 compute-0 podman[319930]: 2025-11-24 21:08:06.815850627 +0000 UTC m=+0.927632371 container died 1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 21:08:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:06.823+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:06.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-aede94900eabdb4013e33627211197f3d05ee75e74ea8e8e68b9a6df9a75a8a3-merged.mount: Deactivated successfully.
Nov 24 21:08:06 compute-0 podman[319930]: 2025-11-24 21:08:06.891816073 +0000 UTC m=+1.003597777 container remove 1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:08:06 compute-0 systemd[1]: libpod-conmon-1d03ea691a4f61f9f3ba70a61da9812d062b0754ed269b6f1ebd48e25390248f.scope: Deactivated successfully.
Nov 24 21:08:06 compute-0 sudo[319826]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:07 compute-0 sudo[319994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:07 compute-0 sudo[319994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:07 compute-0 sudo[319994]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:07 compute-0 sudo[320019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:08:07 compute-0 sudo[320019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:07 compute-0 sudo[320019]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:07 compute-0 sudo[320044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:07 compute-0 sudo[320044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:07 compute-0 sudo[320044]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:07 compute-0 sudo[320069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:08:07 compute-0 sudo[320069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 4607 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:07 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.692343231 +0000 UTC m=+0.055270160 container create 5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 21:08:07 compute-0 systemd[1]: Started libpod-conmon-5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b.scope.
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.667994715 +0000 UTC m=+0.030921694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:08:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.804393569 +0000 UTC m=+0.167320538 container init 5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.810725209 +0000 UTC m=+0.173652128 container start 5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 24 21:08:07 compute-0 vibrant_goodall[320151]: 167 167
Nov 24 21:08:07 compute-0 systemd[1]: libpod-5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b.scope: Deactivated successfully.
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.814793109 +0000 UTC m=+0.177720038 container attach 5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.815985211 +0000 UTC m=+0.178912140 container died 5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 21:08:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:07.816+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-41ec2e79c4eb5adc65f4faf53f06c668cec646cd813a1bdb6cfdc318311ed3f9-merged.mount: Deactivated successfully.
Nov 24 21:08:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:07.863+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:07 compute-0 podman[320134]: 2025-11-24 21:08:07.868355051 +0000 UTC m=+0.231281960 container remove 5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:08:07 compute-0 systemd[1]: libpod-conmon-5dea7ada61e566878ab56ea7b5bcefd904be04d2a2c7f5ac49ed45468d13970b.scope: Deactivated successfully.
Nov 24 21:08:08 compute-0 podman[320177]: 2025-11-24 21:08:08.104574562 +0000 UTC m=+0.068250528 container create d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:08:08 compute-0 systemd[1]: Started libpod-conmon-d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8.scope.
Nov 24 21:08:08 compute-0 podman[320177]: 2025-11-24 21:08:08.077744669 +0000 UTC m=+0.041420675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:08:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8035ce8126eba246fbc4dde93e576eef9e3d04cf061fbd731397d274dda14f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8035ce8126eba246fbc4dde93e576eef9e3d04cf061fbd731397d274dda14f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8035ce8126eba246fbc4dde93e576eef9e3d04cf061fbd731397d274dda14f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8035ce8126eba246fbc4dde93e576eef9e3d04cf061fbd731397d274dda14f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:08:08 compute-0 podman[320177]: 2025-11-24 21:08:08.203796284 +0000 UTC m=+0.167472250 container init d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:08:08 compute-0 podman[320177]: 2025-11-24 21:08:08.220090873 +0000 UTC m=+0.183766859 container start d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 21:08:08 compute-0 podman[320177]: 2025-11-24 21:08:08.224648016 +0000 UTC m=+0.188323972 container attach d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 21:08:08 compute-0 ceph-mon[75677]: pgmap v2603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:08 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 4607 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:08 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:08.769+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:08.857+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:09 compute-0 tender_perlman[320193]: {
Nov 24 21:08:09 compute-0 tender_perlman[320193]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "osd_id": 2,
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "type": "bluestore"
Nov 24 21:08:09 compute-0 tender_perlman[320193]:     },
Nov 24 21:08:09 compute-0 tender_perlman[320193]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "osd_id": 1,
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "type": "bluestore"
Nov 24 21:08:09 compute-0 tender_perlman[320193]:     },
Nov 24 21:08:09 compute-0 tender_perlman[320193]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "osd_id": 0,
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:08:09 compute-0 tender_perlman[320193]:         "type": "bluestore"
Nov 24 21:08:09 compute-0 tender_perlman[320193]:     }
Nov 24 21:08:09 compute-0 tender_perlman[320193]: }
Nov 24 21:08:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:08:09.428 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:08:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:08:09.428 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:08:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:08:09.428 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:08:09 compute-0 systemd[1]: libpod-d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8.scope: Deactivated successfully.
Nov 24 21:08:09 compute-0 podman[320177]: 2025-11-24 21:08:09.431563698 +0000 UTC m=+1.395239634 container died d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 21:08:09 compute-0 systemd[1]: libpod-d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8.scope: Consumed 1.213s CPU time.
Nov 24 21:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8035ce8126eba246fbc4dde93e576eef9e3d04cf061fbd731397d274dda14f9-merged.mount: Deactivated successfully.
Nov 24 21:08:09 compute-0 podman[320177]: 2025-11-24 21:08:09.500838742 +0000 UTC m=+1.464514678 container remove d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:08:09 compute-0 systemd[1]: libpod-conmon-d956d05f6000398d602e1f67aae3f916f65216d9658510a9db8b6141577fc4d8.scope: Deactivated successfully.
Nov 24 21:08:09 compute-0 sudo[320069]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:08:09 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:08:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:08:09 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:08:09 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 5b13f22a-2116-4aeb-ac01-e815f89fca90 does not exist
Nov 24 21:08:09 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev eb49ab9a-d4aa-4bfb-9c2e-2023e02e3eb2 does not exist
Nov 24 21:08:09 compute-0 sudo[320238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:08:09 compute-0 sudo[320238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:09 compute-0 sudo[320238]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:09 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:08:09 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:08:09 compute-0 sudo[320263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:08:09 compute-0 sudo[320263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:08:09 compute-0 sudo[320263]: pam_unix(sudo:session): session closed for user root
Nov 24 21:08:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:09.740+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:09.886+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:10 compute-0 ceph-mon[75677]: pgmap v2604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:10 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:10.783+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:10.897+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:11 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:11.825+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:11.944+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:12 compute-0 ceph-mon[75677]: pgmap v2605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:12 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:12.831+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:12.988+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:13 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:13.803+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:13.979+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:14 compute-0 ceph-mon[75677]: pgmap v2606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:14 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:14.830+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:14.990+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:15 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:15.876+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:15.974+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:08:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2925475213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:08:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:08:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2925475213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:08:16 compute-0 ceph-mon[75677]: pgmap v2607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:16 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2925475213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:08:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2925475213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
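(annotation) The two mon_command dispatches above are a capacity poll from client.openstack at 192.168.122.10, likely an OpenStack service checking free space and quota on the volumes pool. A minimal sketch of issuing the same two commands through python3-rados; the conffile path and keyring are assumptions, and the JSON payloads are copied from the audit lines above:

    import json
    import rados  # python3-rados; assumes a reachable cluster and client.openstack keyring

    # Hypothetical paths; substitute your own ceph.conf / keyring locations.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()

    # Same JSON payloads the monitor logged at dispatch above.
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd["prefix"], ret, out[:80])

    cluster.shutdown()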
Nov 24 21:08:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:16.924+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:16.995+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 36 slow ops, oldest one blocked for 4612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:17 compute-0 ceph-mon[75677]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:08:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:17 compute-0 ceph-mon[75677]: Health check update: 36 slow ops, oldest one blocked for 4612 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
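(annotation) The SLOW_OPS total in the mon health line tracks the sum of the per-OSD get_health_metrics counts: 15 (osd.0) + 21 (osd.1) = 36 here, rising to 23 + 21 = 44 five seconds later once osd.0's count grows. A minimal sketch of that roll-up, an assumed aggregation consistent with the numbers in this log rather than the mon's exact code path:

    # Per-daemon slow-op counts as reported by get_health_metrics above.
    reports = {"osd.0": 23, "osd.1": 21}
    total = sum(reports.values())
    daemons = ','.join(sorted(d for d, n in reports.items() if n > 0))
    print(f"{total} slow ops, daemons [{daemons}] have slow ops. (SLOW_OPS)")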
Nov 24 21:08:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:17.912+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:17.987+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:18 compute-0 ceph-mon[75677]: pgmap v2608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:18 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:18.960+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:19.005+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:19 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:19.958+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:20.008+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:20 compute-0 ceph-mon[75677]: pgmap v2609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:20 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:20.992+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:21.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:21 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:21.952+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:22.075+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 4622 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:22 compute-0 ceph-mon[75677]: pgmap v2610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:22 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:22 compute-0 ceph-mon[75677]: Health check update: 44 slow ops, oldest one blocked for 4622 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:22 compute-0 podman[320288]: 2025-11-24 21:08:22.860912715 +0000 UTC m=+0.079649507 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:08:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:22.905+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:23.094+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:08:23 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 8612 writes, 33K keys, 8612 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8612 writes, 2117 syncs, 4.07 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 137 writes, 395 keys, 137 commit groups, 1.0 writes per commit group, ingest: 0.23 MB, 0.00 MB/s
                                           Interval WAL: 137 writes, 64 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
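(annotation) The "writes per sync" figure in the RocksDB DB Stats dump above is just cumulative WAL writes divided by syncs: 8612 / 2117 is approximately 4.07. A small sketch that re-derives it from the dumped line:

    import re

    # Check the arithmetic in the "Cumulative WAL" line of the DB Stats dump.
    line = ("Cumulative WAL: 8612 writes, 2117 syncs, 4.07 writes per sync, "
            "written: 0.03 GB, 0.01 MB/s")
    m = re.search(r"(\d+) writes, (\d+) syncs", line)
    writes, syncs = map(int, m.groups())
    print(f"{writes / syncs:.2f} writes per sync")  # -> 4.07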
Nov 24 21:08:23 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:23.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:24.127+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:08:24
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data']
Nov 24 21:08:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:08:24 compute-0 ceph-mon[75677]: pgmap v2611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:24 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:24.940+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:25.126+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:25 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:25 compute-0 ceph-mon[75677]: pgmap v2612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:25.950+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:26.159+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:26 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:26.918+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:27.181+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 4627 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:27 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:27 compute-0 ceph-mon[75677]: pgmap v2613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:27.968+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:28.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:08:28 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 9713 writes, 37K keys, 9713 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9713 writes, 2484 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 109 writes, 367 keys, 109 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s
                                           Interval WAL: 109 writes, 46 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:08:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:28 compute-0 ceph-mon[75677]: Health check update: 44 slow ops, oldest one blocked for 4627 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:28 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:28.998+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:29.111+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:29 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:29 compute-0 ceph-mon[75677]: pgmap v2614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:30.016+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:30.151+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:30 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:31.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:31.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:31 compute-0 podman[320309]: 2025-11-24 21:08:31.87904736 +0000 UTC m=+0.093354466 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:08:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:31 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:31 compute-0 ceph-mon[75677]: pgmap v2615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:32.048+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:32.166+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:32 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:33.060+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:33.212+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:33 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:33 compute-0 ceph-mon[75677]: pgmap v2616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:34.019+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:34.207+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:08:34 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 8323 writes, 32K keys, 8323 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8323 writes, 1994 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 147 writes, 412 keys, 147 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 147 writes, 65 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:08:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:34.990+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:34 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:35.254+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:08:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
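(annotation) The raw pg targets the autoscaler logs above reproduce as capacity_ratio * bias * pg_budget with pg_budget = 300 on this cluster (plausibly mon_target_pg_per_osd=100 across 3 OSDs; an assumption, not stated in the log). The subsequent "quantized to" step, power-of-two rounding plus keeping the current pg_num when the deviation is small, is simplified here rather than the exact mgr logic:

    # Sketch of the pg_autoscaler arithmetic visible in the log above.
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    # Pool 'vms': matches the logged "pg target 0.25912577529790976"
    print(raw_pg_target(0.0008637525843263658, 1.0))
    # Pool 'cephfs.cephfs.meta': matches "pg target 0.0006104707950771635"
    print(raw_pg_target(5.087256625643029e-07, 4.0))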
Nov 24 21:08:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:36.007+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:36 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:36 compute-0 ceph-mon[75677]: pgmap v2617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:36.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:36 compute-0 podman[320329]: 2025-11-24 21:08:36.878981487 +0000 UTC m=+0.099653916 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
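podman's periodic healthcheck emits one health_status event per run, carrying the container's labels and its full config_data blob; for monitoring, the useful fields are health_status and health_failing_streak (here healthy with a streak of 0). A small parser for those key=value pairs, assuming the layout stays as above:

```python
import re

def health_fields(line: str) -> dict:
    """Extract the monitoring-relevant key=value pairs from a podman
    health_status journal line like the ovn_controller one above."""
    fields = {}
    for key in ("container_name", "health_status", "health_failing_streak"):
        m = re.search(rf"{key}=([^,)]+)", line)
        if m:
            fields[key] = m.group(1).strip()
    return fields

# -> {'container_name': 'ovn_controller', 'health_status': 'healthy',
#     'health_failing_streak': '0'}
```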
Nov 24 21:08:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:36.971+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
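Each OSD re-runs get_health_metrics roughly once per second while ops stay blocked, and every result is journaled twice: once by the containerized unit (ceph-05e060a3-...-osd-N, with its own ISO timestamp in the payload) and once under the plain ceph-osd identifier, which is why the same message appears back to back throughout this excerpt. To follow the counts without the duplication, each line can be reduced to an (osd, count) pair; a sketch assuming the message keeps this exact shape:

```python
import re

# "osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(...)"
SLOW_RE = re.compile(
    r"osd\.(?P<osd>\d+) \d+ get_health_metrics reporting "
    r"(?P<count>\d+) slow ops, oldest is (?P<oldest>osd_op\(.*\))"
)

def parse_slow_line(line: str):
    """Return (osd_id, slow_op_count, oldest_op) or None for other lines."""
    m = SLOW_RE.search(line)
    if not m:
        return None
    return int(m.group("osd")), int(m.group("count")), m.group("oldest")
```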
Nov 24 21:08:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:37 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:37.218+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 4632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
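The mon-level SLOW_OPS check is the sum of the per-OSD reports (23 on osd.0 + 21 on osd.1 = 44), and "blocked for 4632 sec" dates the oldest stalled op to roughly 19:51 UTC, well before this excerpt begins. The back-computation:

```python
from datetime import datetime, timedelta, timezone

report = datetime(2025, 11, 24, 21, 8, 37, tzinfo=timezone.utc)
print((report - timedelta(seconds=4632)).isoformat())
# 2025-11-24T19:51:25+00:00 -- when the oldest op stalled
```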
Nov 24 21:08:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:38.004+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:38 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:38 compute-0 ceph-mon[75677]: pgmap v2618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:38 compute-0 ceph-mon[75677]: Health check update: 44 slow ops, oldest one blocked for 4632 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:38.226+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:39.033+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:39 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:39.202+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:39 compute-0 sshd-session[315511]: Connection closed by 192.168.122.30 port 56840
Nov 24 21:08:39 compute-0 sshd-session[315508]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:08:39 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 24 21:08:39 compute-0 systemd[1]: session-53.scope: Consumed 1.473s CPU time.
Nov 24 21:08:39 compute-0 systemd-logind[795]: Session 53 logged out. Waiting for processes to exit.
Nov 24 21:08:39 compute-0 systemd-logind[795]: Removed session 53.
Nov 24 21:08:39 compute-0 ceph-mgr[75975]: [devicehealth INFO root] Check health
Nov 24 21:08:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:40.034+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:40 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:40 compute-0 ceph-mon[75677]: pgmap v2619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:40 compute-0 sshd-session[315766]: Connection closed by 192.168.122.30 port 56846
Nov 24 21:08:40 compute-0 sshd-session[315763]: pam_unix(sshd:session): session closed for user zuul
Nov 24 21:08:40 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 24 21:08:40 compute-0 systemd-logind[795]: Session 54 logged out. Waiting for processes to exit.
Nov 24 21:08:40 compute-0 systemd-logind[795]: Removed session 54.
Nov 24 21:08:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:40.169+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:08:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
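The rbd_support module's MirrorSnapshotScheduleHandler reloads its schedule state for each RBD pool (vms, volumes, backups, images); the empty start_after= marker reads like the resume key of a paged scan starting from the beginning. A generic sketch of that pagination pattern, under the assumption that the marker works as a resume key (fetch_page and its entry objects are hypothetical stand-ins, not the module's API):

```python
def load_schedules(fetch_page, start_after: str = "", page_size: int = 64):
    """Paged scan: keep fetching until a short page, resuming from the
    last key seen. fetch_page is a hypothetical callable returning a
    list of objects with a .key attribute."""
    entries = []
    while True:
        page = fetch_page(start_after=start_after, max_entries=page_size)
        entries.extend(page)
        if len(page) < page_size:
            return entries
        start_after = page[-1].key
```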
Nov 24 21:08:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:41.061+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:41 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:41.154+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:42.034+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:42 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:42 compute-0 ceph-mon[75677]: pgmap v2620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:42.182+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 4642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
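_set_new_cache_sizes fires about every five seconds and splits the mon's memory target into increment, full-map, and key-value cache allocations; the figures stay constant for the whole excerpt. Converted to MiB:

```python
sizes = {"cache_size": 1020054731, "inc_alloc": 343932928,
         "full_alloc": 348127232, "kv_alloc": 318767104}
for name, nbytes in sizes.items():
    print(f"{name}: {nbytes / 2**20:.1f} MiB")
# cache_size ~972.8 MiB; inc/full/kv come out to exactly 328/332/304 MiB
```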
Nov 24 21:08:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:43.002+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:43 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:43 compute-0 ceph-mon[75677]: Health check update: 44 slow ops, oldest one blocked for 4642 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:43.146+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:44.017+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:44.117+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:44 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:44 compute-0 ceph-mon[75677]: pgmap v2621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:45.019+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:45 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:45.154+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:46.067+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:46 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:46 compute-0 ceph-mon[75677]: pgmap v2622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:46.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:47.070+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:47.092+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:47 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:48.066+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:48.110+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
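Here osd.0's report grows from 23 to 24 slow ops (another op against the vms pool crossed the slow threshold); the mon's aggregate catches up at 21:08:57, moving from 44 to 45. A tiny tracker that surfaces such transitions from a stream of (timestamp, osd, count) tuples, e.g. as produced by the parser sketched earlier:

```python
def count_changes(reports):
    """Yield (ts, osd, old, new) whenever an OSD's slow-op count moves,
    e.g. osd.0 going 23 -> 24 at 21:08:48."""
    last = {}
    for ts, osd, count in reports:
        if osd in last and last[osd] != count:
            yield ts, osd, last[osd], count
        last[osd] = count
```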
Nov 24 21:08:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 44 slow ops, oldest one blocked for 4647 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:48 compute-0 ceph-mon[75677]: 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:08:48 compute-0 ceph-mon[75677]: pgmap v2623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:49.093+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:49.112+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:49 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:49 compute-0 ceph-mon[75677]: Health check update: 44 slow ops, oldest one blocked for 4647 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:50.061+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:50 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:50 compute-0 ceph-mon[75677]: pgmap v2624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:50.144+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:51.060+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:51.156+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:51 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:52.106+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:52.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:52 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:52 compute-0 ceph-mon[75677]: pgmap v2625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:52 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:53.084+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:53.140+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:53 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:53 compute-0 podman[320355]: 2025-11-24 21:08:53.872742052 +0000 UTC m=+0.097865537 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 21:08:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:54.075+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:54.185+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:54 compute-0 ceph-mon[75677]: pgmap v2626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:08:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
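The volumes module's connection sweep runs three times in the same second here (plausibly one pass per handler, though the log does not say) and finds nothing idle, hence the empty lists. What it logs is the classic scan-then-reap loop; a toy version of that shape, not the module's code:

```python
import time

class ConnectionPool:
    """Toy scan-then-reap sweep: collect handles idle past a timeout,
    log them, then drop them -- the shape the volumes module reports."""
    def __init__(self, idle_timeout: float = 60.0):
        self.idle_timeout = idle_timeout
        self.conns = {}  # name -> last-used monotonic timestamp

    def cleanup(self):
        now = time.monotonic()
        idle = [n for n, t in self.conns.items()
                if now - t > self.idle_timeout]
        print(f"cleaning up connections: {idle}")
        for n in idle:
            self.conns.pop(n)
```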
Nov 24 21:08:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:55.048+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:55.193+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:55 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:56.060+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:56 compute-0 ceph-mon[75677]: pgmap v2627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:56 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:56.230+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:57.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4657 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:57 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:57.248+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:08:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:58.071+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:58 compute-0 ceph-mon[75677]: pgmap v2628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:58 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4657 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:08:58 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:58.262+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:08:59.077+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:08:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:08:59 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:08:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:08:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:08:59.259+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:08:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:00.102+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:00 compute-0 ceph-mon[75677]: pgmap v2629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:00 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:00.264+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:01.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:01 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:01.281+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:02.104+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:02 compute-0 ceph-mon[75677]: pgmap v2630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:02 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:02.296+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4662 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:02 compute-0 podman[320374]: 2025-11-24 21:09:02.829516583 +0000 UTC m=+0.069076372 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 21:09:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:03.113+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:03.258+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:03 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:03 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4662 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:04.128+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:04.244+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:04 compute-0 ceph-mon[75677]: pgmap v2631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:04 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:05.176+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:05.276+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:05 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:06.224+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:06.279+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:06 compute-0 ceph-mon[75677]: pgmap v2632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:06 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:07.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:07.328+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:07 compute-0 podman[320394]: 2025-11-24 21:09:07.878578312 +0000 UTC m=+0.103907380 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:09:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4667 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:07 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:07 compute-0 ceph-mon[75677]: pgmap v2633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:08.266+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:08.304+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:08 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:08 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4667 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:09.228+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:09.349+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:09:09.429 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:09:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:09:09.430 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:09:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:09:09.430 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:09:09 compute-0 sudo[320421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:09 compute-0 sudo[320421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:09 compute-0 sudo[320421]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:09 compute-0 sudo[320446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:09:09 compute-0 sudo[320446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:09 compute-0 sudo[320446]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:09 compute-0 sudo[320471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:09 compute-0 sudo[320471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:09 compute-0 sudo[320471]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:09 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:09 compute-0 ceph-mon[75677]: pgmap v2634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:10 compute-0 sudo[320496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:09:10 compute-0 sudo[320496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:10.201+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:10.339+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:10 compute-0 sudo[320496]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:09:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:09:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:09:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:09:10 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9549b928-1fa0-414d-b1eb-fbb9164cd9a7 does not exist
Nov 24 21:09:10 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 9d6c0e67-55d4-44a4-bacf-7e7f1ba54ad7 does not exist
Nov 24 21:09:10 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 82d69735-aeef-4cbf-859d-d1b1aa970521 does not exist
Nov 24 21:09:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:09:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:09:10 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:09:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:09:10 compute-0 sudo[320553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:10 compute-0 sudo[320553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:10 compute-0 sudo[320553]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:10 compute-0 sudo[320578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:09:10 compute-0 sudo[320578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:10 compute-0 sudo[320578]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:10 compute-0 sudo[320603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:10 compute-0 sudo[320603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:10 compute-0 sudo[320603]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:10 compute-0 sudo[320628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:09:10 compute-0 sudo[320628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:10 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:09:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:09:10 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:09:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:11.164+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.341704603 +0000 UTC m=+0.046903424 container create b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 21:09:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:11.346+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:11 compute-0 systemd[1]: Started libpod-conmon-b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29.scope.
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.319741892 +0000 UTC m=+0.024940763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:09:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.44188622 +0000 UTC m=+0.147085061 container init b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.451435748 +0000 UTC m=+0.156634569 container start b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.455445936 +0000 UTC m=+0.160644767 container attach b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:09:11 compute-0 priceless_morse[320710]: 167 167
Nov 24 21:09:11 compute-0 systemd[1]: libpod-b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29.scope: Deactivated successfully.
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.459487395 +0000 UTC m=+0.164686216 container died b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 24 21:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ed9c07ba35544922e3cef201c7978ebc39c3356b59332c3997a071c68267431-merged.mount: Deactivated successfully.
Nov 24 21:09:11 compute-0 podman[320693]: 2025-11-24 21:09:11.503096259 +0000 UTC m=+0.208295080 container remove b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:09:11 compute-0 systemd[1]: libpod-conmon-b0888084d18f6c6ab3a411f417fbf049ea1381b7b485ef136ecd04f652b6df29.scope: Deactivated successfully.
Nov 24 21:09:11 compute-0 podman[320734]: 2025-11-24 21:09:11.677836575 +0000 UTC m=+0.047942203 container create f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 21:09:11 compute-0 systemd[1]: Started libpod-conmon-f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842.scope.
Nov 24 21:09:11 compute-0 podman[320734]: 2025-11-24 21:09:11.655818362 +0000 UTC m=+0.025924020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:09:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7557aaa6f96e0c2d8c857f73750cacd7d63d166fbfb7f43c5f5898519b0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7557aaa6f96e0c2d8c857f73750cacd7d63d166fbfb7f43c5f5898519b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7557aaa6f96e0c2d8c857f73750cacd7d63d166fbfb7f43c5f5898519b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7557aaa6f96e0c2d8c857f73750cacd7d63d166fbfb7f43c5f5898519b0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7557aaa6f96e0c2d8c857f73750cacd7d63d166fbfb7f43c5f5898519b0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:11 compute-0 podman[320734]: 2025-11-24 21:09:11.781924468 +0000 UTC m=+0.152030116 container init f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 24 21:09:11 compute-0 podman[320734]: 2025-11-24 21:09:11.795265618 +0000 UTC m=+0.165371286 container start f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:09:11 compute-0 podman[320734]: 2025-11-24 21:09:11.802156713 +0000 UTC m=+0.172262371 container attach f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 21:09:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:11 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:11 compute-0 ceph-mon[75677]: pgmap v2635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:12.171+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:12.391+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:12 compute-0 friendly_mendel[320751]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:09:12 compute-0 friendly_mendel[320751]: --> relative data size: 1.0
Nov 24 21:09:12 compute-0 friendly_mendel[320751]: --> All data devices are unavailable
Nov 24 21:09:12 compute-0 systemd[1]: libpod-f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842.scope: Deactivated successfully.
Nov 24 21:09:12 compute-0 systemd[1]: libpod-f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842.scope: Consumed 1.138s CPU time.
Nov 24 21:09:12 compute-0 podman[320734]: 2025-11-24 21:09:12.973142357 +0000 UTC m=+1.343248025 container died f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0edf7557aaa6f96e0c2d8c857f73750cacd7d63d166fbfb7f43c5f5898519b0a-merged.mount: Deactivated successfully.
Nov 24 21:09:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:13 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:13 compute-0 podman[320734]: 2025-11-24 21:09:13.102267804 +0000 UTC m=+1.472373432 container remove f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 21:09:13 compute-0 systemd[1]: libpod-conmon-f5a425e3624be5b96229989bacdee5253c38e4fbe0e17961612b68c03542d842.scope: Deactivated successfully.
Nov 24 21:09:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:13.134+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:13 compute-0 sudo[320628]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:13 compute-0 sudo[320795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:13 compute-0 sudo[320795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:13 compute-0 sudo[320795]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:13 compute-0 sudo[320820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:09:13 compute-0 sudo[320820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:13 compute-0 sudo[320820]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:13 compute-0 sudo[320845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:13 compute-0 sudo[320845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:13.364+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:13 compute-0 sudo[320845]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:13 compute-0 sudo[320870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:09:13 compute-0 sudo[320870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:13 compute-0 podman[320935]: 2025-11-24 21:09:13.888763545 +0000 UTC m=+0.068827525 container create 5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:09:13 compute-0 systemd[1]: Started libpod-conmon-5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0.scope.
Nov 24 21:09:13 compute-0 podman[320935]: 2025-11-24 21:09:13.862109266 +0000 UTC m=+0.042173226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:09:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:09:14 compute-0 podman[320935]: 2025-11-24 21:09:14.012105966 +0000 UTC m=+0.192169946 container init 5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:09:14 compute-0 podman[320935]: 2025-11-24 21:09:14.022222598 +0000 UTC m=+0.202286538 container start 5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:09:14 compute-0 quirky_golick[320951]: 167 167
Nov 24 21:09:14 compute-0 systemd[1]: libpod-5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0.scope: Deactivated successfully.
Nov 24 21:09:14 compute-0 podman[320935]: 2025-11-24 21:09:14.029507364 +0000 UTC m=+0.209571324 container attach 5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:09:14 compute-0 podman[320935]: 2025-11-24 21:09:14.029975347 +0000 UTC m=+0.210039287 container died 5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 21:09:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:14 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:14 compute-0 ceph-mon[75677]: pgmap v2636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede70db2559936a72df8fe15d483a9e64142922a18eb45dee782f2b54e355d86-merged.mount: Deactivated successfully.
Nov 24 21:09:14 compute-0 podman[320935]: 2025-11-24 21:09:14.147900972 +0000 UTC m=+0.327964922 container remove 5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 21:09:14 compute-0 systemd[1]: libpod-conmon-5d5051650571e5d51c7774b9c55fb5f80f63f70d9947c56046291438a43b4cf0.scope: Deactivated successfully.
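[editor's note] The create → init → start → attach → died → remove sequence above is cephadm running a short-lived helper container (quirky_golick apparently just probes the container's ceph uid/gid, printing "167 167") and tearing it down within a few hundred milliseconds. One way to watch these one-shot lifecycles as structured data is `podman events`; the sketch below is illustrative only, not part of cephadm, and assumes a podman version whose event JSON carries Status/Name/Image fields (names may vary by release).

```python
# Sketch: stream podman lifecycle events (create/died/remove) as JSON lines.
# Assumes `podman events --format json` is available; filters are illustrative.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "event=create",
     "--filter", "event=died",
     "--filter", "event=remove"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)  # one JSON object per event line
    # Field names ("Status", "Name", "Image") observed on recent podman;
    # treat them as an assumption and adjust for your version.
    print(ev.get("Status"), ev.get("Name"), ev.get("Image"))
```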
Nov 24 21:09:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:14.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:14.353+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
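[editor's note] osd.0 and osd.1 keep re-reporting the same blocked operations every second (osd.0's oldest is an omap read against rbd_trash_purge_schedule in the vms pool, osd.1's a watch ping in default.rgw.log), so these WRN lines grow rather than resolve. When triaging SLOW_OPS like this, a usual next step is dumping the in-flight ops from each OSD's admin socket; a minimal sketch, assuming the `ceph` CLI can reach the admin sockets from this host (on a cephadm deployment that typically means running inside `cephadm shell`):

```python
# Sketch: summarize in-flight ops per OSD via the admin socket.
# `ceph daemon osd.N dump_ops_in_flight` is a standard admin-socket command
# returning {"ops": [...], "num_ops": N}; each op carries a description and age.
import json
import subprocess

for osd in ("osd.0", "osd.1"):
    raw = subprocess.run(
        ["ceph", "daemon", osd, "dump_ops_in_flight"],
        check=True, capture_output=True, text=True,
    ).stdout
    ops = json.loads(raw)["ops"]
    print(osd, "in-flight:", len(ops))
    for op in ops[:3]:  # peek at the oldest few entries
        print("  ", op.get("age"), str(op.get("description", ""))[:80])
```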
Nov 24 21:09:14 compute-0 podman[320975]: 2025-11-24 21:09:14.377192287 +0000 UTC m=+0.095522203 container create da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:09:14 compute-0 podman[320975]: 2025-11-24 21:09:14.311035906 +0000 UTC m=+0.029365832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:09:14 compute-0 systemd[1]: Started libpod-conmon-da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db.scope.
Nov 24 21:09:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987f78b52b9ea92d1619581b73fb3b1ab6775de33c61cb0b06f495889af51f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987f78b52b9ea92d1619581b73fb3b1ab6775de33c61cb0b06f495889af51f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987f78b52b9ea92d1619581b73fb3b1ab6775de33c61cb0b06f495889af51f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987f78b52b9ea92d1619581b73fb3b1ab6775de33c61cb0b06f495889af51f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:14 compute-0 podman[320975]: 2025-11-24 21:09:14.569155566 +0000 UTC m=+0.287485522 container init da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 21:09:14 compute-0 podman[320975]: 2025-11-24 21:09:14.576509244 +0000 UTC m=+0.294839190 container start da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 21:09:14 compute-0 podman[320975]: 2025-11-24 21:09:14.587792478 +0000 UTC m=+0.306122474 container attach da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:09:15 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:15.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:15.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:15 compute-0 charming_neumann[320991]: {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:     "0": [
Nov 24 21:09:15 compute-0 charming_neumann[320991]:         {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "devices": [
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "/dev/loop3"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             ],
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_name": "ceph_lv0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_size": "21470642176",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "name": "ceph_lv0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "tags": {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cluster_name": "ceph",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.crush_device_class": "",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.encrypted": "0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osd_id": "0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.type": "block",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.vdo": "0"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             },
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "type": "block",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "vg_name": "ceph_vg0"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:         }
Nov 24 21:09:15 compute-0 charming_neumann[320991]:     ],
Nov 24 21:09:15 compute-0 charming_neumann[320991]:     "1": [
Nov 24 21:09:15 compute-0 charming_neumann[320991]:         {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "devices": [
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "/dev/loop4"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             ],
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_name": "ceph_lv1",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_size": "21470642176",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "name": "ceph_lv1",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "tags": {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cluster_name": "ceph",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.crush_device_class": "",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.encrypted": "0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osd_id": "1",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.type": "block",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.vdo": "0"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             },
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "type": "block",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "vg_name": "ceph_vg1"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:         }
Nov 24 21:09:15 compute-0 charming_neumann[320991]:     ],
Nov 24 21:09:15 compute-0 charming_neumann[320991]:     "2": [
Nov 24 21:09:15 compute-0 charming_neumann[320991]:         {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "devices": [
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "/dev/loop5"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             ],
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_name": "ceph_lv2",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_size": "21470642176",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "name": "ceph_lv2",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "tags": {
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.cluster_name": "ceph",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.crush_device_class": "",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.encrypted": "0",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osd_id": "2",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.type": "block",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:                 "ceph.vdo": "0"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             },
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "type": "block",
Nov 24 21:09:15 compute-0 charming_neumann[320991]:             "vg_name": "ceph_vg2"
Nov 24 21:09:15 compute-0 charming_neumann[320991]:         }
Nov 24 21:09:15 compute-0 charming_neumann[320991]:     ]
Nov 24 21:09:15 compute-0 charming_neumann[320991]: }
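[editor's note] The JSON block above is the complete `ceph-volume lvm list --format json` payload that cephadm collected through the sudo → podman chain: a map from OSD id ("0", "1", "2") to a list of logical volumes, each with its backing loop device, LV path, and the `ceph.*` LV tags. A minimal sketch of re-running and consuming it, assuming the same cephadm wrapper invocation seen in the log (the --image and --timeout flags from the logged command line are omitted here for brevity):

```python
# Sketch: run cephadm's ceph-volume wrapper and map OSD id -> (device, LV path).
import json
import subprocess

fsid = "05e060a3-406b-57f0-89d2-ec35f5b09305"
out = subprocess.run(
    ["cephadm", "ceph-volume", "--fsid", fsid, "--",
     "lvm", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

lvm = json.loads(out)  # {"0": [ {...lv...} ], "1": [...], "2": [...]}
for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['devices'][0]} -> {lv['lv_path']} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']})")
```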
Nov 24 21:09:15 compute-0 systemd[1]: libpod-da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db.scope: Deactivated successfully.
Nov 24 21:09:15 compute-0 podman[320975]: 2025-11-24 21:09:15.414731258 +0000 UTC m=+1.133061164 container died da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 21:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-987f78b52b9ea92d1619581b73fb3b1ab6775de33c61cb0b06f495889af51f64-merged.mount: Deactivated successfully.
Nov 24 21:09:15 compute-0 podman[320975]: 2025-11-24 21:09:15.485579686 +0000 UTC m=+1.203909592 container remove da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 21:09:15 compute-0 systemd[1]: libpod-conmon-da85315c18ec6dee5f685b0ad532f386d2cc2f3b4b0d79a8880799ba380fd2db.scope: Deactivated successfully.
Nov 24 21:09:15 compute-0 sudo[320870]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:15 compute-0 sudo[321013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:15 compute-0 sudo[321013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:15 compute-0 sudo[321013]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:15 compute-0 sudo[321038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:09:15 compute-0 sudo[321038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:15 compute-0 sudo[321038]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:15 compute-0 sudo[321063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:15 compute-0 sudo[321063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:15 compute-0 sudo[321063]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:15 compute-0 sudo[321088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:09:15 compute-0 sudo[321088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:16 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:16 compute-0 ceph-mon[75677]: pgmap v2637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:16.186+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.343638172 +0000 UTC m=+0.059787470 container create 13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 21:09:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:16.380+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:16 compute-0 systemd[1]: Started libpod-conmon-13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d.scope.
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.314521979 +0000 UTC m=+0.030671327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:09:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:09:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:09:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/62044254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:09:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:09:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/62044254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.445870256 +0000 UTC m=+0.162019604 container init 13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.457653453 +0000 UTC m=+0.173802711 container start 13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.462195715 +0000 UTC m=+0.178345033 container attach 13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pascal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 21:09:16 compute-0 peaceful_pascal[321170]: 167 167
Nov 24 21:09:16 compute-0 systemd[1]: libpod-13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d.scope: Deactivated successfully.
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.468176807 +0000 UTC m=+0.184326085 container died 13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 21:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a5de6b36d91f042449d7874f2a7fd7365f868aa24d9d25e6ad95571e393270-merged.mount: Deactivated successfully.
Nov 24 21:09:16 compute-0 podman[321153]: 2025-11-24 21:09:16.545416456 +0000 UTC m=+0.261565724 container remove 13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 21:09:16 compute-0 systemd[1]: libpod-conmon-13d9b321499a0b480d6a6fe74d54a186514c2879384185b4ba05edf80c3ec86d.scope: Deactivated successfully.
Nov 24 21:09:16 compute-0 podman[321194]: 2025-11-24 21:09:16.745070623 +0000 UTC m=+0.049665388 container create 9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:09:16 compute-0 systemd[1]: Started libpod-conmon-9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6.scope.
Nov 24 21:09:16 compute-0 podman[321194]: 2025-11-24 21:09:16.723464041 +0000 UTC m=+0.028058816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:09:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65eea5b9c27a75eb2618ab265ece9732a895b6d7df71810c03c1504efeec20e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65eea5b9c27a75eb2618ab265ece9732a895b6d7df71810c03c1504efeec20e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65eea5b9c27a75eb2618ab265ece9732a895b6d7df71810c03c1504efeec20e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65eea5b9c27a75eb2618ab265ece9732a895b6d7df71810c03c1504efeec20e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:09:16 compute-0 podman[321194]: 2025-11-24 21:09:16.846231888 +0000 UTC m=+0.150826723 container init 9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 21:09:16 compute-0 podman[321194]: 2025-11-24 21:09:16.862895685 +0000 UTC m=+0.167490480 container start 9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:09:16 compute-0 podman[321194]: 2025-11-24 21:09:16.867353816 +0000 UTC m=+0.171948651 container attach 9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:09:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:17 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/62044254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:09:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/62044254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:09:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:17.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4672 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:17.405+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:17 compute-0 romantic_swartz[321210]: {
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "osd_id": 2,
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "type": "bluestore"
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:     },
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "osd_id": 1,
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "type": "bluestore"
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:     },
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "osd_id": 0,
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:         "type": "bluestore"
Nov 24 21:09:17 compute-0 romantic_swartz[321210]:     }
Nov 24 21:09:17 compute-0 romantic_swartz[321210]: }
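[editor's note] `ceph-volume raw list` (the payload just above) reports the same three bluestore OSDs keyed by osd_uuid with their device-mapper paths, so the two listings can be joined on `ceph.osd_fsid`/`osd_uuid` to confirm every OSD resolves to the expected LV (e.g. 720ccdfc-… → osd.2 → /dev/mapper/ceph_vg2-ceph_lv2). A short sketch under the same assumptions as the previous one, with the two payloads already loaded as `lvm` and `raw`:

```python
# Sketch: join `lvm list` and `raw list` output on the OSD fsid/uuid.
def join_listings(lvm: dict, raw: dict) -> list[tuple[int, str, str]]:
    """Return (osd_id, lv_path, raw device) triples; KeyError on a mismatch."""
    by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}
    rows = []
    for osd_uuid, entry in raw.items():
        lv = by_fsid[osd_uuid]  # e.g. "720ccdfc-..." -> ceph_vg2/ceph_lv2
        rows.append((entry["osd_id"], lv["lv_path"], entry["device"]))
    return sorted(rows)
```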
Nov 24 21:09:17 compute-0 systemd[1]: libpod-9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6.scope: Deactivated successfully.
Nov 24 21:09:17 compute-0 systemd[1]: libpod-9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6.scope: Consumed 1.123s CPU time.
Nov 24 21:09:17 compute-0 podman[321194]: 2025-11-24 21:09:17.98150383 +0000 UTC m=+1.286098595 container died 9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-65eea5b9c27a75eb2618ab265ece9732a895b6d7df71810c03c1504efeec20e8-merged.mount: Deactivated successfully.
Nov 24 21:09:18 compute-0 podman[321194]: 2025-11-24 21:09:18.065542583 +0000 UTC m=+1.370137368 container remove 9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:09:18 compute-0 systemd[1]: libpod-conmon-9cabec175b21a20c847353edd788f1acdd1d2f5e6e73a0f7271206c3d90884a6.scope: Deactivated successfully.
Nov 24 21:09:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:18 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:18 compute-0 ceph-mon[75677]: pgmap v2638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:18 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4672 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:18 compute-0 sudo[321088]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:09:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:09:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:09:18 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:09:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 013d3c49-b81a-4705-9fbd-e3d11e51a610 does not exist
Nov 24 21:09:18 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8eabaeee-3777-4bd8-b5e6-d19217c3a197 does not exist
Nov 24 21:09:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:18.169+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:18 compute-0 sudo[321255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:09:18 compute-0 sudo[321255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:18 compute-0 sudo[321255]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:18 compute-0 sudo[321280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:09:18 compute-0 sudo[321280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:09:18 compute-0 sudo[321280]: pam_unix(sudo:session): session closed for user root
Nov 24 21:09:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:18.363+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:19 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:09:19 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:09:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:19.208+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:19.385+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:20 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:20 compute-0 ceph-mon[75677]: pgmap v2639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:20.159+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.163228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018560163310, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1546, "num_deletes": 470, "total_data_size": 1525481, "memory_usage": 1563904, "flush_reason": "Manual Compaction"}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018560179809, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1487874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79196, "largest_seqno": 80741, "table_properties": {"data_size": 1481125, "index_size": 3182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22871, "raw_average_key_size": 23, "raw_value_size": 1464417, "raw_average_value_size": 1488, "num_data_blocks": 139, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 470, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018477, "oldest_key_time": 1764018477, "file_creation_time": 1764018560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 16627 microseconds, and 8052 cpu microseconds.
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.179863) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1487874 bytes OK
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.179890) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.182163) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.182184) EVENT_LOG_v1 {"time_micros": 1764018560182177, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.182210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1517333, prev total WAL file size 1517333, number of live WAL files 2.
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.183162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1453KB)], [167(10015KB)]
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018560183217, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 11744211, "oldest_snapshot_seqno": -1}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 13397 keys, 10214907 bytes, temperature: kUnknown
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018560250210, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10214907, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10141460, "index_size": 38838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 369778, "raw_average_key_size": 27, "raw_value_size": 9911670, "raw_average_value_size": 739, "num_data_blocks": 1410, "num_entries": 13397, "num_filter_entries": 13397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.250632) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10214907 bytes
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.253123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.8 rd, 152.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(14.8) write-amplify(6.9) OK, records in: 14349, records dropped: 952 output_compression: NoCompression
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.253140) EVENT_LOG_v1 {"time_micros": 1764018560253131, "job": 104, "event": "compaction_finished", "compaction_time_micros": 67169, "compaction_time_cpu_micros": 26509, "output_level": 6, "num_output_files": 1, "total_output_size": 10214907, "num_input_records": 14349, "num_output_records": 13397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018560253769, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018560255819, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.183102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.255943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.255949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.255951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.255954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:09:20 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:09:20.255956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
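
[annotation] In the RocksDB block above, JOB 103 flushes the mon's memtable to L0 table #169 and JOB 104 manually compacts it together with L6 table #167 into #170. The EVENT_LOG_v1 payloads are plain JSON after the prefix, and the amplification figures in the JOB 104 summary follow directly from them. A sketch re-deriving write-amplify(6.9) and read-write-amplify(14.8), with the JSON trimmed to the keys actually used:

    import json

    # EVENT_LOG_v1 payloads copied from JOB 103/104 above (trimmed to the used keys).
    flush = json.loads('{"job": 103, "event": "table_file_creation", "file_size": 1487874}')
    compaction = json.loads('{"job": 104, "event": "compaction_finished", '
                            '"total_output_size": 10214907}')
    input_data_size = 11744211  # from the JOB 104 "compaction_started" event

    new_data = flush["file_size"]            # bytes newly flushed into L0
    written = compaction["total_output_size"]  # bytes rewritten at L6

    # Matches the summary line: write-amplify(6.9), read-write-amplify(14.8)
    print(f"write-amplify      ~ {written / new_data:.1f}")
    print(f"read-write-amplify ~ {(input_data_size + written) / new_data:.1f}")

So roughly 1.4 MiB of new mon data caused about 10 MiB of reads and 10 MiB of writes, which is expected for a manual full-level compaction of a small store.
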
Nov 24 21:09:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:20.368+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:21 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:21.205+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:21.412+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:22.252+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:22.376+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4682 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:22 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:22 compute-0 ceph-mon[75677]: pgmap v2640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
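
[annotation] The SLOW_OPS figure climbs by exactly 5 s per 5 s health tick (4682 → 4687 → 4692 → 4697 in the updates below), i.e. the oldest op is making no progress at all. Working back from the update above, assuming the journal timestamp and the blocked-for figure refer to the same instant:

    from datetime import datetime, timedelta

    # From the health check update above: oldest op blocked for 4682 s at 21:09:22.
    seen = datetime(2025, 11, 24, 21, 9, 22)
    blocked = timedelta(seconds=4682)
    print("oldest slow op started around", seen - blocked)  # ~ 2025-11-24 19:51:20

That puts the onset near 19:51, well before this excerpt begins.
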
Nov 24 21:09:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:23.203+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:23.411+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:23 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:23 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:23 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4682 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:23 compute-0 ceph-mon[75677]: pgmap v2641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:24.157+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:24.420+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:09:24
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta']
Nov 24 21:09:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
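
[annotation] The balancer pass above runs in upmap mode with max misplaced 5% over all eleven pools and prepares no changes; reading "0/10" as zero optimizations out of a per-round cap of ten is an inference from the log shape, not verified against the module source. A sketch tallying these rounds from journal text on stdin:

    import re
    import sys

    # "prepared 0/10 changes" -> (prepared, cap); cap is assumed to be the
    # per-round optimization limit.
    PREPARED = re.compile(r"\[balancer INFO root\] prepared (\d+)/(\d+) changes")

    for line in sys.stdin:
        m = PREPARED.search(line)
        if m:
            prepared, cap = map(int, m.groups())
            state = "no-op (already balanced)" if prepared == 0 else f"{prepared} upmap changes"
            print(f"balancer round: {state} (cap {cap})")
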
Nov 24 21:09:24 compute-0 podman[321305]: 2025-11-24 21:09:24.846353175 +0000 UTC m=+0.072302338 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
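
[annotation] podman logs one health_status event per healthcheck run, with the full container config_data inlined; here the check is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. The same state can be read back from the container itself. A sketch, assuming podman is on PATH and using the container name from the line above:

    import json
    import subprocess

    # Queries the health state that journald logged above (Status, FailingStreak).
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(out.stdout)
    print(health["Status"], "failing streak:", health["FailingStreak"])
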
Nov 24 21:09:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:25 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:25.134+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 21:09:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:25.402+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:26.149+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:26 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:26 compute-0 ceph-mon[75677]: pgmap v2642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 24 21:09:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:26.379+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:27.160+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:27 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:27 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:27.387+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4687 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
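
[annotation] _set_new_cache_sizes is the mon periodically re-tuning its cache budget; the three allocations in the line above are whole multiples of 4 MiB and sum to just under the cache_size figure, the remainder presumably being headroom. The arithmetic, using the numbers as logged:

    cache_size = 1020054731             # bytes, from the mon line above (~973 MiB)
    inc, full, kv = 343932928, 348127232, 318767104

    total = inc + full + kv
    print(f"allocated {total} of {cache_size} bytes ({total / cache_size:.1%})")
    # Each allocation is a whole number of 4 MiB chunks: 82, 83 and 76 of them.
    for name, v in {"inc": inc, "full": full, "kv": kv}.items():
        print(name, v / 2**22, "x 4 MiB")
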
Nov 24 21:09:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:28.208+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:28.391+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:28 compute-0 ceph-mon[75677]: pgmap v2643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:28 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:28 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4687 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:29.219+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:29.357+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:29 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:30.246+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:30.324+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:30 compute-0 ceph-mon[75677]: pgmap v2644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:30 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:31.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:31.337+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:32 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:32 compute-0 ceph-mon[75677]: pgmap v2645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:32.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:32.383+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4692 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:33 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:33 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4692 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:33.287+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:33.368+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:33 compute-0 podman[321326]: 2025-11-24 21:09:33.882101174 +0000 UTC m=+0.102041120 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 21:09:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:34 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:34 compute-0 ceph-mon[75677]: pgmap v2646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:34 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:34.309+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:34.338+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:35.325+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:35.329+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:35 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:09:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
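
[annotation] Each pool's "pg target" above is its capacity ratio times its bias times a constant 300, consistent with mon_target_pg_per_osd (default 100) times three OSDs; that OSD count is an assumption, since only osd.0 and osd.1 appear in this excerpt and the third may live on another host. A sketch reproducing the logged targets (up to float printing):

    # Reproduces the "pg target" figures logged above.
    # Assumption: multiplier 300 = mon_target_pg_per_osd (100) x 3 OSDs.
    TARGET_PGS = 300

    pools = {  # name: (capacity ratio, bias), copied from the lines above
        "vms":                (0.0008637525843263658, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * TARGET_PGS}")
    # The target is then quantized to a power of two, and pg_num stays at its
    # current value unless the ideal deviates past the autoscaler's threshold,
    # which is why every pool above reads "quantized to N (current N)".
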
Nov 24 21:09:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:36.369+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:36.369+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:36 compute-0 ceph-mon[75677]: pgmap v2647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 21:09:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:36 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 21:09:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:37.344+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:37.415+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4697 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:38 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:38 compute-0 ceph-mon[75677]: pgmap v2648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 24 21:09:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:38.361+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:38.366+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:38 compute-0 podman[321346]: 2025-11-24 21:09:38.920031693 +0000 UTC m=+0.144152252 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 21:09:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:39 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:39 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4697 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:39.323+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:39.396+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:40.297+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:40.356+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:40 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:40 compute-0 ceph-mon[75677]: pgmap v2649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:40 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:09:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:09:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:41.259+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:41.408+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:41 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:42.258+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:42.431+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:42 compute-0 ceph-mon[75677]: pgmap v2650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:42 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4702 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:43.240+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:43.423+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:43 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:43 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4702 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:44.228+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:44.402+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:44 compute-0 ceph-mon[75677]: pgmap v2651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:44 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:45.244+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:45.431+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:45 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:46.200+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:46.451+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:47 compute-0 ceph-mon[75677]: pgmap v2652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:47 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:47.242+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:47.437+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4707 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:48 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:48 compute-0 ceph-mon[75677]: pgmap v2653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:48.209+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:48.390+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:49.193+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:49 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:49 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:49 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4707 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:49 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:49.425+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:50.241+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:50 compute-0 ceph-mon[75677]: pgmap v2654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:50 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:50.436+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:51.237+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:51 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:51.411+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:52.200+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:52 compute-0 ceph-mon[75677]: pgmap v2655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:52 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:52.415+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:53.195+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:53.377+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:53 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:54.161+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:54.415+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:09:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:09:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:54 compute-0 ceph-mon[75677]: pgmap v2656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:54 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:55.204+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:55.414+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:55 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:55 compute-0 podman[321372]: 2025-11-24 21:09:55.84252599 +0000 UTC m=+0.064372365 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 21:09:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:56.192+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:56.433+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:56 compute-0 ceph-mon[75677]: pgmap v2657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:56 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:57.212+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:57 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:57.389+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4717 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:57 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:09:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:58.189+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:58.392+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:58 compute-0 ceph-mon[75677]: pgmap v2658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:58 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:58 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4717 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:09:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:09:59.163+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:09:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:09:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:09:59.397+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:09:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:09:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:09:59 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:00.123+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:00.442+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:00 compute-0 ceph-mon[75677]: pgmap v2659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:00 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:01.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:01 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:01.427+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:01 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:02.124+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:02.457+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:02 compute-0 ceph-mon[75677]: pgmap v2660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:02 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:03.090+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:03.435+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:03 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:03 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4722 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:04.072+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:04.476+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:04 compute-0 ceph-mon[75677]: pgmap v2661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:04 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:04 compute-0 podman[321392]: 2025-11-24 21:10:04.875920115 +0000 UTC m=+0.099272565 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 21:10:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:05.096+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:05.455+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:05 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:06.128+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:06.427+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:06 compute-0 ceph-mon[75677]: pgmap v2662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:06 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:07.178+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:07.401+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:07 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:08.178+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:08.413+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:08 compute-0 ceph-mon[75677]: pgmap v2663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:08 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:08 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4727 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:09.141+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:10:09.430 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:10:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:10:09.430 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:10:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:10:09.431 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:10:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:09.461+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:09 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:09 compute-0 ceph-mon[75677]: pgmap v2664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:09 compute-0 podman[321412]: 2025-11-24 21:10:09.969289838 +0000 UTC m=+0.192799993 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:10.115+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:10.440+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:10 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:11.105+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:11.449+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:11 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:11 compute-0 ceph-mon[75677]: pgmap v2665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:12.147+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:12.420+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:13 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:13 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4732 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:13.190+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:13.441+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:14 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:14 compute-0 ceph-mon[75677]: pgmap v2666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:14.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:14.456+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:15 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:15.195+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:15.440+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:16 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:16 compute-0 ceph-mon[75677]: pgmap v2667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:16.216+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:16.431+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:10:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693363087' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:10:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:10:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693363087' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:10:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:17 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/693363087' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:10:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/693363087' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:10:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:17.170+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:17.453+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:18.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:18 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4737 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:18 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:18 compute-0 ceph-mon[75677]: pgmap v2668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:18 compute-0 sudo[321438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:18 compute-0 sudo[321438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:18 compute-0 sudo[321438]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:18 compute-0 sudo[321463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:10:18 compute-0 sudo[321463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:18 compute-0 sudo[321463]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:18.478+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:18 compute-0 sudo[321488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:18 compute-0 sudo[321488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:18 compute-0 sudo[321488]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:18 compute-0 sudo[321513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:10:18 compute-0 sudo[321513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:19.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:19 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:19 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4737 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:19 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:19 compute-0 sudo[321513]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:10:19 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:10:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:10:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:10:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:10:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:10:19 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a3d28098-adae-49b8-a4f4-11cf9ff4d48d does not exist
Nov 24 21:10:19 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ce6e9f30-ebaf-4164-bc02-d8e58f234cf0 does not exist
Nov 24 21:10:19 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e09e8799-575e-4134-84f3-9744363e5c79 does not exist
Nov 24 21:10:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:10:19 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:10:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:10:19 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:10:19 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:10:19 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:10:19 compute-0 sudo[321569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:19 compute-0 sudo[321569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:19 compute-0 sudo[321569]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:19 compute-0 sudo[321594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:10:19 compute-0 sudo[321594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:19 compute-0 sudo[321594]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:19.496+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:19 compute-0 sudo[321619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:19 compute-0 sudo[321619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:19 compute-0 sudo[321619]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:19 compute-0 sudo[321644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:10:19 compute-0 sudo[321644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:20.02593251 +0000 UTC m=+0.070435199 container create 93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:10:20 compute-0 systemd[1]: Started libpod-conmon-93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937.scope.
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:19.993011293 +0000 UTC m=+0.037513972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:10:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:20.139502287 +0000 UTC m=+0.184005026 container init 93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:20.148000776 +0000 UTC m=+0.192503465 container start 93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:20.153755881 +0000 UTC m=+0.198258590 container attach 93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_burnell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 21:10:20 compute-0 bold_burnell[321725]: 167 167
Nov 24 21:10:20 compute-0 systemd[1]: libpod-93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937.scope: Deactivated successfully.
Nov 24 21:10:20 compute-0 conmon[321725]: conmon 93b6bc46eebef7089867 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937.scope/container/memory.events
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:20.156978758 +0000 UTC m=+0.201481417 container died 93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 21:10:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:20.167+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6218b34093f78ff1138f5d5ae6795931754ea467f9126be8588c0c284b518878-merged.mount: Deactivated successfully.
Nov 24 21:10:20 compute-0 ceph-mon[75677]: pgmap v2669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:10:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:10:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:10:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:10:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:10:20 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:10:20 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:20 compute-0 podman[321709]: 2025-11-24 21:10:20.211745803 +0000 UTC m=+0.256248452 container remove 93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_burnell, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:10:20 compute-0 systemd[1]: libpod-conmon-93b6bc46eebef70898673d9766f5a9bb1a5342bd96f57e3556a081fd4de1f937.scope: Deactivated successfully.
Nov 24 21:10:20 compute-0 podman[321749]: 2025-11-24 21:10:20.383046706 +0000 UTC m=+0.049397382 container create beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:10:20 compute-0 systemd[1]: Started libpod-conmon-beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e.scope.
Nov 24 21:10:20 compute-0 podman[321749]: 2025-11-24 21:10:20.359928894 +0000 UTC m=+0.026279580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:10:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ee683799da7e216cd67f5725649aae3d57a070c710b242e657fcaaa1396c0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ee683799da7e216cd67f5725649aae3d57a070c710b242e657fcaaa1396c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ee683799da7e216cd67f5725649aae3d57a070c710b242e657fcaaa1396c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ee683799da7e216cd67f5725649aae3d57a070c710b242e657fcaaa1396c0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ee683799da7e216cd67f5725649aae3d57a070c710b242e657fcaaa1396c0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:20 compute-0 podman[321749]: 2025-11-24 21:10:20.484812956 +0000 UTC m=+0.151163662 container init beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:10:20 compute-0 podman[321749]: 2025-11-24 21:10:20.499113461 +0000 UTC m=+0.165464137 container start beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 24 21:10:20 compute-0 podman[321749]: 2025-11-24 21:10:20.505109183 +0000 UTC m=+0.171460029 container attach beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:10:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:20.528+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:21.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:21 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:21.567+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:21 compute-0 nervous_gould[321765]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:10:21 compute-0 nervous_gould[321765]: --> relative data size: 1.0
Nov 24 21:10:21 compute-0 nervous_gould[321765]: --> All data devices are unavailable
Nov 24 21:10:21 compute-0 systemd[1]: libpod-beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e.scope: Deactivated successfully.
Nov 24 21:10:21 compute-0 systemd[1]: libpod-beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e.scope: Consumed 1.184s CPU time.
Nov 24 21:10:21 compute-0 podman[321749]: 2025-11-24 21:10:21.752863615 +0000 UTC m=+1.419214301 container died beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 21:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ee683799da7e216cd67f5725649aae3d57a070c710b242e657fcaaa1396c0a-merged.mount: Deactivated successfully.
Nov 24 21:10:21 compute-0 podman[321749]: 2025-11-24 21:10:21.838384658 +0000 UTC m=+1.504735344 container remove beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 21:10:21 compute-0 systemd[1]: libpod-conmon-beffd394433eac55f2c1cf4c069058700bb3b189bfcafd207f414dd765ce413e.scope: Deactivated successfully.
Nov 24 21:10:21 compute-0 sudo[321644]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:21 compute-0 sudo[321808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:21 compute-0 sudo[321808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:21 compute-0 sudo[321808]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:22 compute-0 sudo[321833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:10:22 compute-0 sudo[321833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:22 compute-0 sudo[321833]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:22 compute-0 sudo[321858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:22 compute-0 sudo[321858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:22 compute-0 sudo[321858]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:22.144+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:22 compute-0 sudo[321883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:10:22 compute-0 sudo[321883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.577847341 +0000 UTC m=+0.025577980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:10:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:22.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.722170758 +0000 UTC m=+0.169901337 container create dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 21:10:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:22 compute-0 ceph-mon[75677]: pgmap v2670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:22 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:22 compute-0 systemd[1]: Started libpod-conmon-dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59.scope.
Nov 24 21:10:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.839764224 +0000 UTC m=+0.287494803 container init dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.854877661 +0000 UTC m=+0.302608260 container start dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.860340128 +0000 UTC m=+0.308070687 container attach dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:10:22 compute-0 sweet_visvesvaraya[321965]: 167 167
Nov 24 21:10:22 compute-0 systemd[1]: libpod-dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59.scope: Deactivated successfully.
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.862286741 +0000 UTC m=+0.310017300 container died dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 21:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7438eb8a2cd6dd73cb6b5577a254befd3335d52ec843d583b17f011169bc6e3-merged.mount: Deactivated successfully.
Nov 24 21:10:22 compute-0 podman[321948]: 2025-11-24 21:10:22.916247524 +0000 UTC m=+0.363978073 container remove dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 21:10:22 compute-0 systemd[1]: libpod-conmon-dc7b7fa0520e0d3407d96dcdb26e6da5d8efa9dd01fb0c8aa6e40515bd2c0d59.scope: Deactivated successfully.
Nov 24 21:10:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:23.095+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:23 compute-0 podman[321988]: 2025-11-24 21:10:23.144259215 +0000 UTC m=+0.064449527 container create 24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 24 21:10:23 compute-0 systemd[1]: Started libpod-conmon-24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe.scope.
Nov 24 21:10:23 compute-0 podman[321988]: 2025-11-24 21:10:23.109508148 +0000 UTC m=+0.029698540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:10:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d32da85d8f749c728aa4b536c6aeca0f5ef58318fc6a8c8e5c003b72c4ee64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d32da85d8f749c728aa4b536c6aeca0f5ef58318fc6a8c8e5c003b72c4ee64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d32da85d8f749c728aa4b536c6aeca0f5ef58318fc6a8c8e5c003b72c4ee64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d32da85d8f749c728aa4b536c6aeca0f5ef58318fc6a8c8e5c003b72c4ee64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:23 compute-0 podman[321988]: 2025-11-24 21:10:23.257068272 +0000 UTC m=+0.177258594 container init 24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 21:10:23 compute-0 podman[321988]: 2025-11-24 21:10:23.265756706 +0000 UTC m=+0.185947038 container start 24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:10:23 compute-0 podman[321988]: 2025-11-24 21:10:23.279706012 +0000 UTC m=+0.199896464 container attach 24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:10:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:23.715+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:23 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:24.079+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:24 compute-0 cranky_clarke[322004]: {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:     "0": [
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:         {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "devices": [
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "/dev/loop3"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             ],
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_name": "ceph_lv0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_size": "21470642176",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "name": "ceph_lv0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "tags": {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cluster_name": "ceph",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.crush_device_class": "",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.encrypted": "0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osd_id": "0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.type": "block",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.vdo": "0"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             },
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "type": "block",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "vg_name": "ceph_vg0"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:         }
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:     ],
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:     "1": [
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:         {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "devices": [
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "/dev/loop4"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             ],
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_name": "ceph_lv1",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_size": "21470642176",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "name": "ceph_lv1",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "tags": {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cluster_name": "ceph",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.crush_device_class": "",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.encrypted": "0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osd_id": "1",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.type": "block",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.vdo": "0"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             },
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "type": "block",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "vg_name": "ceph_vg1"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:         }
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:     ],
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:     "2": [
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:         {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "devices": [
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "/dev/loop5"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             ],
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_name": "ceph_lv2",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_size": "21470642176",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "name": "ceph_lv2",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "tags": {
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.cluster_name": "ceph",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.crush_device_class": "",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.encrypted": "0",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osd_id": "2",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.type": "block",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:                 "ceph.vdo": "0"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             },
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "type": "block",
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:             "vg_name": "ceph_vg2"
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:         }
Nov 24 21:10:24 compute-0 cranky_clarke[322004]:     ]
Nov 24 21:10:24 compute-0 cranky_clarke[322004]: }
Nov 24 21:10:24 compute-0 systemd[1]: libpod-24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe.scope: Deactivated successfully.
Nov 24 21:10:24 compute-0 podman[321988]: 2025-11-24 21:10:24.153932384 +0000 UTC m=+1.074122696 container died 24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d32da85d8f749c728aa4b536c6aeca0f5ef58318fc6a8c8e5c003b72c4ee64-merged.mount: Deactivated successfully.
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:10:24
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups']
Nov 24 21:10:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:10:24 compute-0 podman[321988]: 2025-11-24 21:10:24.688451759 +0000 UTC m=+1.608642061 container remove 24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 21:10:24 compute-0 systemd[1]: libpod-conmon-24349b7fb614e2f664b5401c0375a535b378101182a1559e73760fb3cd8921fe.scope: Deactivated successfully.
Nov 24 21:10:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:24.702+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:24 compute-0 sudo[321883]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:24 compute-0 sudo[322027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:24 compute-0 ceph-mon[75677]: pgmap v2671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:24 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:24 compute-0 sudo[322027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:24 compute-0 sudo[322027]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:24 compute-0 sudo[322052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:10:24 compute-0 sudo[322052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:24 compute-0 sudo[322052]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:25 compute-0 sudo[322077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:25 compute-0 sudo[322077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:25 compute-0 sudo[322077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:25.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:25 compute-0 sudo[322102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:10:25 compute-0 sudo[322102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:25 compute-0 podman[322169]: 2025-11-24 21:10:25.50614879 +0000 UTC m=+0.036123084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:10:25 compute-0 podman[322169]: 2025-11-24 21:10:25.602446182 +0000 UTC m=+0.132420426 container create 9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:25.653+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:25 compute-0 systemd[1]: Started libpod-conmon-9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02.scope.
Nov 24 21:10:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:10:25 compute-0 podman[322169]: 2025-11-24 21:10:25.781140635 +0000 UTC m=+0.311114849 container init 9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 21:10:25 compute-0 podman[322169]: 2025-11-24 21:10:25.796397815 +0000 UTC m=+0.326372029 container start 9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 21:10:25 compute-0 infallible_pascal[322185]: 167 167
Nov 24 21:10:25 compute-0 podman[322169]: 2025-11-24 21:10:25.804581246 +0000 UTC m=+0.334555490 container attach 9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 21:10:25 compute-0 systemd[1]: libpod-9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02.scope: Deactivated successfully.
Nov 24 21:10:25 compute-0 podman[322169]: 2025-11-24 21:10:25.80581618 +0000 UTC m=+0.335790394 container died 9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:26.080+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:26 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aeb8ee29d395474c0c9c3e801e01056630ebe2076c49425daa36108c72e34df-merged.mount: Deactivated successfully.
Nov 24 21:10:26 compute-0 podman[322169]: 2025-11-24 21:10:26.114065571 +0000 UTC m=+0.644039785 container remove 9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_pascal, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:10:26 compute-0 systemd[1]: libpod-conmon-9821296aff2624eaeb3140fd614a5a0445c0b3918090267bd8c5ad9ce4e57a02.scope: Deactivated successfully.
Nov 24 21:10:26 compute-0 podman[322201]: 2025-11-24 21:10:26.230913408 +0000 UTC m=+0.102617745 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
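The health_status entries above embed each container's config_data as a Python-style dict literal (single quotes, bare True), so json.loads rejects it while ast.literal_eval parses it safely. A minimal sketch, assuming the payload has been copied out of the journal line by hand; the values are an abbreviated excerpt from the ovn_metadata_agent entry above:

    import ast

    # Abbreviated excerpt of the config_data field from the log line above.
    config_data = ("{'net': 'host', 'pid': 'host', 'privileged': True, "
                   "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
                   "'test': '/openstack/healthcheck'}}")

    cfg = ast.literal_eval(config_data)   # safe: parses literals only, executes no code
    print(cfg['healthcheck']['test'])     # -> /openstack/healthcheck
    print(cfg['privileged'])              # -> True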
Nov 24 21:10:26 compute-0 podman[322225]: 2025-11-24 21:10:26.326882492 +0000 UTC m=+0.047871620 container create bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 21:10:26 compute-0 systemd[1]: Started libpod-conmon-bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8.scope.
Nov 24 21:10:26 compute-0 podman[322225]: 2025-11-24 21:10:26.304123409 +0000 UTC m=+0.025112567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:10:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a3b451a511d503b7e795274f4c806b64705bc64a296b8a499e9b123944c604/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a3b451a511d503b7e795274f4c806b64705bc64a296b8a499e9b123944c604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a3b451a511d503b7e795274f4c806b64705bc64a296b8a499e9b123944c604/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a3b451a511d503b7e795274f4c806b64705bc64a296b8a499e9b123944c604/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:10:26 compute-0 podman[322225]: 2025-11-24 21:10:26.443493361 +0000 UTC m=+0.164482579 container init bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:10:26 compute-0 podman[322225]: 2025-11-24 21:10:26.454750104 +0000 UTC m=+0.175739242 container start bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_beaver, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 21:10:26 compute-0 podman[322225]: 2025-11-24 21:10:26.47239963 +0000 UTC m=+0.193388788 container attach bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 21:10:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:26.614+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:27.032+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:27 compute-0 ceph-mon[75677]: pgmap v2672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:27 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]: {
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "osd_id": 2,
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "type": "bluestore"
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:     },
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "osd_id": 1,
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "type": "bluestore"
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:     },
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "osd_id": 0,
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:         "type": "bluestore"
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]:     }
Nov 24 21:10:27 compute-0 mystifying_beaver[322242]: }
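The JSON block printed by the short-lived ceph container (podman's auto-generated name mystifying_beaver) is an OSD inventory keyed by osd_uuid, mapping each of the node's three bluestore OSDs to its LVM device. The shape matches ceph-volume's JSON listing output, though the exact subcommand is not visible in the log. A minimal sketch that summarizes such a blob, assuming it has been captured into a string (abbreviated here to one entry copied from the log):

    import json

    inventory = json.loads("""
    {
        "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
            "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
            "type": "bluestore"
        }
    }
    """)

    # Print one line per OSD, ordered by osd_id.
    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
    # -> osd.2: /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)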
Nov 24 21:10:27 compute-0 systemd[1]: libpod-bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8.scope: Deactivated successfully.
Nov 24 21:10:27 compute-0 systemd[1]: libpod-bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8.scope: Consumed 1.051s CPU time.
Nov 24 21:10:27 compute-0 podman[322225]: 2025-11-24 21:10:27.495847251 +0000 UTC m=+1.216836369 container died bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_beaver, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a3b451a511d503b7e795274f4c806b64705bc64a296b8a499e9b123944c604-merged.mount: Deactivated successfully.
Nov 24 21:10:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:27.579+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:27 compute-0 podman[322225]: 2025-11-24 21:10:27.696114795 +0000 UTC m=+1.417103943 container remove bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_beaver, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:10:27 compute-0 systemd[1]: libpod-conmon-bd107441609580f952921408ec69fdb81857a599249129bf575182accbfed9f8.scope: Deactivated successfully.
Nov 24 21:10:27 compute-0 sudo[322102]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:10:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:10:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:10:27 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:10:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 949b4de2-7493-40a3-ac5f-8d91e34f7a59 does not exist
Nov 24 21:10:27 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev f2cea19b-6190-40ac-a33c-6c0c96ced926 does not exist
Nov 24 21:10:27 compute-0 sudo[322287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:10:27 compute-0 sudo[322287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:27 compute-0 sudo[322287]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4742 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
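The SLOW_OPS summary above is the sum of the two per-OSD reports that repeat throughout this window: 24 ops on osd.0 (pool 'vms') plus 21 on osd.1 (pool 'default.rgw.log') gives the mon's 45, and "blocked for 4742 sec" means the oldest op has been stuck for roughly 79 minutes. A trivial check:

    osd0_ops, osd1_ops = 24, 21        # from the osd.0 / osd.1 reports above
    assert osd0_ops + osd1_ops == 45   # mon: "45 slow ops"
    print(f"{4742 / 60:.0f} min")      # -> 79 min blocked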
Nov 24 21:10:28 compute-0 sudo[322312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:10:28 compute-0 sudo[322312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:10:28 compute-0 sudo[322312]: pam_unix(sudo:session): session closed for user root
Nov 24 21:10:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:28.058+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:28 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:28 compute-0 ceph-mon[75677]: pgmap v2673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:10:28 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:10:28 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4742 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:28.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:29.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:29 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:29 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:29.566+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:30.067+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:30 compute-0 sshd-session[322337]: Invalid user ir from 182.93.7.194 port 43636
Nov 24 21:10:30 compute-0 ceph-mon[75677]: pgmap v2674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:30 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:30 compute-0 sshd-session[322337]: Received disconnect from 182.93.7.194 port 43636:11: Bye Bye [preauth]
Nov 24 21:10:30 compute-0 sshd-session[322337]: Disconnected from invalid user ir 182.93.7.194 port 43636 [preauth]
Nov 24 21:10:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:30.546+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:31 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:31.090+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:31 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:31.508+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:31 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:32.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:32 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:32.487+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:32 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:32 compute-0 ceph-mon[75677]: pgmap v2675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:32 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4752 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:33.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:33 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:33.476+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:33 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:33 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:33 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4752 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:34 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:34.137+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:34.466+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:34 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:34 compute-0 ceph-mon[75677]: pgmap v2676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:34 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:35 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:35.161+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:35.475+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:35 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:10:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
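Every pg_autoscaler "pg target" above is reproduced exactly by used_ratio * bias * 300. The factor 300 would be consistent with the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs, though that reading is inferred from the numbers in this log rather than taken from the autoscaler source. A check against four of the pools logged above:

    # (pool, used_ratio, bias, logged pg target) copied from the log lines above
    pools = [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("vms",                0.0008637525843263658,  1.0, 0.25912577529790976),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for name, ratio, bias, target in pools:
        assert abs(ratio * bias * 300 - target) < 1e-12, name
    print("all pg targets match used_ratio * bias * 300")

All of these targets are far below the pools' current pg_num values (1, 32, 16, 32), and each line is "quantized to" the current value, presumably because the autoscaler leaves pg_num unchanged for small deviations.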
Nov 24 21:10:35 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:35 compute-0 podman[322339]: 2025-11-24 21:10:35.868796132 +0000 UTC m=+0.093228862 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 21:10:36 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:36.193+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:36.442+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:36 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:37 compute-0 ceph-mon[75677]: pgmap v2677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:37 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:37 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:37.177+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:37.453+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:37 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:38 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4757 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:38 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:38 compute-0 ceph-mon[75677]: pgmap v2678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:38 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:38.137+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:38.453+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:38 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:39 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:39.116+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:39 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:39 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4757 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:39.430+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:39 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:40 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:40.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:40 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:40 compute-0 ceph-mon[75677]: pgmap v2679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:40 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:40.454+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:40 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:10:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:10:40 compute-0 podman[322360]: 2025-11-24 21:10:40.907875882 +0000 UTC m=+0.130234399 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 21:10:41 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:41.091+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:41.408+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:41 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:41 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:42 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:42.073+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:42.366+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:42 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:42 compute-0 ceph-mon[75677]: pgmap v2680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:42 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:43 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:43.062+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:43.331+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:43 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:43 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:44 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:44.087+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:44.328+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:44 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:44 compute-0 ceph-mon[75677]: pgmap v2681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:44 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:45 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:45.065+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:45.360+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:45 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:46.028+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:46 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:46 compute-0 ceph-mon[75677]: pgmap v2682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:46.358+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:46 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:46 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:46.983+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:47 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:47.313+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:47 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:47 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4767 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:47 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:47.938+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:48.268+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:48 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:48 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:48 compute-0 ceph-mon[75677]: pgmap v2683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:48 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:48 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4767 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:48 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:48.986+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:49.256+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:49 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:49 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:50 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:50.011+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:50.296+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:50 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:50 compute-0 ceph-mon[75677]: pgmap v2684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:50 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:51 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:51.007+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:51.289+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:51 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:51 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:52 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:52.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:52.287+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:52 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:52 compute-0 ceph-mon[75677]: pgmap v2685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:52 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:52 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4772 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:53 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:53.027+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:53.291+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:53 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:53 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:53 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4772 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:54 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:54.063+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:54.298+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:54 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:10:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:10:54 compute-0 ceph-mon[75677]: pgmap v2686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:54 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:55 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:55.027+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:55.326+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:55 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:55 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:56.019+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:56.314+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:56 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:56 compute-0 ceph-mon[75677]: pgmap v2687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:56 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:56 compute-0 podman[322387]: 2025-11-24 21:10:56.844793315 +0000 UTC m=+0.071154937 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 24 21:10:56 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:56.998+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:57.315+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:57 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:57 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4777 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:57 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:10:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:58.012+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:58.366+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:58 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:58 compute-0 ceph-mon[75677]: pgmap v2688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:58 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:58 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4777 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:10:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:58 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:58.988+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:10:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:10:59.341+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:59 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:10:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:59 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:10:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:59 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:10:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:10:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:10:59.999+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:00.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:00 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:00 compute-0 ceph-mon[75677]: pgmap v2689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:00 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:00 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:00.998+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:01.393+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:01 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:01 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:02 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:02.047+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:02.350+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:02 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:02 compute-0 ceph-mon[75677]: pgmap v2690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:02 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:02 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4782 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:02 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:03 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:03.058+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:03.312+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:03 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:03 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:03 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4782 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:04 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:04.059+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:04.299+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:04 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:04 compute-0 ceph-mon[75677]: pgmap v2691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:04 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:05 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:05.030+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:05.328+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:05 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:05 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:06 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:06.028+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:06.361+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:06 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:06 compute-0 podman[322403]: 2025-11-24 21:11:06.830116916 +0000 UTC m=+0.061117307 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 21:11:06 compute-0 ceph-mon[75677]: pgmap v2692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:06 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:07 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:07.036+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:07.410+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:07 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:07 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:07 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4787 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:08 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:08.063+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:08.397+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:08 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:08 compute-0 ceph-mon[75677]: pgmap v2693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:08 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:08 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4787 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:09 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:09.074+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:09.393+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:09 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:11:09.431 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:11:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:11:09.431 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:11:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:11:09.432 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:11:09 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:09 compute-0 ceph-mon[75677]: pgmap v2694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:10 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:10.079+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:10.421+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:10 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:11 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:11 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:11.076+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:11.385+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:11 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:11 compute-0 podman[322423]: 2025-11-24 21:11:11.868865417 +0000 UTC m=+0.106822977 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 24 21:11:12 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:12 compute-0 ceph-mon[75677]: pgmap v2695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:12 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:12.063+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:12.382+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:12 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:12 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4792 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:13 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:13 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4792 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:13 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:13.080+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:13.375+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:13 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:14 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:14 compute-0 ceph-mon[75677]: pgmap v2696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:14 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:14.093+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:14.329+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:14 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:15 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:15.129+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:15.370+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:15 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:16 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:16.146+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:16 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:16 compute-0 ceph-mon[75677]: pgmap v2697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:16.406+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:16 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:11:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305971505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:11:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:11:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305971505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:11:17 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:17.171+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:17.380+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:17 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:17 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:17 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3305971505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:11:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/3305971505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:11:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:18 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:18.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:18.403+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:18 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:18 compute-0 ceph-mon[75677]: pgmap v2698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:18 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:18 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4797 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:19 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:19.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:19.416+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:19 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:19 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:20 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:20.167+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:20.459+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:20 compute-0 ceph-mon[75677]: pgmap v2699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:20 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:21 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:21.142+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:21.429+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:21 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:21 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:22 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:22.100+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:22.471+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:22 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:22 compute-0 ceph-mon[75677]: pgmap v2700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:22 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:22 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4802 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:23 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:23.149+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:23.481+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:23 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:23 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:23 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4802 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:24 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:24.166+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:11:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:24.488+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:24 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:24 compute-0 ceph-mon[75677]: pgmap v2701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:24 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:11:24
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data']
Nov 24 21:11:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:11:25 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:25.173+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:25.479+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:25 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:25 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:26 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:26.149+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:26.449+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:26 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:26 compute-0 ceph-mon[75677]: pgmap v2702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:26 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:27 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:27.168+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:27.433+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:27 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:27 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:27 compute-0 podman[322450]: 2025-11-24 21:11:27.832417428 +0000 UTC m=+0.057895979 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 24 21:11:27 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:28 compute-0 sudo[322470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:28 compute-0 sudo[322470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:28 compute-0 sudo[322470]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:28 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:28.160+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:28 compute-0 sudo[322495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:11:28 compute-0 sudo[322495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:28 compute-0 sudo[322495]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:28 compute-0 sudo[322520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:28 compute-0 sudo[322520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:28 compute-0 sudo[322520]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:28 compute-0 sudo[322545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 21:11:28 compute-0 sudo[322545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:28.452+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:28 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:28 compute-0 ceph-mon[75677]: pgmap v2703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:28 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:28 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4807 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:28 compute-0 sudo[322545]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:11:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:11:28 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:28 compute-0 sudo[322590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:28 compute-0 sudo[322590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:28 compute-0 sudo[322590]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:28 compute-0 sudo[322615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:11:28 compute-0 sudo[322615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:28 compute-0 sudo[322615]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:29 compute-0 sudo[322640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:29 compute-0 sudo[322640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:29 compute-0 sudo[322640]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:29 compute-0 sudo[322665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:11:29 compute-0 sudo[322665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:29 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:29.181+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:29.499+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:29 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:29 compute-0 sudo[322665]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:29 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:29 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:11:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:11:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:11:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:11:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:11:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:29 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 6870de89-9f5f-42bd-b68a-146a89d933b5 does not exist
Nov 24 21:11:29 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev e475dd92-8d5c-4326-a11a-db14a060fc63 does not exist
Nov 24 21:11:29 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 963673a1-65c4-4eda-846c-da2be3e07e49 does not exist
Nov 24 21:11:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:11:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:11:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:11:29 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:11:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:11:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:11:29 compute-0 sudo[322721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:29 compute-0 sudo[322721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:29 compute-0 sudo[322721]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:30 compute-0 sudo[322746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:11:30 compute-0 sudo[322746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:30 compute-0 sudo[322746]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:30 compute-0 sudo[322771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:30 compute-0 sudo[322771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:30 compute-0 sudo[322771]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:30 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:30.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:30 compute-0 sudo[322796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:11:30 compute-0 sudo[322796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
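This is the actual OSD-deployment call: cephadm wraps `ceph-volume lvm batch --no-auto` around three pre-created logical volumes (`ceph_vg0/ceph_lv0` through `ceph_vg2/ceph_lv2`), pinned to the service spec via `CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group`. Before a run like this it can be worth confirming the named LVs actually exist on the host. A minimal sketch using LVM's JSON reporting, assuming lvm2 with `--reportformat json` support (present on RHEL 9) and root privileges, as the sudo call above has:

```python
import json
import subprocess

WANTED = {("ceph_vg0", "ceph_lv0"), ("ceph_vg1", "ceph_lv1"), ("ceph_vg2", "ceph_lv2")}

# lvm2 can emit its report as JSON.
out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_size"],
    check=True, capture_output=True, text=True,
).stdout

present = {(lv["vg_name"], lv["lv_name"])
           for lv in json.loads(out)["report"][0]["lv"]}

for vg, lv in sorted(WANTED):
    print(f"{vg}/{lv}: {'ok' if (vg, lv) in present else 'MISSING'}")
```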
Nov 24 21:11:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:30.484+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:30 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.620639214 +0000 UTC m=+0.048321712 container create 9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_black, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:11:30 compute-0 systemd[1]: Started libpod-conmon-9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4.scope.
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.593506054 +0000 UTC m=+0.021188622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:11:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.735311212 +0000 UTC m=+0.162993780 container init 9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_black, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 21:11:30 compute-0 ceph-mon[75677]: pgmap v2704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:30 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:11:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:11:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:11:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:11:30 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.745300451 +0000 UTC m=+0.172982979 container start 9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.750378328 +0000 UTC m=+0.178060866 container attach 9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:11:30 compute-0 practical_black[322878]: 167 167
Nov 24 21:11:30 compute-0 systemd[1]: libpod-9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4.scope: Deactivated successfully.
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.754297243 +0000 UTC m=+0.181979781 container died 9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_black, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 21:11:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5257e63e508b57c354b6196fd7b2c68200a56ee2f591f2d1145982ef3944f28-merged.mount: Deactivated successfully.
Nov 24 21:11:30 compute-0 podman[322861]: 2025-11-24 21:11:30.806471819 +0000 UTC m=+0.234154357 container remove 9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_black, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 21:11:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 24 21:11:30 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 24 21:11:30 compute-0 systemd[1]: libpod-conmon-9d368f291967fa23eefca7c58a848ee08b7d95b7ae21cc29051104d1358609f4.scope: Deactivated successfully.
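The `practical_black` container lives for about 130 ms: create, init, start, attach, died, remove, with only `167 167` on stdout. That is cephadm probing the image for the ceph user's UID and GID (167 is the `ceph` user in the official images) before running ceph-volume as that identity. A minimal reproduction of the probe, assuming a `stat` of /var/lib/ceph inside the image is what is being sampled (the exact command cephadm runs is not visible in this log):

```python
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

# One-shot container, removed on exit -- mirrors the create/start/attach/
# died/remove sequence podman logs above.
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.strip()

uid, gid = out.split()
print(f"ceph runs as uid={uid} gid={gid} inside the image")  # expect: 167 167
```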
Nov 24 21:11:30 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 24 21:11:31 compute-0 podman[322901]: 2025-11-24 21:11:31.054667472 +0000 UTC m=+0.064692633 container create 6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:11:31 compute-0 systemd[1]: Started libpod-conmon-6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9.scope.
Nov 24 21:11:31 compute-0 podman[322901]: 2025-11-24 21:11:31.029713401 +0000 UTC m=+0.039738552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:11:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247745b68d52f0eebee7a5156902e4566e42679e7351a584cd03ec6558a29e5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247745b68d52f0eebee7a5156902e4566e42679e7351a584cd03ec6558a29e5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247745b68d52f0eebee7a5156902e4566e42679e7351a584cd03ec6558a29e5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247745b68d52f0eebee7a5156902e4566e42679e7351a584cd03ec6558a29e5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247745b68d52f0eebee7a5156902e4566e42679e7351a584cd03ec6558a29e5d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:31 compute-0 podman[322901]: 2025-11-24 21:11:31.163441781 +0000 UTC m=+0.173466932 container init 6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:11:31 compute-0 podman[322901]: 2025-11-24 21:11:31.180379177 +0000 UTC m=+0.190404318 container start 6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 21:11:31 compute-0 podman[322901]: 2025-11-24 21:11:31.190411038 +0000 UTC m=+0.200436169 container attach 6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sutherland, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:11:31 compute-0 ceph-osd[89640]: osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:31.220+0000 7f1a67169640 -1 osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 21:11:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:31.504+0000 7f2ca3ee7640 -1 osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:31 compute-0 ceph-osd[88624]: osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:31 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:31 compute-0 ceph-mon[75677]: osdmap e193: 3 total, 3 up, 3 in
Nov 24 21:11:32 compute-0 ceph-osd[89640]: osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:32.263+0000 7f1a67169640 -1 osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:32 compute-0 friendly_sutherland[322919]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:11:32 compute-0 friendly_sutherland[322919]: --> relative data size: 1.0
Nov 24 21:11:32 compute-0 friendly_sutherland[322919]: --> All data devices are unavailable
Nov 24 21:11:32 compute-0 systemd[1]: libpod-6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9.scope: Deactivated successfully.
Nov 24 21:11:32 compute-0 systemd[1]: libpod-6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9.scope: Consumed 1.088s CPU time.
Nov 24 21:11:32 compute-0 podman[322901]: 2025-11-24 21:11:32.312812134 +0000 UTC m=+1.322837335 container died 6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-247745b68d52f0eebee7a5156902e4566e42679e7351a584cd03ec6558a29e5d-merged.mount: Deactivated successfully.
Nov 24 21:11:32 compute-0 podman[322901]: 2025-11-24 21:11:32.388168903 +0000 UTC m=+1.398194034 container remove 6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:11:32 compute-0 systemd[1]: libpod-conmon-6ee62844dbec922f4a18d1214751db98d2aa87d102eea444b9c63ed73a0ea3e9.scope: Deactivated successfully.
Nov 24 21:11:32 compute-0 sudo[322796]: pam_unix(sudo:session): session closed for user root
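`friendly_sutherland` was the batch run itself, and its report is the interesting part: 3 LVM data devices passed, all rejected as unavailable. Given the lv_tags shown further down, that is the expected idempotent outcome rather than a failure: each LV already carries `ceph.osd_id`/`ceph.osd_fsid` tags from an earlier prepare, so ceph-volume declines to redeploy onto it. A minimal sketch for spotting already-consumed LVs from the tags alone, under the same `lvs --reportformat json` assumption as above:

```python
import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
    check=True, capture_output=True, text=True,
).stdout

for lv in json.loads(out)["report"][0]["lv"]:
    tags = dict(t.split("=", 1) for t in lv["lv_tags"].split(",") if "=" in t)
    if "ceph.osd_id" in tags:
        # Already prepared: ceph-volume lvm batch will treat it as unavailable.
        print(f'{lv["vg_name"]}/{lv["lv_name"]}: osd.{tags["ceph.osd_id"]} '
              f'(fsid {tags.get("ceph.osd_fsid", "?")})')
```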
Nov 24 21:11:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:32.490+0000 7f2ca3ee7640 -1 osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:32 compute-0 ceph-osd[88624]: osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:32 compute-0 sudo[322961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:32 compute-0 sudo[322961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:32 compute-0 sudo[322961]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:32 compute-0 sudo[322986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:11:32 compute-0 sudo[322986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:32 compute-0 sudo[322986]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:32 compute-0 sudo[323011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:32 compute-0 sudo[323011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:32 compute-0 sudo[323011]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:32 compute-0 sudo[323036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:11:32 compute-0 sudo[323036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:32 compute-0 ceph-mon[75677]: pgmap v2706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 21:11:32 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:32 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:32 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
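The mon's `_set_new_cache_sizes` line shows its cache autotuner splitting a budget of roughly 0.95 GiB across incremental-osdmap, full-osdmap, and key-value caches, and the three allocations account for nearly the whole budget. A quick check of the arithmetic, with the values copied from the line above:

```python
cache_size = 1_020_054_731        # bytes, ~0.95 GiB
parts = {
    "inc_alloc":  343_932_928,
    "full_alloc": 348_127_232,
    "kv_alloc":   318_767_104,
}
for name, b in parts.items():
    print(f"{name}: {b / 2**20:7.1f} MiB ({b / cache_size:5.1%})")
print(f"sum: {sum(parts.values()) / cache_size:5.1%} of cache_size")  # ~99.1%
```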
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.188697011 +0000 UTC m=+0.058456246 container create c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 21:11:33 compute-0 systemd[1]: Started libpod-conmon-c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5.scope.
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.162707761 +0000 UTC m=+0.032467006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:11:33 compute-0 ceph-osd[89640]: osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:33.272+0000 7f1a67169640 -1 osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 21:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.295423225 +0000 UTC m=+0.165182440 container init c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.307267204 +0000 UTC m=+0.177026409 container start c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.3111942 +0000 UTC m=+0.180953405 container attach c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:11:33 compute-0 boring_ellis[323120]: 167 167
Nov 24 21:11:33 compute-0 systemd[1]: libpod-c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5.scope: Deactivated successfully.
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.312987828 +0000 UTC m=+0.182747043 container died c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 21:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5664da27673eb9587b057f98799666c1147f522cbab12b4e12d38a4c3d0079c1-merged.mount: Deactivated successfully.
Nov 24 21:11:33 compute-0 podman[323103]: 2025-11-24 21:11:33.35169522 +0000 UTC m=+0.221454425 container remove c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:11:33 compute-0 systemd[1]: libpod-conmon-c52a5e53b0984fa0442c99481400bcfdae312cda63ab862301ca0b1aea5a65a5.scope: Deactivated successfully.
Nov 24 21:11:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:33.445+0000 7f2ca3ee7640 -1 osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:33 compute-0 ceph-osd[88624]: osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:33 compute-0 podman[323143]: 2025-11-24 21:11:33.547659367 +0000 UTC m=+0.050486110 container create d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_borg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:11:33 compute-0 systemd[1]: Started libpod-conmon-d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5.scope.
Nov 24 21:11:33 compute-0 podman[323143]: 2025-11-24 21:11:33.530724621 +0000 UTC m=+0.033551364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8930e248375ad221675ee6ba012606825bdb2c3c4b751276ff115b8100ccb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8930e248375ad221675ee6ba012606825bdb2c3c4b751276ff115b8100ccb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8930e248375ad221675ee6ba012606825bdb2c3c4b751276ff115b8100ccb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8930e248375ad221675ee6ba012606825bdb2c3c4b751276ff115b8100ccb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:33 compute-0 podman[323143]: 2025-11-24 21:11:33.665531632 +0000 UTC m=+0.168358385 container init d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_borg, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:11:33 compute-0 podman[323143]: 2025-11-24 21:11:33.679960321 +0000 UTC m=+0.182787094 container start d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 21:11:33 compute-0 podman[323143]: 2025-11-24 21:11:33.684621826 +0000 UTC m=+0.187448569 container attach d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_borg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 21:11:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 24 21:11:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:33 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:33 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4812 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 24 21:11:33 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 24 21:11:34 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:34.287+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:34.495+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:34 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:34 compute-0 relaxed_borg[323159]: {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:     "0": [
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:         {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "devices": [
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "/dev/loop3"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             ],
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_name": "ceph_lv0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_size": "21470642176",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "name": "ceph_lv0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "tags": {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cluster_name": "ceph",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.crush_device_class": "",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.encrypted": "0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osd_id": "0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.type": "block",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.vdo": "0"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             },
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "type": "block",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "vg_name": "ceph_vg0"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:         }
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:     ],
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:     "1": [
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:         {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "devices": [
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "/dev/loop4"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             ],
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_name": "ceph_lv1",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_size": "21470642176",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "name": "ceph_lv1",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "tags": {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cluster_name": "ceph",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.crush_device_class": "",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.encrypted": "0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osd_id": "1",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.type": "block",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.vdo": "0"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             },
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "type": "block",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "vg_name": "ceph_vg1"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:         }
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:     ],
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:     "2": [
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:         {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "devices": [
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "/dev/loop5"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             ],
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_name": "ceph_lv2",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_size": "21470642176",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "name": "ceph_lv2",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "tags": {
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.cluster_name": "ceph",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.crush_device_class": "",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.encrypted": "0",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osd_id": "2",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.type": "block",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:                 "ceph.vdo": "0"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             },
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "type": "block",
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:             "vg_name": "ceph_vg2"
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:         }
Nov 24 21:11:34 compute-0 relaxed_borg[323159]:     ]
Nov 24 21:11:34 compute-0 relaxed_borg[323159]: }
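
[annotation] The JSON the relaxed_borg container finishes printing above is `ceph-volume lvm list --format json` output, keyed by OSD id; each logical volume carries its metadata twice, once as the parsed "tags" object and once as the raw comma-separated "lv_tags" string. A minimal parsing sketch follows; the lvm_list.json capture file is hypothetical, the data is exactly what the log shows:

```python
# Hedged sketch: split the raw "lv_tags" string back into the key/value
# map that ceph-volume also exposes as "tags". Assumes the JSON above was
# saved to lvm_list.json; the tag values here contain no embedded commas.
import json

with open("lvm_list.json") as f:
    lvm_list = json.load(f)

for osd_id, lvs in lvm_list.items():
    for lv in lvs:
        tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
        assert tags == lv["tags"]        # the two encodings agree
        print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"])
```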
Nov 24 21:11:34 compute-0 podman[323143]: 2025-11-24 21:11:34.591780075 +0000 UTC m=+1.094606868 container died d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_borg, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:11:34 compute-0 systemd[1]: libpod-d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5.scope: Deactivated successfully.
Nov 24 21:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8930e248375ad221675ee6ba012606825bdb2c3c4b751276ff115b8100ccb29-merged.mount: Deactivated successfully.
Nov 24 21:11:34 compute-0 podman[323143]: 2025-11-24 21:11:34.664766621 +0000 UTC m=+1.167593374 container remove d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_borg, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:11:34 compute-0 systemd[1]: libpod-conmon-d9398da848889a3e79faabdc10a200da858abe73aba314377fe7e017eeb084f5.scope: Deactivated successfully.
Nov 24 21:11:34 compute-0 sudo[323036]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:34 compute-0 sudo[323183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:34 compute-0 sudo[323183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:34 compute-0 sudo[323183]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:34 compute-0 ceph-mon[75677]: pgmap v2707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 511 B/s wr, 7 op/s
Nov 24 21:11:34 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:34 compute-0 ceph-mon[75677]: osdmap e194: 3 total, 3 up, 3 in
Nov 24 21:11:34 compute-0 sudo[323208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:11:34 compute-0 sudo[323208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:34 compute-0 sudo[323208]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:35 compute-0 sudo[323233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:35 compute-0 sudo[323233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:35 compute-0 sudo[323233]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:35 compute-0 sudo[323258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:11:35 compute-0 sudo[323258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:35.257+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:35 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 2 active+clean+laggy, 297 active+clean; 153 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Nov 24 21:11:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:35.489+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:35 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00040532717164810836 of space, bias 1.0, pg target 0.12159815149443251 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:11:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
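
[annotation] The pg_autoscaler lines above are a direct multiplication: each pool's share of raw capacity times its bias times the cluster's PG budget, then quantized to a power of two. The logged values are consistent with a budget of 300, i.e. the default mon_target_pg_per_osd of 100 across the 3 up OSDs in the osdmap lines; that multiplier is inferred from the numbers, not stated in the log. A worked sketch:

```python
# Worked sketch of the pg_autoscaler arithmetic logged above. Assumption:
# PG budget = mon_target_pg_per_osd (default 100) * 3 OSDs = 300. The real
# module additionally clamps the result and only resizes a pool once the
# target drifts roughly 3x from the current pg_num, which is why every
# pool above stays at its current value.
budget = 100 * 3

for pool, usage, bias in [
    ("vms",                0.0008637525843263658, 1.0),  # logged 0.2591...
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),  # logged 0.000610...
]:
    print(pool, usage * bias * budget)  # reproduces the logged pg targets
```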
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.56002436 +0000 UTC m=+0.053054540 container create 0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:11:35 compute-0 systemd[1]: Started libpod-conmon-0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db.scope.
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.535220702 +0000 UTC m=+0.028250932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:11:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.676454585 +0000 UTC m=+0.169484815 container init 0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.688729346 +0000 UTC m=+0.181759526 container start 0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.694987134 +0000 UTC m=+0.188017374 container attach 0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:11:35 compute-0 xenodochial_lamport[323340]: 167 167
Nov 24 21:11:35 compute-0 systemd[1]: libpod-0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db.scope: Deactivated successfully.
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.697893803 +0000 UTC m=+0.190923983 container died 0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3a020f002aa14f7871c191483c44f1ae555c2ba9ff83b4c72fa04390de33cb-merged.mount: Deactivated successfully.
Nov 24 21:11:35 compute-0 podman[323324]: 2025-11-24 21:11:35.750689695 +0000 UTC m=+0.243719845 container remove 0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:11:35 compute-0 systemd[1]: libpod-conmon-0e8bfbc2341e7c36b44b0e84c76e672a72e6cb0404b76f7d4f0d025241b504db.scope: Deactivated successfully.
Nov 24 21:11:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:35 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:35 compute-0 podman[323363]: 2025-11-24 21:11:35.939313924 +0000 UTC m=+0.044605542 container create ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 21:11:35 compute-0 systemd[1]: Started libpod-conmon-ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286.scope.
Nov 24 21:11:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a34644169d66e9f1afb6783ffb6d48d41410832512a21b9a6356d9bbf90a76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a34644169d66e9f1afb6783ffb6d48d41410832512a21b9a6356d9bbf90a76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:36 compute-0 podman[323363]: 2025-11-24 21:11:35.92095465 +0000 UTC m=+0.026246278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a34644169d66e9f1afb6783ffb6d48d41410832512a21b9a6356d9bbf90a76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a34644169d66e9f1afb6783ffb6d48d41410832512a21b9a6356d9bbf90a76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
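
[annotation] The repeated xfs warnings above come from overlay mounts backed by XFS inodes without the bigtime feature: their timestamps are 32-bit signed seconds, so the kernel notes the 0x7fffffff ceiling. The cutoff date falls out directly:

```python
# The 0x7fffffff cap in the xfs messages is the 32-bit signed time_t
# limit; converting it gives the familiar year-2038 cutoff.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```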
Nov 24 21:11:36 compute-0 podman[323363]: 2025-11-24 21:11:36.184390734 +0000 UTC m=+0.289682422 container init ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 21:11:36 compute-0 podman[323363]: 2025-11-24 21:11:36.197874786 +0000 UTC m=+0.303166424 container start ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:11:36 compute-0 podman[323363]: 2025-11-24 21:11:36.209961403 +0000 UTC m=+0.315253051 container attach ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:11:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:36.219+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:36 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:36.487+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:36 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:36 compute-0 ceph-mon[75677]: pgmap v2709: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 2 active+clean+laggy, 297 active+clean; 153 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Nov 24 21:11:36 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:37.173+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:37 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:37 compute-0 admiring_jang[323380]: {
Nov 24 21:11:37 compute-0 admiring_jang[323380]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "osd_id": 2,
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "type": "bluestore"
Nov 24 21:11:37 compute-0 admiring_jang[323380]:     },
Nov 24 21:11:37 compute-0 admiring_jang[323380]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "osd_id": 1,
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "type": "bluestore"
Nov 24 21:11:37 compute-0 admiring_jang[323380]:     },
Nov 24 21:11:37 compute-0 admiring_jang[323380]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "osd_id": 0,
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:11:37 compute-0 admiring_jang[323380]:         "type": "bluestore"
Nov 24 21:11:37 compute-0 admiring_jang[323380]:     }
Nov 24 21:11:37 compute-0 admiring_jang[323380]: }
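
[annotation] This admiring_jang output is the `ceph-volume ... raw list --format json` call issued through cephadm in the sudo line at 21:11:35. Unlike `lvm list`, it is keyed by OSD fsid, so lookups by OSD id need re-keying; a minimal sketch, with raw_list.json as a hypothetical capture of the JSON above:

```python
# Hedged sketch: re-key the fsid-indexed "raw list" map by integer osd_id.
import json

with open("raw_list.json") as f:
    raw = json.load(f)

by_id = {e["osd_id"]: e["device"] for e in raw.values()}
print(by_id[0])   # /dev/mapper/ceph_vg0-ceph_lv0
print(by_id[2])   # /dev/mapper/ceph_vg2-ceph_lv2
```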
Nov 24 21:11:37 compute-0 systemd[1]: libpod-ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286.scope: Deactivated successfully.
Nov 24 21:11:37 compute-0 systemd[1]: libpod-ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286.scope: Consumed 1.056s CPU time.
Nov 24 21:11:37 compute-0 podman[323363]: 2025-11-24 21:11:37.247676197 +0000 UTC m=+1.352967805 container died ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 21:11:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 2 active+clean+laggy, 297 active+clean; 128 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 24 21:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-04a34644169d66e9f1afb6783ffb6d48d41410832512a21b9a6356d9bbf90a76-merged.mount: Deactivated successfully.
Nov 24 21:11:37 compute-0 podman[323363]: 2025-11-24 21:11:37.324330991 +0000 UTC m=+1.429622639 container remove ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 21:11:37 compute-0 systemd[1]: libpod-conmon-ce31715e29c8cfeb7f9e6837bc12170caf63bb093364953299bcfbab13565286.scope: Deactivated successfully.
Nov 24 21:11:37 compute-0 sudo[323258]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:11:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:11:37 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:37 compute-0 podman[323414]: 2025-11-24 21:11:37.402287311 +0000 UTC m=+0.109146011 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118)
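
[annotation] The multipathd health_status line above embeds the whole Kolla config_data label, and note that it is rendered as a Python literal (single quotes, bare True), not JSON, so json.loads would reject it while ast.literal_eval parses it safely. A sketch; obtaining the label via podman inspect and the config_data.txt filename are assumptions:

```python
# Hedged sketch: parse the config_data container label seen in the log.
# Assumes the label text was saved to config_data.txt, e.g. with
#   podman inspect -f '{{ index .Config.Labels "config_data" }}' multipathd
import ast

config = ast.literal_eval(open("config_data.txt").read())
print(config["healthcheck"]["test"])   # /openstack/healthcheck
print(config["restart"])               # always
```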
Nov 24 21:11:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 8481c2bc-12cb-4744-8e19-b7b143aa31f6 does not exist
Nov 24 21:11:37 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 555f80e3-2c92-4e62-a8bc-530aa3360512 does not exist
Nov 24 21:11:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:37.476+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:37 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:37 compute-0 sudo[323448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:11:37 compute-0 sudo[323448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:37 compute-0 sudo[323448]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:37 compute-0 sudo[323473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:11:37 compute-0 sudo[323473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:11:37 compute-0 sudo[323473]: pam_unix(sudo:session): session closed for user root
Nov 24 21:11:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:37 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:37 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:11:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
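
[annotation] The 45 in this health check is the sum of the two per-OSD counts logged alongside it (21 slow requests on osd.1 against default.rgw.log plus 24 on osd.0 against vms), and "blocked for 4817 sec" dates the oldest op to roughly 19:51, well before this window. A hedged sketch for pulling the same check programmatically; the health.json capture and the exact field layout are assumptions to verify against your release:

```python
# Hedged sketch: read `ceph health detail -f json` output saved to
# health.json and print the SLOW_OPS summary. Field names follow the
# usual Ceph health JSON ("checks" -> check name -> "summary" ->
# "message"); treat that layout as an assumption, not a guarantee.
import json

health = json.load(open("health.json"))
slow = health.get("checks", {}).get("SLOW_OPS")
if slow:
    print(slow["summary"]["message"])
    # e.g. 45 slow ops, oldest one blocked for 4817 sec,
    #      daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
```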
Nov 24 21:11:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 24 21:11:37 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 24 21:11:38 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 24 21:11:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:38.132+0000 7f1a67169640 -1 osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:38 compute-0 ceph-osd[89640]: osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:38.523+0000 7f2ca3ee7640 -1 osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:38 compute-0 ceph-osd[88624]: osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:38 compute-0 ceph-mon[75677]: pgmap v2710: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 2 active+clean+laggy, 297 active+clean; 128 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 24 21:11:38 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:38 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4817 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:38 compute-0 ceph-mon[75677]: osdmap e195: 3 total, 3 up, 3 in
Nov 24 21:11:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:39.159+0000 7f1a67169640 -1 osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:39 compute-0 ceph-osd[89640]: osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 132 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 572 KiB/s wr, 61 op/s
Nov 24 21:11:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:39.522+0000 7f2ca3ee7640 -1 osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:39 compute-0 ceph-osd[88624]: osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 24 21:11:39 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:39 compute-0 ceph-mon[75677]: pgmap v2712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 132 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 572 KiB/s wr, 61 op/s
Nov 24 21:11:39 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 24 21:11:39 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 24 21:11:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:40.197+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:40 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:40.569+0000 7f2ca3ee7640 -1 osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:40 compute-0 ceph-osd[88624]: osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:11:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:11:40 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:40 compute-0 ceph-mon[75677]: osdmap e196: 3 total, 3 up, 3 in
Nov 24 21:11:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:41.201+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:41 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.8 MiB/s wr, 68 op/s
Nov 24 21:11:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:41.561+0000 7f2ca3ee7640 -1 osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:41 compute-0 ceph-osd[88624]: osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:41 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:41 compute-0 ceph-mon[75677]: pgmap v2714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.8 MiB/s wr, 68 op/s
Nov 24 21:11:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:42.207+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:42 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:42.552+0000 7f2ca3ee7640 -1 osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:42 compute-0 ceph-osd[88624]: osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:42 compute-0 podman[323498]: 2025-11-24 21:11:42.877191939 +0000 UTC m=+0.101382732 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:11:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:42 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:42 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:42 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 24 21:11:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 24 21:11:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 24 21:11:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:43.168+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:43 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.4 MiB/s wr, 12 op/s
Nov 24 21:11:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:43.503+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:43 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:43 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:43 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4822 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:43 compute-0 ceph-mon[75677]: osdmap e197: 3 total, 3 up, 3 in
Nov 24 21:11:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:43 compute-0 ceph-mon[75677]: pgmap v2716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.4 MiB/s wr, 12 op/s
Nov 24 21:11:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:44.197+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:44 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:44.477+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:44 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:45 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:45.195+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:45 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.8 MiB/s wr, 16 op/s
Nov 24 21:11:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:45.449+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:45 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:46 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:46 compute-0 ceph-mon[75677]: pgmap v2717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.8 MiB/s wr, 16 op/s
Nov 24 21:11:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:46.148+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:46 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:46.482+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:46 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:47 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:47.125+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:47 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 2.0 MiB/s wr, 6 op/s
Nov 24 21:11:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:47.443+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:47 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
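The mon's _set_new_cache_sizes line above appears to be its periodic cache-size tuning tick (it recurs roughly every five seconds in this capture). The three reported allocations are exact MiB multiples (328, 332 and 304 MiB) and together account for all but about 0.9% of the ~973 MiB cache_size; a quick check in Python, using only the numbers printed above:

    # Sanity-check the mon cache figures from the _set_new_cache_sizes line above.
    cache_size = 1020054731                      # ~972.8 MiB
    allocs = {"inc_alloc": 343932928,            # 328 MiB exactly
              "full_alloc": 348127232,           # 332 MiB exactly
              "kv_alloc": 318767104}             # 304 MiB exactly
    for name, val in allocs.items():
        assert val % (1 << 20) == 0, name        # each is a whole MiB multiple
    total = sum(allocs.values())
    print(total, cache_size - total)             # 1010827264, 9227467 (~0.9% slack)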
Nov 24 21:11:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4827 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:48 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:48 compute-0 ceph-mon[75677]: pgmap v2718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 2.0 MiB/s wr, 6 op/s
Nov 24 21:11:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:48.168+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:48 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:48.400+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:48 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:49 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:49 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4827 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
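Each second in this stretch follows the same pattern: every OSD's get_health_metrics tick re-reports its stuck ops (24 on osd.0, all against the 'vms' pool; 21 on osd.1, against 'default.rgw.log'), journald carries each message twice (once under the cephadm container unit name, once as plain ceph-osd), and the mon rolls the total into the SLOW_OPS health check. The blocked-for counter tracks wall time (4827 sec here, 4832 sec at 21:11:58, 4842 sec at 21:12:03), which places the start of the oldest op near 19:51. A minimal sketch for collapsing the flood into one line per daemon, assuming the journal text arrives on stdin; the regexes simply mirror the line formats above:

    #!/usr/bin/env python3
    # Collapse repeated Ceph SLOW_OPS journal lines into a per-daemon summary.
    import re
    import sys
    from collections import Counter

    # "osd.1 197 get_health_metrics reporting 21 slow ops, oldest is ..."
    SLOW = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")
    # "45 slow ops, oldest one blocked for 4827 sec, daemons [osd.0,osd.1] ..."
    HEALTH = re.compile(r"(\d+) slow ops, oldest one blocked for (\d+) sec")

    reports = Counter()            # (daemon, op count) -> times seen
    oldest = 0
    for line in sys.stdin:
        m = SLOW.search(line)
        if m:
            # Each tick appears twice in the journal (container unit + ceph-osd),
            # so these counts run at 2x the number of actual ticks.
            reports[(m.group(1), int(m.group(2)))] += 1
        h = HEALTH.search(line)
        if h:
            oldest = max(oldest, int(h.group(2)))

    for (daemon, ops), n in sorted(reports.items()):
        print(f"{daemon}: stuck at {ops} slow ops across {n} journal lines")
    print(f"oldest op blocked for {oldest} sec at last health update")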
Nov 24 21:11:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:49.204+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:49 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 1.7 MiB/s wr, 5 op/s
Nov 24 21:11:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:49.414+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:49 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:50 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:50 compute-0 ceph-mon[75677]: pgmap v2719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 1.7 MiB/s wr, 5 op/s
Nov 24 21:11:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:50.251+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:50 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:50.423+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:50 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:51 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 409 B/s wr, 4 op/s
Nov 24 21:11:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:51.292+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:51 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:51.404+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:51 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:52 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:52 compute-0 ceph-mon[75677]: pgmap v2720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 409 B/s wr, 4 op/s
Nov 24 21:11:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:52.322+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:52 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:52.412+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:52 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:52 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:53 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 398 B/s wr, 4 op/s
Nov 24 21:11:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:53.337+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:53 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:53.433+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:53 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:54 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:54 compute-0 ceph-mon[75677]: pgmap v2721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 398 B/s wr, 4 op/s
Nov 24 21:11:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:54.342+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:54 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:54.442+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:54 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:11:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:11:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:55 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 3 op/s
Nov 24 21:11:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:55.334+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:55 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:55.473+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:55 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:56 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:56 compute-0 ceph-mon[75677]: pgmap v2722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 3 op/s
Nov 24 21:11:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:56.350+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:56 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:56.477+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:56 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:57 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:57.318+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:57 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:57.527+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:57 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:58 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:11:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:58 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:58 compute-0 ceph-mon[75677]: pgmap v2723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:58 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4832 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:11:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:58.365+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:58 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:58 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:58.477+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:58 compute-0 podman[323526]: 2025-11-24 21:11:58.838977082 +0000 UTC m=+0.061673752 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:11:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:59 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:11:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:11:59.335+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:59 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:11:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:11:59 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:11:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:11:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:11:59.507+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:00 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:00 compute-0 ceph-mon[75677]: pgmap v2724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:00.369+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:00 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:00 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:00.470+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:01 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:01.412+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:01 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:01 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:01.511+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:02 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:02 compute-0 ceph-mon[75677]: pgmap v2725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:02.451+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:02 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:02 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:02.523+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:03 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:03 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4842 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:03.405+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:03 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:03.497+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:03 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:04 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:04 compute-0 ceph-mon[75677]: pgmap v2726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:04.437+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:04 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:04.450+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:04 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:05 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:05.411+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:05 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:05.456+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:05 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:06 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:06 compute-0 ceph-mon[75677]: pgmap v2727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.210077) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018726210186, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 2807, "num_deletes": 704, "total_data_size": 3028762, "memory_usage": 3080720, "flush_reason": "Manual Compaction"}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018726229360, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2953116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80742, "largest_seqno": 83548, "table_properties": {"data_size": 2941608, "index_size": 6145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4293, "raw_key_size": 41177, "raw_average_key_size": 24, "raw_value_size": 2912000, "raw_average_value_size": 1707, "num_data_blocks": 265, "num_entries": 1705, "num_filter_entries": 1705, "num_deletions": 704, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018561, "oldest_key_time": 1764018561, "file_creation_time": 1764018726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 19336 microseconds, and 11122 cpu microseconds.
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.229428) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2953116 bytes OK
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.229457) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.231057) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.231079) EVENT_LOG_v1 {"time_micros": 1764018726231071, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.231102) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 3014493, prev total WAL file size 3014493, number of live WAL files 2.
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.232565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2883KB)], [170(9975KB)]
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018726232657, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13168023, "oldest_snapshot_seqno": -1}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 13680 keys, 11542713 bytes, temperature: kUnknown
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018726326255, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11542713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11465556, "index_size": 41844, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34245, "raw_key_size": 375096, "raw_average_key_size": 27, "raw_value_size": 11229084, "raw_average_value_size": 820, "num_data_blocks": 1540, "num_entries": 13680, "num_filter_entries": 13680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.326646) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11542713 bytes
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.328390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.6 rd, 123.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.7 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 15102, records dropped: 1422 output_compression: NoCompression
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.328447) EVENT_LOG_v1 {"time_micros": 1764018726328426, "job": 106, "event": "compaction_finished", "compaction_time_micros": 93689, "compaction_time_cpu_micros": 56709, "output_level": 6, "num_output_files": 1, "total_output_size": 11542713, "num_input_records": 15102, "num_output_records": 13680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018726329845, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018726333163, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.232479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.333211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.333219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.333222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.333224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:06 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:06.333227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
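The rocksdb burst above is one manual flush-plus-compaction round on the mon's store.db: JOB 105 flushes a 2953116-byte memtable to L0 table #172, JOB 106 then merges #172 with the 9975 KB L6 file #170 into the new 11542713-byte table #173, and both inputs plus the old WAL are deleted immediately. The EVENT_LOG_v1 payloads are plain JSON, so the amplification figures quoted in the compaction summary ("read-write-amplify(8.4) write-amplify(3.9)") can be recomputed directly from them; a sketch, again assuming the journal text on stdin:

    # Recompute JOB 106's amplification figures from the EVENT_LOG_v1 payloads
    # above; field names are exactly those emitted in this capture.
    import json
    import re
    import sys

    events = {}
    for line in sys.stdin:
        m = re.search(r"EVENT_LOG_v1 (\{.*\})", line)
        if m:
            evt = json.loads(m.group(1))
            events.setdefault(evt["event"], []).append(evt)

    start = events["compaction_started"][0]    # input_data_size: L0 + L6 bytes read
    finish = events["compaction_finished"][0]  # total_output_size: bytes written to L6

    # JOB 105's flush created the L0 file (#172) that this compaction consumed.
    l0_in = next(e["file_size"] for e in events["table_file_creation"]
                 if e["file_number"] == 172)

    write_amp = finish["total_output_size"] / l0_in                           # ~3.9
    rw_amp = (start["input_data_size"] + finish["total_output_size"]) / l0_in # ~8.4
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")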
Nov 24 21:12:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:06.371+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:06 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:06.422+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:06 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:07 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:07.368+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:07 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:07.424+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:07 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:07 compute-0 podman[323545]: 2025-11-24 21:12:07.843311535 +0000 UTC m=+0.078323930 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 21:12:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:08 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4847 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:08 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:08 compute-0 ceph-mon[75677]: pgmap v2728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:08.338+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:08 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:08.436+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:08 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:09 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:09 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4847 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:09.335+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:09 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:12:09.432 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:12:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:12:09.433 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:12:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:12:09.433 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:12:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:09.442+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:09 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:10 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:10 compute-0 ceph-mon[75677]: pgmap v2729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:10.361+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:10 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:10.424+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:10 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:11 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:11.377+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:11 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:11.397+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:11 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:12 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:12 compute-0 ceph-mon[75677]: pgmap v2730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:12 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:12.418+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:12 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:12.425+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:12 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:13 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:13.400+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:13 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:13.437+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:13 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:13 compute-0 podman[323566]: 2025-11-24 21:12:13.910717497 +0000 UTC m=+0.136568089 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 21:12:14 compute-0 ceph-mon[75677]: pgmap v2731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:14 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:14.434+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:14 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:14.437+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:14 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:15 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:15.398+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:15 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:15.465+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:16 compute-0 ceph-mon[75677]: pgmap v2732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:16 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:16.371+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:16 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:12:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1565272056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:12:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:12:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1565272056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:12:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:16.476+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:16 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:17 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:17 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1565272056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:12:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/1565272056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:12:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:17.408+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:17 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:17.518+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:17 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:18 compute-0 ceph-mon[75677]: pgmap v2733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:18 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4857 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:18 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:18.428+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:18 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:18.567+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:18 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:19 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:19.410+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:19 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:19.593+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:19 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:20 compute-0 ceph-mon[75677]: pgmap v2734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:20 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:20.456+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:20.580+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:20 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:21 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:21.472+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:21 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:21.620+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:21 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:22.474+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:22 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:22.601+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:22 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:22 compute-0 ceph-mon[75677]: pgmap v2735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:22 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.019576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018743019697, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 506, "num_deletes": 303, "total_data_size": 283646, "memory_usage": 293064, "flush_reason": "Manual Compaction"}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018743024216, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 278960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83549, "largest_seqno": 84054, "table_properties": {"data_size": 276273, "index_size": 594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8098, "raw_average_key_size": 19, "raw_value_size": 270231, "raw_average_value_size": 667, "num_data_blocks": 26, "num_entries": 405, "num_filter_entries": 405, "num_deletions": 303, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018727, "oldest_key_time": 1764018727, "file_creation_time": 1764018743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 4647 microseconds, and 2570 cpu microseconds.
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.024267) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 278960 bytes OK
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.024291) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.026349) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.026369) EVENT_LOG_v1 {"time_micros": 1764018743026362, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.026392) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 280401, prev total WAL file size 280401, number of live WAL files 2.
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.026971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303331' seq:72057594037927935, type:22 .. '6C6F676D0034323836' seq:0, type:0; will stop at (end)
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(272KB)], [173(11MB)]
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018743027031, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 11821673, "oldest_snapshot_seqno": -1}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 13472 keys, 11622032 bytes, temperature: kUnknown
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018743107748, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 11622032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11545798, "index_size": 41433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33733, "raw_key_size": 371604, "raw_average_key_size": 27, "raw_value_size": 11312533, "raw_average_value_size": 839, "num_data_blocks": 1518, "num_entries": 13472, "num_filter_entries": 13472, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.108111) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11622032 bytes
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.109807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.3 rd, 143.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(84.0) write-amplify(41.7) OK, records in: 14085, records dropped: 613 output_compression: NoCompression
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.109839) EVENT_LOG_v1 {"time_micros": 1764018743109824, "job": 108, "event": "compaction_finished", "compaction_time_micros": 80814, "compaction_time_cpu_micros": 31520, "output_level": 6, "num_output_files": 1, "total_output_size": 11622032, "num_input_records": 14085, "num_output_records": 13472, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018743110099, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018743114352, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.026847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.114414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.114421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.114423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.114425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:23 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:12:23.114427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:12:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:23.464+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:23 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:23.625+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:23 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:23 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:23 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4862 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:12:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:24.491+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:24 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:12:24
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'backups']
Nov 24 21:12:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:12:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:24.659+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:24 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:24 compute-0 ceph-mon[75677]: pgmap v2736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:24 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:25.502+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:25 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:25 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:25.690+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:25 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:26.526+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:26 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:26 compute-0 ceph-mon[75677]: pgmap v2737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:26 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:26.715+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:26 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:27.529+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:27 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:27 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:27.724+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:27 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4866 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:28.497+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:28 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:28.698+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:28 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:28 compute-0 ceph-mon[75677]: pgmap v2738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:28 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:28 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4866 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:29.464+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:29 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:29.690+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:29 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:29 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:29 compute-0 podman[323593]: 2025-11-24 21:12:29.849729566 +0000 UTC m=+0.086769078 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 24 21:12:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:30.415+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:30 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:30.728+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:30 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:30 compute-0 ceph-mon[75677]: pgmap v2739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:30 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:31.373+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:31 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:31.680+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:31 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:31 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:32.386+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:32 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:32.702+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:32 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:32 compute-0 ceph-mon[75677]: pgmap v2740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:32 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:33 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4871 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:33.361+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:33 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:33.685+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:33 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:33 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:33 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4871 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:34.315+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:34 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:34.713+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:34 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:34 compute-0 ceph-mon[75677]: pgmap v2741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:34 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:35.309+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:35 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033296094614833626 of space, bias 1.0, pg target 0.09988828384450088 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:12:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:12:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:35.687+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:35 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 24 21:12:35 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:35 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 24 21:12:35 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 24 21:12:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:36.291+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:36 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:36.642+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:36 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:36 compute-0 ceph-mon[75677]: pgmap v2742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:36 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:36 compute-0 ceph-mon[75677]: osdmap e198: 3 total, 3 up, 3 in
Nov 24 21:12:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:37.286+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:37 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Nov 24 21:12:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:37.641+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:37 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:37 compute-0 sudo[323614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:37 compute-0 sudo[323614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:37 compute-0 sudo[323614]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:37 compute-0 sudo[323639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:12:37 compute-0 sudo[323639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:37 compute-0 sudo[323639]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:37 compute-0 sudo[323664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:37 compute-0 sudo[323664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:37 compute-0 sudo[323664]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:37 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:37 compute-0 sudo[323689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:12:37 compute-0 sudo[323689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:37 compute-0 podman[323713]: 2025-11-24 21:12:37.948066321 +0000 UTC m=+0.071276730 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4876 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:38.244+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:38 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:38 compute-0 sudo[323689]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:12:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev ada436d4-1ce3-45ca-8cf7-f5817c9a31b3 does not exist
Nov 24 21:12:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev b32002bc-1462-46cf-a648-378bfba27480 does not exist
Nov 24 21:12:38 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev defe7170-a1b2-46ce-b5ff-03866b5dc6c3 does not exist
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:12:38 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:12:38 compute-0 sudo[323764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:38 compute-0 sudo[323764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:38 compute-0 sudo[323764]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:38 compute-0 sudo[323789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:12:38 compute-0 sudo[323789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:38 compute-0 sudo[323789]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:38.633+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:38 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:38 compute-0 sudo[323814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:38 compute-0 sudo[323814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:38 compute-0 sudo[323814]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:38 compute-0 sudo[323839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:12:38 compute-0 sudo[323839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:38 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:38 compute-0 ceph-mon[75677]: pgmap v2744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 148 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Nov 24 21:12:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:38 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4876 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:38 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:12:38 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.190873379 +0000 UTC m=+0.050304905 container create 74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:12:39 compute-0 systemd[1]: Started libpod-conmon-74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713.scope.
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.170937993 +0000 UTC m=+0.030369609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:12:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:39.274+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:39 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.29412707 +0000 UTC m=+0.153558676 container init 74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.303698067 +0000 UTC m=+0.163129603 container start 74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.307407547 +0000 UTC m=+0.166839163 container attach 74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:12:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 144 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Nov 24 21:12:39 compute-0 recursing_chebyshev[323920]: 167 167
Nov 24 21:12:39 compute-0 systemd[1]: libpod-74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713.scope: Deactivated successfully.
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.315025302 +0000 UTC m=+0.174456828 container died 74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 21:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e589e584e74374eae2affa1fc550d576ccbd490fbd94a7f021311f963aa7f629-merged.mount: Deactivated successfully.
Nov 24 21:12:39 compute-0 podman[323904]: 2025-11-24 21:12:39.365746428 +0000 UTC m=+0.225177944 container remove 74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:12:39 compute-0 systemd[1]: libpod-conmon-74f04fb21301a4bc39d59d1e70988a8547004d7a17c4cafc8072cf55a358c713.scope: Deactivated successfully.
Nov 24 21:12:39 compute-0 podman[323945]: 2025-11-24 21:12:39.563685559 +0000 UTC m=+0.061107727 container create c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 21:12:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:39.603+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:39 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:39 compute-0 systemd[1]: Started libpod-conmon-c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e.scope.
Nov 24 21:12:39 compute-0 podman[323945]: 2025-11-24 21:12:39.542261982 +0000 UTC m=+0.039684180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2927ef494aca649741656a8f9bffb3478336f0f9faed060cdc2e8d2a8b5c4b92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2927ef494aca649741656a8f9bffb3478336f0f9faed060cdc2e8d2a8b5c4b92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2927ef494aca649741656a8f9bffb3478336f0f9faed060cdc2e8d2a8b5c4b92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2927ef494aca649741656a8f9bffb3478336f0f9faed060cdc2e8d2a8b5c4b92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2927ef494aca649741656a8f9bffb3478336f0f9faed060cdc2e8d2a8b5c4b92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:39 compute-0 podman[323945]: 2025-11-24 21:12:39.668640915 +0000 UTC m=+0.166063123 container init c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 21:12:39 compute-0 podman[323945]: 2025-11-24 21:12:39.683336421 +0000 UTC m=+0.180758579 container start c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:12:39 compute-0 podman[323945]: 2025-11-24 21:12:39.68774412 +0000 UTC m=+0.185166318 container attach c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:12:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:39 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:40.302+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:40 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:40.610+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:40 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:12:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:12:40 compute-0 amazing_wright[323961]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:12:40 compute-0 amazing_wright[323961]: --> relative data size: 1.0
Nov 24 21:12:40 compute-0 amazing_wright[323961]: --> All data devices are unavailable
Nov 24 21:12:40 compute-0 ceph-mon[75677]: pgmap v2745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 144 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Nov 24 21:12:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:40 compute-0 systemd[1]: libpod-c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e.scope: Deactivated successfully.
Nov 24 21:12:40 compute-0 podman[323945]: 2025-11-24 21:12:40.881772994 +0000 UTC m=+1.379195192 container died c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:12:40 compute-0 systemd[1]: libpod-c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e.scope: Consumed 1.143s CPU time.
Nov 24 21:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2927ef494aca649741656a8f9bffb3478336f0f9faed060cdc2e8d2a8b5c4b92-merged.mount: Deactivated successfully.
Nov 24 21:12:40 compute-0 podman[323945]: 2025-11-24 21:12:40.960882355 +0000 UTC m=+1.458304553 container remove c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:12:40 compute-0 systemd[1]: libpod-conmon-c283219726def6f1b78e867a6991cc350a653aa4336770ecad58a373dcdde63e.scope: Deactivated successfully.
Nov 24 21:12:41 compute-0 sudo[323839]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:41 compute-0 sudo[324002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:41 compute-0 sudo[324002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:41 compute-0 sudo[324002]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:41 compute-0 sudo[324027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:12:41 compute-0 sudo[324027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:41 compute-0 sudo[324027]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:41 compute-0 sudo[324052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:41 compute-0 sudo[324052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:41 compute-0 sudo[324052]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:41 compute-0 sudo[324077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:12:41 compute-0 sudo[324077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:41.298+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:41 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 24 21:12:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:41.591+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:41 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:41 compute-0 podman[324143]: 2025-11-24 21:12:41.699161006 +0000 UTC m=+0.050626204 container create d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 21:12:41 compute-0 systemd[1]: Started libpod-conmon-d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece.scope.
Nov 24 21:12:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:12:41 compute-0 podman[324143]: 2025-11-24 21:12:41.676981999 +0000 UTC m=+0.028447177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:12:42 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:42 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:42 compute-0 podman[324143]: 2025-11-24 21:12:42.274294474 +0000 UTC m=+0.625759682 container init d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 21:12:42 compute-0 podman[324143]: 2025-11-24 21:12:42.288628081 +0000 UTC m=+0.640093279 container start d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:12:42 compute-0 nifty_pare[324159]: 167 167
Nov 24 21:12:42 compute-0 systemd[1]: libpod-d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece.scope: Deactivated successfully.
Nov 24 21:12:42 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:42.319+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:42 compute-0 podman[324143]: 2025-11-24 21:12:42.405428805 +0000 UTC m=+0.756894003 container attach d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 21:12:42 compute-0 podman[324143]: 2025-11-24 21:12:42.406734581 +0000 UTC m=+0.758199769 container died d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 21:12:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:42.620+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:42 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef4fc5c1099b9cc135ef974ec387bfa79ca8ac6ce8f1f666bf36b1e34931b127-merged.mount: Deactivated successfully.
Nov 24 21:12:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4882 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 24 21:12:43 compute-0 podman[324143]: 2025-11-24 21:12:43.194499605 +0000 UTC m=+1.545964773 container remove d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 21:12:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 24 21:12:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 24 21:12:43 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:43 compute-0 ceph-mon[75677]: pgmap v2746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 24 21:12:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:43 compute-0 systemd[1]: libpod-conmon-d8003506e35581883abcc876d981f71ddcd049abf1a3c055fd2ab8aa28fa0ece.scope: Deactivated successfully.
Nov 24 21:12:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 21:12:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:43.322+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:43 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:43 compute-0 podman[324184]: 2025-11-24 21:12:43.361149412 +0000 UTC m=+0.047818658 container create 30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 21:12:43 compute-0 podman[324184]: 2025-11-24 21:12:43.338075191 +0000 UTC m=+0.024744447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:12:43 compute-0 systemd[1]: Started libpod-conmon-30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68.scope.
Nov 24 21:12:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58a423095ddee37f1c4282a05698a214c4e72aff6aac164b72a4fd02f5381a13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58a423095ddee37f1c4282a05698a214c4e72aff6aac164b72a4fd02f5381a13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58a423095ddee37f1c4282a05698a214c4e72aff6aac164b72a4fd02f5381a13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58a423095ddee37f1c4282a05698a214c4e72aff6aac164b72a4fd02f5381a13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:43 compute-0 podman[324184]: 2025-11-24 21:12:43.476846489 +0000 UTC m=+0.163515765 container init 30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:12:43 compute-0 podman[324184]: 2025-11-24 21:12:43.490524707 +0000 UTC m=+0.177193953 container start 30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 21:12:43 compute-0 podman[324184]: 2025-11-24 21:12:43.497007861 +0000 UTC m=+0.183677127 container attach 30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 21:12:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:43.574+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:43 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:44 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:44 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4882 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:44 compute-0 ceph-mon[75677]: osdmap e199: 3 total, 3 up, 3 in
Nov 24 21:12:44 compute-0 ceph-mon[75677]: pgmap v2748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]: {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:     "0": [
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:         {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "devices": [
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "/dev/loop3"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             ],
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_name": "ceph_lv0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_size": "21470642176",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "name": "ceph_lv0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "tags": {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cluster_name": "ceph",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.crush_device_class": "",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.encrypted": "0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osd_id": "0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.type": "block",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.vdo": "0"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             },
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "type": "block",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "vg_name": "ceph_vg0"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:         }
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:     ],
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:     "1": [
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:         {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "devices": [
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "/dev/loop4"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             ],
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_name": "ceph_lv1",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_size": "21470642176",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "name": "ceph_lv1",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "tags": {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cluster_name": "ceph",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.crush_device_class": "",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.encrypted": "0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osd_id": "1",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.type": "block",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.vdo": "0"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             },
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "type": "block",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "vg_name": "ceph_vg1"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:         }
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:     ],
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:     "2": [
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:         {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "devices": [
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "/dev/loop5"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             ],
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_name": "ceph_lv2",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_size": "21470642176",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "name": "ceph_lv2",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "tags": {
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.cluster_name": "ceph",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.crush_device_class": "",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.encrypted": "0",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osd_id": "2",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.type": "block",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:                 "ceph.vdo": "0"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             },
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "type": "block",
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:             "vg_name": "ceph_vg2"
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:         }
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]:     ]
Nov 24 21:12:44 compute-0 hopeful_varahamihira[324200]: }
Nov 24 21:12:44 compute-0 systemd[1]: libpod-30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68.scope: Deactivated successfully.
Nov 24 21:12:44 compute-0 podman[324184]: 2025-11-24 21:12:44.279319048 +0000 UTC m=+0.965988384 container died 30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 21:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-58a423095ddee37f1c4282a05698a214c4e72aff6aac164b72a4fd02f5381a13-merged.mount: Deactivated successfully.
Nov 24 21:12:44 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:44.329+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:44 compute-0 podman[324184]: 2025-11-24 21:12:44.357613557 +0000 UTC m=+1.044282843 container remove 30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 21:12:44 compute-0 systemd[1]: libpod-conmon-30a336b181a5f091ef34ea8f1620d072370c598cbc281d2eb19fa87cfef96a68.scope: Deactivated successfully.
Nov 24 21:12:44 compute-0 sudo[324077]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:44 compute-0 podman[324209]: 2025-11-24 21:12:44.483005654 +0000 UTC m=+0.165020905 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 24 21:12:44 compute-0 sudo[324241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:44 compute-0 sudo[324241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:44 compute-0 sudo[324241]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:44 compute-0 sudo[324272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:12:44 compute-0 sudo[324272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:44 compute-0 sudo[324272]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:44.604+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:44 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:44 compute-0 sudo[324297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:44 compute-0 sudo[324297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:44 compute-0 sudo[324297]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:44 compute-0 sudo[324322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:12:44 compute-0 sudo[324322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:45 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.239383033 +0000 UTC m=+0.054210261 container create 978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 21:12:45 compute-0 systemd[1]: Started libpod-conmon-978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91.scope.
Nov 24 21:12:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.217511274 +0000 UTC m=+0.032338522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:12:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.333423465 +0000 UTC m=+0.148250703 container init 978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:12:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:45.337+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:45 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.33919813 +0000 UTC m=+0.154025348 container start 978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.342215503 +0000 UTC m=+0.157042721 container attach 978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:12:45 compute-0 elastic_varahamihira[324402]: 167 167
Nov 24 21:12:45 compute-0 systemd[1]: libpod-978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91.scope: Deactivated successfully.
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.343644941 +0000 UTC m=+0.158472159 container died 978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c20ec7907a58cb8b9390a2113b162db48eb03bbba91a31f22f9d4cb527bff690-merged.mount: Deactivated successfully.
Nov 24 21:12:45 compute-0 podman[324386]: 2025-11-24 21:12:45.379909007 +0000 UTC m=+0.194736235 container remove 978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 21:12:45 compute-0 systemd[1]: libpod-conmon-978a208607c575f55119dfdfdca7ae06f58a5f1abbb869b11904252427df2d91.scope: Deactivated successfully.
Nov 24 21:12:45 compute-0 podman[324427]: 2025-11-24 21:12:45.568085354 +0000 UTC m=+0.051204389 container create 2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 21:12:45 compute-0 systemd[1]: Started libpod-conmon-2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc.scope.
Nov 24 21:12:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:12:45 compute-0 podman[324427]: 2025-11-24 21:12:45.547006477 +0000 UTC m=+0.030125492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bc52ba0353881b8a9e8f8cc25a2f7665d6c3f390c2b26c384fcfce2b22f6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bc52ba0353881b8a9e8f8cc25a2f7665d6c3f390c2b26c384fcfce2b22f6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bc52ba0353881b8a9e8f8cc25a2f7665d6c3f390c2b26c384fcfce2b22f6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bc52ba0353881b8a9e8f8cc25a2f7665d6c3f390c2b26c384fcfce2b22f6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:12:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:45.645+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:45 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:45 compute-0 podman[324427]: 2025-11-24 21:12:45.657118242 +0000 UTC m=+0.140237307 container init 2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:12:45 compute-0 podman[324427]: 2025-11-24 21:12:45.669772683 +0000 UTC m=+0.152891708 container start 2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 21:12:45 compute-0 podman[324427]: 2025-11-24 21:12:45.674173301 +0000 UTC m=+0.157292296 container attach 2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 21:12:46 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:46 compute-0 ceph-mon[75677]: pgmap v2749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Nov 24 21:12:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:46.313+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:46 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:46 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:46.642+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:46 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]: {
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "osd_id": 2,
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "type": "bluestore"
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:     },
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "osd_id": 1,
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "type": "bluestore"
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:     },
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "osd_id": 0,
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:         "type": "bluestore"
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]:     }
Nov 24 21:12:46 compute-0 wizardly_grothendieck[324443]: }
Nov 24 21:12:46 compute-0 systemd[1]: libpod-2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc.scope: Deactivated successfully.
Nov 24 21:12:46 compute-0 podman[324427]: 2025-11-24 21:12:46.789145167 +0000 UTC m=+1.272264162 container died 2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:12:46 compute-0 systemd[1]: libpod-2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc.scope: Consumed 1.127s CPU time.
Nov 24 21:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b79bc52ba0353881b8a9e8f8cc25a2f7665d6c3f390c2b26c384fcfce2b22f6f-merged.mount: Deactivated successfully.
Nov 24 21:12:46 compute-0 podman[324427]: 2025-11-24 21:12:46.852980236 +0000 UTC m=+1.336099251 container remove 2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:12:46 compute-0 systemd[1]: libpod-conmon-2fc529ef2d7c104be3e1278383b300a7c0492f404b88102a3f510c40b091c7fc.scope: Deactivated successfully.
Nov 24 21:12:46 compute-0 sudo[324322]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:12:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:12:46 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:12:46 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:12:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 69e83b91-b88d-4910-9a78-54dd28496154 does not exist
Nov 24 21:12:46 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 3b94da0f-0fad-4484-84ba-697c578a833f does not exist
Nov 24 21:12:47 compute-0 sudo[324489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:12:47 compute-0 sudo[324489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:47 compute-0 sudo[324489]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:47 compute-0 sudo[324514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:12:47 compute-0 sudo[324514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:12:47 compute-0 sudo[324514]: pam_unix(sudo:session): session closed for user root
Nov 24 21:12:47 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:12:47 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:12:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 21:12:47 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:47.349+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:47.618+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:47 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4886 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:48 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:48 compute-0 ceph-mon[75677]: pgmap v2750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 24 21:12:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:48.340+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:48 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:48.616+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:48 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:49 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:49 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4886 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 0 B/s wr, 1 op/s
Nov 24 21:12:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:49.379+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:49 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:49.619+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:49 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:50 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:50 compute-0 ceph-mon[75677]: pgmap v2751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 0 B/s wr, 1 op/s
Nov 24 21:12:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:50.399+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:50 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:50.644+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:50 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:51 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:51 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:51.393+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:51 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:51.620+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:51 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:52 compute-0 ceph-mon[75677]: pgmap v2752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:52 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:52.366+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:52 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:52.627+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:52 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:53 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:53.337+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:53 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:53.648+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:53 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:54 compute-0 ceph-mon[75677]: pgmap v2753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:54 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:54.342+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:54 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:12:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:12:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:54.679+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:54 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:55 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:55.362+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:55 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:55.705+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:55 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:56.341+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:56 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:56 compute-0 ceph-mon[75677]: pgmap v2754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:56 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:56.738+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:56 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:57.294+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:57 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:57 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4896 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:57 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:57.772+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:57 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:12:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:58.275+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:58 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:58 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:58 compute-0 ceph-mon[75677]: pgmap v2755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:58 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4896 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:12:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:58 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:58.743+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:58 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:12:59.288+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:59 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:12:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:12:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:12:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:12:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:12:59.703+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:59 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:12:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:00 compute-0 sshd-session[324478]: Invalid user asterisk from 14.63.196.175 port 40614
Nov 24 21:13:00 compute-0 podman[324540]: 2025-11-24 21:13:00.23613997 +0000 UTC m=+0.089446130 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:13:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:00.282+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:00 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:00 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:00 compute-0 ceph-mon[75677]: pgmap v2756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:00 compute-0 sshd-session[324478]: Received disconnect from 14.63.196.175 port 40614:11: Bye Bye [preauth]
Nov 24 21:13:00 compute-0 sshd-session[324478]: Disconnected from invalid user asterisk 14.63.196.175 port 40614 [preauth]
Nov 24 21:13:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:00.682+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:00 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:01.253+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:01 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:01 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:01 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:01.674+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:01 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:02.213+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:02 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:02 compute-0 ceph-mon[75677]: pgmap v2757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:02 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:02.650+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:02 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4901 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:03.258+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:03 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:03 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4901 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:03 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:03.677+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:03 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:04.295+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:04 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:04 compute-0 ceph-mon[75677]: pgmap v2758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:04.671+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:04 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:05.270+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:05 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:05 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:05 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:05 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:05.657+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:05 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:06.260+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:06 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:06 compute-0 ceph-mon[75677]: pgmap v2759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:06 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:06.701+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:06 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:07.256+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:07 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:07 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:07.669+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:07 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:08 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4906 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
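The mon logs _set_new_cache_sizes roughly every five seconds as its memory autotuner (governed by mon_memory_target) re-splits its RocksDB cache allocations. The byte counts are easier to read in MiB; a trivial conversion of the figures above, nothing more:

    # Convert the _set_new_cache_sizes byte counts above to MiB.
    sizes = {"cache_size": 1020054731, "inc_alloc": 343932928,
             "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, val in sizes.items():
        print(f"{name}: {val / 2**20:.0f} MiB")   # -> 973, 328, 332, 304 MiB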
Nov 24 21:13:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:08.223+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:08 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:08 compute-0 ceph-mon[75677]: pgmap v2760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:08 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4906 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:08 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:08.641+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:08 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:08 compute-0 podman[324559]: 2025-11-24 21:13:08.864325222 +0000 UTC m=+0.084767173 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
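The podman entries interleaved through this capture are container health_status events: each time the healthcheck timer fires, podman runs the test command declared in config_data ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<name>) and journals the result along with the full container configuration. health_status=healthy with health_failing_streak=0 means the probe passed; the same probe can be run by hand with podman healthcheck run multipathd. A minimal sketch for skimming these oversized events down to container name and status, assuming journal text on stdin (the regex targets only the fields visible in the lines here):

    import re, sys

    # Match "container health_status <id> (image=..., name=<name>, ..., health_status=<status>, ...)"
    EVENT = re.compile(r"container health_status \S+ \(.*?name=([^,]+),.*?health_status=([^,]+),")
    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            print(m.group(1), m.group(2))

The same pattern matches the ovn_controller and ovn_metadata_agent events that appear later in this capture.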
Nov 24 21:13:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:09.187+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:09 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:13:09.434 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:13:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:13:09.434 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:13:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:13:09.434 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:13:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:09 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:09.629+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:09 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:10.168+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:10 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:10.674+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:10 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:10 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:10 compute-0 ceph-mon[75677]: pgmap v2761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:10 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:11.153+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:11 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:11 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:11.702+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:11 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:12.158+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:12 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:12.661+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:12 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:12 compute-0 ceph-mon[75677]: pgmap v2762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:12 compute-0 ceph-mon[75677]: 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:13:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:13.121+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:13 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 45 slow ops, oldest one blocked for 4911 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:13.675+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:13 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:13 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:13 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:13 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:13 compute-0 ceph-mon[75677]: Health check update: 45 slow ops, oldest one blocked for 4911 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:14.133+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:14 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:14.676+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:14 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:14 compute-0 ceph-mon[75677]: pgmap v2763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:14 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:14 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:14 compute-0 podman[324579]: 2025-11-24 21:13:14.962871593 +0000 UTC m=+0.197187281 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 21:13:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:15.085+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:15 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:15.693+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:15 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:16.124+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:16 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:13:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275540320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:13:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:13:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275540320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:13:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:16.709+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:16 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:16 compute-0 ceph-mon[75677]: pgmap v2764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:16 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/4275540320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:13:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/4275540320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
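The two audit entries show entity client.openstack (the OpenStack storage connection at 192.168.122.10) polling cluster capacity with df and the per-pool quota with osd pool get-quota, both dispatched to the mon as JSON commands. A minimal sketch of issuing the same pair through the librados Python binding — the conffile path and client name mirror the log but are assumptions about this host's layout and keyring:

    import json
    import rados

    # Connect as the same entity the audit channel logs above (assumed keyring/conf paths).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # The same JSON payloads the monitor records as dispatched commands.
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:80])
    finally:
        cluster.shutdown()

mon_command returns a (ret, outbuf, outs) triple; each call surfaces in the mon's audit channel as a 'dispatch' line like the ones above.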
Nov 24 21:13:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:17.078+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:17 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:17.722+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:17 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:17 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:18.102+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:18 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:18 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 4916 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:18.673+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:18 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:18 compute-0 ceph-mon[75677]: pgmap v2765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:18 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:18 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 4916 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
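The mon's SLOW_OPS health check is simply the sum of the per-OSD reports: 21 (osd.1, default.rgw.log) + 24 (osd.0, vms) = 45 earlier, dropping to 21 + 8 = 29 here once osd.0's backlog shrank. The updates arrive five seconds apart and the 'blocked for' age (4906, 4911, 4916 s so far) grows in lockstep, so the oldest stuck op consistently dates to about 19:51:22. Note that every OSD message appears twice — once under the ceph-<fsid>-osd-N container unit and once under the plain ceph-osd identifier — so any tally over this journal has to deduplicate. A sketch that keeps the latest count per OSD from journal text on stdin:

    import re, sys
    from collections import Counter

    # e.g. "osd.1 199 get_health_metrics reporting 21 slow ops, oldest is ..."
    SLOW = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")
    latest = Counter()
    for line in sys.stdin:
        m = SLOW.search(line)
        if m:
            latest[m.group(1)] = int(m.group(2))   # last report wins per OSD
    print(dict(latest), "total:", sum(latest.values()))

Run over this section it ends at {'osd.0': 8, 'osd.1': 21} total: 29, matching the health check update above.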
Nov 24 21:13:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:19.100+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:19 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:19.674+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:19 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:19 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:20.139+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:20.688+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:20 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:21 compute-0 ceph-mon[75677]: pgmap v2766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:21 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:21.176+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:21 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:21.685+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:21 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:22.128+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:22 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:22 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:22 compute-0 ceph-mon[75677]: pgmap v2767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:22.706+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:22 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:23.162+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:23 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:23 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 4921 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:23.730+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:23 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:24.140+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:24 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:24 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:24 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 4921 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:24 compute-0 ceph-mon[75677]: pgmap v2768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:13:24
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr']
Nov 24 21:13:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
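The [balancer INFO root] burst is one automatic pass of the mgr balancer module: it opens plan auto_2025-11-24_21:13:24 in upmap mode with a 5% misplaced-PG ceiling, walks the eleven pools listed, and 'prepared 0/10 changes' means it found no pg-upmap-items adjustments to make against its per-run optimization budget — expected on a cluster this small and this empty. On a live system the module can be inspected with ceph balancer status; a sketch driving that through the CLI (assumes the ceph binary and an admin keyring on this host, and that the status fields named below are present in this release's JSON output):

    import json, subprocess

    # Ask the mgr balancer module for its current state via the ceph CLI.
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))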
Nov 24 21:13:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:24.745+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:24 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:25.129+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:25 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:25 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:25.728+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:25 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:26.129+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:26 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:26 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:26 compute-0 ceph-mon[75677]: pgmap v2769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:26.764+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:26 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:27.096+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:27 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:27 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:27.720+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:27 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:28.093+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:28 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 4926 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:28 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:28 compute-0 ceph-mon[75677]: pgmap v2770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:28.680+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:28 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:29 compute-0 sshd-session[324607]: Received disconnect from 182.93.7.194 port 61458:11: Bye Bye [preauth]
Nov 24 21:13:29 compute-0 sshd-session[324607]: Disconnected from authenticating user root 182.93.7.194 port 61458 [preauth]
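Unrelated to the storage incident: sshd records an unauthenticated connection from 182.93.7.194 that attempted user root and dropped during preauth — most likely routine Internet scanning, noted only because it interleaves with the Ceph noise.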
Nov 24 21:13:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:29.135+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:29 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:29 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:29 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 4926 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:29.727+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:29 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:30.107+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:30 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:30 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:30 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:30 compute-0 ceph-mon[75677]: pgmap v2771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:30 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:30 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:30.692+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:30 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:30 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:30 compute-0 podman[324609]: 2025-11-24 21:13:30.831274313 +0000 UTC m=+0.065357601 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:13:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:31.062+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:31 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:31 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:31 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:31 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:31 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:31 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:31.732+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:31 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:31 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:32.016+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:32 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:32 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:32 compute-0 ceph-mon[75677]: pgmap v2772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:32 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:32.777+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:32 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:32 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:32 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:32.976+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:32 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:32 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:33 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:33 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:33 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:33 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:33.821+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:33 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:33 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:33 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:33.932+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:33 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:33 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:34 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:34 compute-0 ceph-mon[75677]: pgmap v2773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:34 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:34 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:34.829+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:34 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:34 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:34 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:34.936+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:34 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:34 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:35 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:35 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:13:35 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:13:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:35.824+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:35 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:35 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:35 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:35.942+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:35 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:35 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:36 compute-0 ceph-mon[75677]: pgmap v2774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:36 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:36 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:36 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:36 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:36.860+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:36 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:36.989+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:36 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:36 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:37 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:37 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 4937 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:37 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:37 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:37.870+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:37 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:37 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:37 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:37.947+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:37 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:37 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:38 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.292142) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018818292215, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1424, "num_deletes": 457, "total_data_size": 1374297, "memory_usage": 1404608, "flush_reason": "Manual Compaction"}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018818374258, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 922914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84055, "largest_seqno": 85478, "table_properties": {"data_size": 917535, "index_size": 2069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 21390, "raw_average_key_size": 23, "raw_value_size": 903061, "raw_average_value_size": 1011, "num_data_blocks": 89, "num_entries": 893, "num_filter_entries": 893, "num_deletions": 457, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764018743, "oldest_key_time": 1764018743, "file_creation_time": 1764018818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 82182 microseconds, and 4416 cpu microseconds.
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.374337) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 922914 bytes OK
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.374360) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.447102) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.447150) EVENT_LOG_v1 {"time_micros": 1764018818447140, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.447174) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1366680, prev total WAL file size 1368967, number of live WAL files 2.
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.480414) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323535' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(901KB)], [176(11MB)]
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018818480467, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 12544946, "oldest_snapshot_seqno": -1}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: pgmap v2775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:38 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 4937 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:38 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:38 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 13467 keys, 9540648 bytes, temperature: kUnknown
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018818594416, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9540648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9468013, "index_size": 37834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33733, "raw_key_size": 371127, "raw_average_key_size": 27, "raw_value_size": 9238475, "raw_average_value_size": 686, "num_data_blocks": 1372, "num_entries": 13467, "num_filter_entries": 13467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764013598, "oldest_key_time": 0, "file_creation_time": 1764018818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7034352b-6130-4856-a956-9f7f793f6e65", "db_session_id": "5CV8W25MMEGW3WBPB1SJ", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.594715) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9540648 bytes
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.629046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.0 rd, 83.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(23.9) write-amplify(10.3) OK, records in: 14365, records dropped: 898 output_compression: NoCompression
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.629093) EVENT_LOG_v1 {"time_micros": 1764018818629075, "job": 110, "event": "compaction_finished", "compaction_time_micros": 114010, "compaction_time_cpu_micros": 48703, "output_level": 6, "num_output_files": 1, "total_output_size": 9540648, "num_input_records": 14365, "num_output_records": 13467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018818629689, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764018818633563, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.480329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.633641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.633646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.633647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.633649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:13:38 compute-0 ceph-mon[75677]: rocksdb: (Original Log Time 2025/11/24-21:13:38.633650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 21:13:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:38.854+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:38 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:38 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:38 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:38.941+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:38 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:38 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:39 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:39 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:39 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:39 compute-0 podman[324629]: 2025-11-24 21:13:39.853405537 +0000 UTC m=+0.079066990 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 21:13:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:39.890+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:39 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:39 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:39 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:39.943+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:39 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:39 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:40 compute-0 ceph-mon[75677]: pgmap v2776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:40 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:40 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 21:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 21:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 21:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 21:13:40 compute-0 ceph-mgr[75975]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 21:13:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:40.869+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:40 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:40 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:40 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:40.933+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:40 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:40 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:41 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:41 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:41 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:41.852+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:41 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:41 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:41 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:41.972+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:41 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:41 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:42.899+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:42 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:42 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:42 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:42.930+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:42 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:42 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:43 compute-0 ceph-mon[75677]: pgmap v2777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:43 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:43 compute-0 ceph-mon[75677]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:13:43 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 29 slow ops, oldest one blocked for 4942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:43 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:43 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:43.928+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:43 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:43 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:43 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:43.933+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:43 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:43 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:44 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:44 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:44 compute-0 ceph-mon[75677]: Health check update: 29 slow ops, oldest one blocked for 4942 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:44 compute-0 ceph-mon[75677]: pgmap v2778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:44.932+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:44 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:44 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:44 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:44.945+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:44 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:44 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:45 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:45 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:45 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:45 compute-0 podman[324649]: 2025-11-24 21:13:45.856530498 +0000 UTC m=+0.091664780 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 21:13:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:45.935+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:45 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:45 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:45 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:45.972+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:45 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:45 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:46 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:46 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:46 compute-0 ceph-mon[75677]: pgmap v2779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:46 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:46.931+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:46 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:46 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:47.016+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:47 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:47 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:47 compute-0 sudo[324676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:47 compute-0 sudo[324676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:47 compute-0 sudo[324676]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:47 compute-0 sudo[324701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:13:47 compute-0 sudo[324701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:47 compute-0 sudo[324701]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:47 compute-0 sudo[324726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:47 compute-0 sudo[324726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:47 compute-0 sudo[324726]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:47 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:47 compute-0 sudo[324751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 21:13:47 compute-0 sudo[324751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:47 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:47 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:47 compute-0 sudo[324751]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:47 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:47.953+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:47 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:47 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:13:47 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:13:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 21:13:47 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:13:47 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 21:13:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:48.040+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:48 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:48 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:13:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 80cf9e68-5720-4a7b-85d6-4fce2231319a does not exist
Nov 24 21:13:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev 99de27f9-1cff-4200-b538-dc0eadf3b813 does not exist
Nov 24 21:13:48 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev fcc4327d-f98e-4679-abe4-20f238359d48 does not exist
Nov 24 21:13:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 21:13:48 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 21:13:48 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:13:48 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:13:48 compute-0 sudo[324806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:48 compute-0 sudo[324806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:48 compute-0 sudo[324806]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:48 compute-0 sudo[324831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:13:48 compute-0 sudo[324831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:48 compute-0 sudo[324831]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:48 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4947 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:48 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:48 compute-0 sudo[324856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:48 compute-0 sudo[324856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:48 compute-0 sudo[324856]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:48 compute-0 sudo[324881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 21:13:48 compute-0 sudo[324881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:48 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:48 compute-0 ceph-mon[75677]: pgmap v2780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:48 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:13:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:13:48 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4947 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:48 compute-0 podman[324947]: 2025-11-24 21:13:48.866355511 +0000 UTC m=+0.109398026 container create 484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 21:13:48 compute-0 podman[324947]: 2025-11-24 21:13:48.783262874 +0000 UTC m=+0.026305389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:13:48 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:48.954+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:48 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:48 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:48 compute-0 systemd[1]: Started libpod-conmon-484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b.scope.
Nov 24 21:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:13:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:49.050+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:49 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:49 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:49 compute-0 podman[324947]: 2025-11-24 21:13:49.094929606 +0000 UTC m=+0.337972161 container init 484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:13:49 compute-0 podman[324947]: 2025-11-24 21:13:49.107966618 +0000 UTC m=+0.351009113 container start 484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 21:13:49 compute-0 relaxed_noyce[324963]: 167 167
Nov 24 21:13:49 compute-0 systemd[1]: libpod-484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b.scope: Deactivated successfully.
Nov 24 21:13:49 compute-0 podman[324947]: 2025-11-24 21:13:49.18827517 +0000 UTC m=+0.431317665 container attach 484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 21:13:49 compute-0 podman[324947]: 2025-11-24 21:13:49.188908588 +0000 UTC m=+0.431951103 container died 484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 24 21:13:49 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-711d9d871f02ef23e219f6d15e670f8807db5070e1b1f3f0558323dbb5d684d9-merged.mount: Deactivated successfully.
Nov 24 21:13:49 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:49 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:49 compute-0 podman[324947]: 2025-11-24 21:13:49.851985923 +0000 UTC m=+1.095028448 container remove 484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 21:13:49 compute-0 systemd[1]: libpod-conmon-484378b8d0e013da64635a1e2d3662ef11af77d1be2c21392adc63dfbae3154b.scope: Deactivated successfully.
Nov 24 21:13:49 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:49.950+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:49 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:49 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:50.046+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:50 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:50 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:50 compute-0 podman[324990]: 2025-11-24 21:13:50.094146455 +0000 UTC m=+0.059505053 container create b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:50 compute-0 podman[324990]: 2025-11-24 21:13:50.058627248 +0000 UTC m=+0.023985846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:13:50 compute-0 systemd[1]: Started libpod-conmon-b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1.scope.
Nov 24 21:13:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f524acd8c356529c46912d8729339022ae9f4db3cbc2d7cbdb0c078856e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f524acd8c356529c46912d8729339022ae9f4db3cbc2d7cbdb0c078856e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f524acd8c356529c46912d8729339022ae9f4db3cbc2d7cbdb0c078856e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f524acd8c356529c46912d8729339022ae9f4db3cbc2d7cbdb0c078856e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f524acd8c356529c46912d8729339022ae9f4db3cbc2d7cbdb0c078856e1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:50 compute-0 podman[324990]: 2025-11-24 21:13:50.239655024 +0000 UTC m=+0.205013632 container init b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 24 21:13:50 compute-0 podman[324990]: 2025-11-24 21:13:50.248974214 +0000 UTC m=+0.214332802 container start b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 21:13:50 compute-0 podman[324990]: 2025-11-24 21:13:50.253550138 +0000 UTC m=+0.218908746 container attach b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:13:50 compute-0 ceph-mon[75677]: pgmap v2781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:50 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:50 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:50.957+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:50 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:50 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:50 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:50.997+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:51 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:51 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:51 compute-0 silly_wiles[325007]: --> passed data devices: 0 physical, 3 LVM
Nov 24 21:13:51 compute-0 silly_wiles[325007]: --> relative data size: 1.0
Nov 24 21:13:51 compute-0 silly_wiles[325007]: --> All data devices are unavailable
Nov 24 21:13:51 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:51 compute-0 systemd[1]: libpod-b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1.scope: Deactivated successfully.
Nov 24 21:13:51 compute-0 systemd[1]: libpod-b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1.scope: Consumed 1.055s CPU time.
Nov 24 21:13:51 compute-0 podman[324990]: 2025-11-24 21:13:51.36211061 +0000 UTC m=+1.327469208 container died b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 21:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb2f524acd8c356529c46912d8729339022ae9f4db3cbc2d7cbdb0c078856e1f-merged.mount: Deactivated successfully.
Nov 24 21:13:51 compute-0 podman[324990]: 2025-11-24 21:13:51.421407988 +0000 UTC m=+1.386766576 container remove b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:13:51 compute-0 systemd[1]: libpod-conmon-b80f351ae948a763e26b6d0565edb941dcc78ab0b26bc5ec2bc8facf92a3b6b1.scope: Deactivated successfully.
Nov 24 21:13:51 compute-0 sudo[324881]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:51 compute-0 sudo[325050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:51 compute-0 sudo[325050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:51 compute-0 sudo[325050]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:51 compute-0 sudo[325075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:13:51 compute-0 sudo[325075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:51 compute-0 sudo[325075]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:51 compute-0 sudo[325100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:51 compute-0 sudo[325100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:51 compute-0 sudo[325100]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:51 compute-0 sudo[325125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- lvm list --format json
Nov 24 21:13:51 compute-0 sudo[325125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:51 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:51 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:51 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:51 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:51 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:51.920+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:52.027+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:52 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:52 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.042717299 +0000 UTC m=+0.037783158 container create c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 24 21:13:52 compute-0 systemd[1]: Started libpod-conmon-c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9.scope.
Nov 24 21:13:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.119432615 +0000 UTC m=+0.114498494 container init c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.026966695 +0000 UTC m=+0.022032584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.125503119 +0000 UTC m=+0.120568978 container start c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.129013703 +0000 UTC m=+0.124079562 container attach c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:13:52 compute-0 wonderful_tesla[325204]: 167 167
Nov 24 21:13:52 compute-0 systemd[1]: libpod-c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9.scope: Deactivated successfully.
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.130863213 +0000 UTC m=+0.125929072 container died c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-af92f6973d09f340c1bddef3213c215a324626b4bfcd7c283af0d9d891db1fa2-merged.mount: Deactivated successfully.
Nov 24 21:13:52 compute-0 podman[325188]: 2025-11-24 21:13:52.162173206 +0000 UTC m=+0.157239065 container remove c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:52 compute-0 systemd[1]: libpod-conmon-c373facf0cb43c49df3011e27847f5603f6ef585197724c5d9d714690fe459a9.scope: Deactivated successfully.
Nov 24 21:13:52 compute-0 podman[325228]: 2025-11-24 21:13:52.349621264 +0000 UTC m=+0.066520553 container create 0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 24 21:13:52 compute-0 systemd[1]: Started libpod-conmon-0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae.scope.
Nov 24 21:13:52 compute-0 podman[325228]: 2025-11-24 21:13:52.320557942 +0000 UTC m=+0.037457241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:13:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea1edb53199233ae9e9492de229eb6ff56f9ffadd1f3bebabebf9eb41aad2c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea1edb53199233ae9e9492de229eb6ff56f9ffadd1f3bebabebf9eb41aad2c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea1edb53199233ae9e9492de229eb6ff56f9ffadd1f3bebabebf9eb41aad2c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ea1edb53199233ae9e9492de229eb6ff56f9ffadd1f3bebabebf9eb41aad2c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:52 compute-0 podman[325228]: 2025-11-24 21:13:52.438833797 +0000 UTC m=+0.155733096 container init 0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yonath, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 21:13:52 compute-0 podman[325228]: 2025-11-24 21:13:52.450264594 +0000 UTC m=+0.167163843 container start 0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 21:13:52 compute-0 podman[325228]: 2025-11-24 21:13:52.453481541 +0000 UTC m=+0.170380790 container attach 0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yonath, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 21:13:52 compute-0 ceph-mon[75677]: pgmap v2782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:52 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:52 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:52 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:52 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:52 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:52.912+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:53.022+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:53 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:53 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:53 compute-0 strange_yonath[325245]: {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:     "0": [
Nov 24 21:13:53 compute-0 strange_yonath[325245]:         {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "devices": [
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "/dev/loop3"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             ],
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_name": "ceph_lv0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_size": "21470642176",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "name": "ceph_lv0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "tags": {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.block_uuid": "cb5Aqo-uest-YNpK-4Kuo-XoUm-s2gR-QyJsFG",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cluster_name": "ceph",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.crush_device_class": "",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.encrypted": "0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osd_fsid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osd_id": "0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.type": "block",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.vdo": "0"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             },
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "type": "block",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "vg_name": "ceph_vg0"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:         }
Nov 24 21:13:53 compute-0 strange_yonath[325245]:     ],
Nov 24 21:13:53 compute-0 strange_yonath[325245]:     "1": [
Nov 24 21:13:53 compute-0 strange_yonath[325245]:         {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "devices": [
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "/dev/loop4"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             ],
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_name": "ceph_lv1",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_size": "21470642176",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=722822cb-bac5-4aa4-891b-811a5e4def90,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "name": "ceph_lv1",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "tags": {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.block_uuid": "B5L8rd-8oXy-YPy1-iWWY-5rvJ-TNMS-zQNjej",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cluster_name": "ceph",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.crush_device_class": "",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.encrypted": "0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osd_fsid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osd_id": "1",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.type": "block",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.vdo": "0"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             },
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "type": "block",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "vg_name": "ceph_vg1"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:         }
Nov 24 21:13:53 compute-0 strange_yonath[325245]:     ],
Nov 24 21:13:53 compute-0 strange_yonath[325245]:     "2": [
Nov 24 21:13:53 compute-0 strange_yonath[325245]:         {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "devices": [
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "/dev/loop5"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             ],
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_name": "ceph_lv2",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_size": "21470642176",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=05e060a3-406b-57f0-89d2-ec35f5b09305,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=720ccdfc-a888-49fd-ae51-8ab3d2ba9302,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "lv_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "name": "ceph_lv2",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "tags": {
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.block_uuid": "Wb85BY-gM63-WWmg-uwQw-JhAN-nDvb-uSQZ1n",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cluster_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.cluster_name": "ceph",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.crush_device_class": "",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.encrypted": "0",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osd_fsid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osd_id": "2",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.type": "block",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:                 "ceph.vdo": "0"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             },
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "type": "block",
Nov 24 21:13:53 compute-0 strange_yonath[325245]:             "vg_name": "ceph_vg2"
Nov 24 21:13:53 compute-0 strange_yonath[325245]:         }
Nov 24 21:13:53 compute-0 strange_yonath[325245]:     ]
Nov 24 21:13:53 compute-0 strange_yonath[325245]: }
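The JSON document that closes above, emitted by the `strange_yonath` helper container, has the shape of `ceph-volume lvm list --format json` output: each top-level key is an OSD id, and each entry carries the backing logical volume (`lv_path`), its volume group (`vg_name`), and the `ceph.*` LVM tags that tie the LV to the cluster fsid and OSD fsid. A minimal parsing sketch, assuming the full JSON text has been reassembled from the log into a string `lvm_list_json` (a hypothetical variable, not shown here):

    import json

    def osd_to_lv(lvm_list_json: str) -> dict:
        """Map OSD id -> (lv_path, osd_fsid) from a ceph-volume lvm list JSON dump."""
        data = json.loads(lvm_list_json)
        return {
            osd_id: (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
            for osd_id, lvs in data.items()
            for lv in lvs
            if lv.get("type") == "block"
        }

    # Against the dump logged above this yields:
    # {"1": ("/dev/ceph_vg1/ceph_lv1", "722822cb-bac5-4aa4-891b-811a5e4def90"),
    #  "2": ("/dev/ceph_vg2/ceph_lv2", "720ccdfc-a888-49fd-ae51-8ab3d2ba9302")}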
Nov 24 21:13:53 compute-0 systemd[1]: libpod-0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae.scope: Deactivated successfully.
Nov 24 21:13:53 compute-0 podman[325228]: 2025-11-24 21:13:53.246450705 +0000 UTC m=+0.963349964 container died 0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yonath, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:53 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:53 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ea1edb53199233ae9e9492de229eb6ff56f9ffadd1f3bebabebf9eb41aad2c7-merged.mount: Deactivated successfully.
Nov 24 21:13:53 compute-0 podman[325228]: 2025-11-24 21:13:53.294293873 +0000 UTC m=+1.011193122 container remove 0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:53 compute-0 systemd[1]: libpod-conmon-0ced798c45d55b3b8574492fd79ec12e56e462e54fe5ee16cdf49cb4845c20ae.scope: Deactivated successfully.
Nov 24 21:13:53 compute-0 sudo[325125]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:53 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:53 compute-0 sudo[325267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:53 compute-0 sudo[325267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:53 compute-0 sudo[325267]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:53 compute-0 sudo[325292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 21:13:53 compute-0 sudo[325292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:53 compute-0 sudo[325292]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:53 compute-0 sudo[325317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:53 compute-0 sudo[325317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:53 compute-0 sudo[325317]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:53 compute-0 sudo[325342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/05e060a3-406b-57f0-89d2-ec35f5b09305/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 05e060a3-406b-57f0-89d2-ec35f5b09305 -- raw list --format json
Nov 24 21:13:53 compute-0 sudo[325342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
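The sudo line above shows how the cephadm mgr module drives `ceph-volume` on this host: it ships a copy of the cephadm binary to /var/lib/ceph/<fsid>/cephadm.<digest>, then runs it with a pinned container image so that `ceph-volume raw list` executes inside a short-lived container (the `sad_rosalind` create/start/die/remove events that follow). A hand-run equivalent for debugging, mirroring the logged invocation minus its `--timeout` (a sketch only; it assumes cephadm is on PATH and is run as root, and is not the mgr's actual code path):

    import json, subprocess

    FSID = "05e060a3-406b-57f0-89d2-ec35f5b09305"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Same inventory call as logged above, run manually.
    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))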
Nov 24 21:13:53 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:53 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:53 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4952 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
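The SLOW_OPS warnings repeating through this whole window (48 slow ops, oldest blocked ~4952 s, on osd.0 and osd.1) are the mon's periodic health check, fed by each OSD's get_health_metrics lines. A small sketch of watching that check programmatically rather than grepping the journal, assuming a working `ceph` CLI and admin keyring on the host:

    import json, subprocess

    # `ceph health detail --format json` returns the active health checks;
    # SLOW_OPS carries the same summary the mon logs above.
    out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    slow = json.loads(out).get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])
        # e.g. "48 slow ops, oldest one blocked for 4952 sec, daemons [osd.0,osd.1] have slow ops."
    # For per-op detail, the OSD admin socket can be queried on the OSD host:
    #   ceph daemon osd.1 dump_blocked_ops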
Nov 24 21:13:53 compute-0 podman[325406]: 2025-11-24 21:13:53.949287232 +0000 UTC m=+0.045822364 container create 35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:13:53 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:53 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:53 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:53.961+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:53 compute-0 systemd[1]: Started libpod-conmon-35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2.scope.
Nov 24 21:13:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:13:54 compute-0 podman[325406]: 2025-11-24 21:13:53.931223826 +0000 UTC m=+0.027758978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:13:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:54.030+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:54 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:54 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:54 compute-0 podman[325406]: 2025-11-24 21:13:54.034162608 +0000 UTC m=+0.130697740 container init 35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lehmann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:13:54 compute-0 podman[325406]: 2025-11-24 21:13:54.04241811 +0000 UTC m=+0.138953242 container start 35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lehmann, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 21:13:54 compute-0 podman[325406]: 2025-11-24 21:13:54.045884443 +0000 UTC m=+0.142419615 container attach 35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 21:13:54 compute-0 vibrant_lehmann[325422]: 167 167
Nov 24 21:13:54 compute-0 systemd[1]: libpod-35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2.scope: Deactivated successfully.
Nov 24 21:13:54 compute-0 podman[325406]: 2025-11-24 21:13:54.049919162 +0000 UTC m=+0.146454304 container died 35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 21:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-431005e90bc3f8e5adfc1f109ff8c318a0acec64cee21bbf68a162f6fb96f2db-merged.mount: Deactivated successfully.
Nov 24 21:13:54 compute-0 podman[325406]: 2025-11-24 21:13:54.087513014 +0000 UTC m=+0.184048156 container remove 35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lehmann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 21:13:54 compute-0 systemd[1]: libpod-conmon-35b42669b8c6b84dcf0ff2a06f6a25fa570d2eb2f80c3e18c342885e87e1a9d2.scope: Deactivated successfully.
Nov 24 21:13:54 compute-0 podman[325446]: 2025-11-24 21:13:54.350339032 +0000 UTC m=+0.073457839 container create 976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 21:13:54 compute-0 systemd[1]: Started libpod-conmon-976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8.scope.
Nov 24 21:13:54 compute-0 podman[325446]: 2025-11-24 21:13:54.322577395 +0000 UTC m=+0.045696262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 21:13:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 21:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8e6582991b391e017aece75cd5518b615e659f3ba140b15454261ddf9a72b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8e6582991b391e017aece75cd5518b615e659f3ba140b15454261ddf9a72b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8e6582991b391e017aece75cd5518b615e659f3ba140b15454261ddf9a72b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8e6582991b391e017aece75cd5518b615e659f3ba140b15454261ddf9a72b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 21:13:54 compute-0 podman[325446]: 2025-11-24 21:13:54.452838833 +0000 UTC m=+0.175957670 container init 976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 21:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:13:54 compute-0 podman[325446]: 2025-11-24 21:13:54.475936415 +0000 UTC m=+0.199055232 container start 976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 24 21:13:54 compute-0 podman[325446]: 2025-11-24 21:13:54.482035479 +0000 UTC m=+0.205154496 container attach 976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 21:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:13:54 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:13:54 compute-0 ceph-mon[75677]: pgmap v2783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:54 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:54 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:54 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:54 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:54 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:54.922+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:55.039+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:55 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:55 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:55 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:55 compute-0 sad_rosalind[325463]: {
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:     "720ccdfc-a888-49fd-ae51-8ab3d2ba9302": {
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "osd_id": 2,
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "osd_uuid": "720ccdfc-a888-49fd-ae51-8ab3d2ba9302",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "type": "bluestore"
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:     },
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:     "722822cb-bac5-4aa4-891b-811a5e4def90": {
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "osd_id": 1,
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "osd_uuid": "722822cb-bac5-4aa4-891b-811a5e4def90",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "type": "bluestore"
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:     },
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:     "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e": {
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "ceph_fsid": "05e060a3-406b-57f0-89d2-ec35f5b09305",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "osd_id": 0,
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "osd_uuid": "ca6a1aee-cc3b-4db7-afdb-fc68ddc6b99e",
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:         "type": "bluestore"
Nov 24 21:13:55 compute-0 sad_rosalind[325463]:     }
Nov 24 21:13:55 compute-0 sad_rosalind[325463]: }
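This second JSON document is the `ceph-volume raw list --format json` result the cephadm call above requested: unlike the lvm listing, it is keyed by OSD fsid rather than OSD id, reports the device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) instead of the LV path, and also covers osd.0 on ceph_vg0, which the earlier excerpt did not. A sketch cross-checking the two inventories, reusing `osd_to_lv` from the earlier sketch and assuming this dump has likewise been captured into a hypothetical `raw_list_json` string:

    import json, os

    def raw_by_osd_id(raw_list_json: str) -> dict:
        """Map OSD id -> device-mapper path from ceph-volume raw list JSON."""
        return {str(v["osd_id"]): v["device"]
                for v in json.loads(raw_list_json).values()}

    # Consistency check: /dev/ceph_vgN/ceph_lvN and /dev/mapper/ceph_vgN-ceph_lvN
    # should resolve to the same block device on the host.
    raw = raw_by_osd_id(raw_list_json)
    for osd_id, (lv_path, _) in osd_to_lv(lvm_list_json).items():
        assert os.path.realpath(lv_path) == os.path.realpath(raw[osd_id])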
Nov 24 21:13:55 compute-0 systemd[1]: libpod-976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8.scope: Deactivated successfully.
Nov 24 21:13:55 compute-0 systemd[1]: libpod-976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8.scope: Consumed 1.014s CPU time.
Nov 24 21:13:55 compute-0 podman[325446]: 2025-11-24 21:13:55.482393898 +0000 UTC m=+1.205512755 container died 976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 21:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d8e6582991b391e017aece75cd5518b615e659f3ba140b15454261ddf9a72b1-merged.mount: Deactivated successfully.
Nov 24 21:13:55 compute-0 podman[325446]: 2025-11-24 21:13:55.581429186 +0000 UTC m=+1.304548003 container remove 976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 21:13:55 compute-0 systemd[1]: libpod-conmon-976ed3fa84cdb117229739b64c9dd059e702ff6bdc3f72cec38f2208a3bdfcc8.scope: Deactivated successfully.
Nov 24 21:13:55 compute-0 sudo[325342]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 21:13:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:13:55 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 21:13:55 compute-0 ceph-mon[75677]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:13:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev bce0211e-c39f-4970-b03a-5031240101a9 does not exist
Nov 24 21:13:55 compute-0 ceph-mgr[75975]: [progress WARNING root] complete: ev a5d786a1-51c1-4972-a6a1-1d7703b8a8ff does not exist
Nov 24 21:13:55 compute-0 sudo[325507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 21:13:55 compute-0 sudo[325507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:55 compute-0 sudo[325507]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:55 compute-0 sudo[325532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 21:13:55 compute-0 sudo[325532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 21:13:55 compute-0 sudo[325532]: pam_unix(sudo:session): session closed for user root
Nov 24 21:13:55 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:55 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:13:55 compute-0 ceph-mon[75677]: from='mgr.14130 192.168.122.100:0/676582025' entity='mgr.compute-0.ofslrn' 
Nov 24 21:13:55 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:55 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:55 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:55.929+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:55 compute-0 sshd-session[325557]: Accepted publickey for zuul from 192.168.122.10 port 44010 ssh2: ECDSA SHA256:HU4D9qA6zqoi7sVZArLPItobKj721wEr4U4wW0I9h3k
Nov 24 21:13:55 compute-0 systemd-logind[795]: New session 55 of user zuul.
Nov 24 21:13:55 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 24 21:13:56 compute-0 sshd-session[325557]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 21:13:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:56.056+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:56 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:56 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:56 compute-0 sudo[325561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 21:13:56 compute-0 sudo[325561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 21:13:56 compute-0 ceph-mon[75677]: pgmap v2784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:56 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:56 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:56 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:56 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:56 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:56.919+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:57.058+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:57 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:57 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:57 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:57 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:57 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:57 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:57 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:57 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:57.954+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:58.088+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:58 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:58 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:58 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4957 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:58 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:13:58 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:13:58 compute-0 ceph-mon[75677]: pgmap v2785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:58 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:58 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:58 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4957 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:13:58 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:58 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:58 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:58.977+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:13:59.125+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:59 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:13:59 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:59 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15309 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:13:59 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:13:59 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 21:13:59 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/775195233' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 21:13:59 compute-0 ceph-mon[75677]: from='client.15307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:13:59 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:59 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:13:59 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/775195233' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 21:13:59 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:13:59 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:13:59 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:13:59.928+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:00.083+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:00 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:00 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:00 compute-0 ceph-mon[75677]: from='client.15309 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:00 compute-0 ceph-mon[75677]: pgmap v2786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:00 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:00 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:00 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:00 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:00 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:00.938+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:01.087+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:01 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:01 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:01 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:01 compute-0 podman[325818]: 2025-11-24 21:14:01.872127281 +0000 UTC m=+0.085897703 container health_status 9704f833c77abefb4042fa78ed7d5b45f5e3820001ebceed5ace4f826271a69c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
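The health_status event above is podman's periodic healthcheck for the ovn_metadata_agent container; the embedded config_data shows the check is simply the `/openstack/healthcheck` script mounted into the container, currently reporting health_status=healthy with health_failing_streak=0. The same state can be read back from the container's inspect data, e.g. (a sketch, assuming podman access to the container):

    import json, subprocess

    out = subprocess.run(["podman", "inspect", "ovn_metadata_agent"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    # For containers with a healthcheck, State.Health tracks the same status
    # podman logs above.
    print(state["Health"]["Status"], state["Health"]["FailingStreak"])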
Nov 24 21:14:01 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:01.898+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:01 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:01 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:01 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:01 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:02.100+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:02 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:02 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:02 compute-0 ovs-vsctl[325864]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 21:14:02 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:02 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:02.899+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:02 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:02 compute-0 ceph-mon[75677]: pgmap v2787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:02 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:02 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:03.133+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:03 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:03 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:03 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:03 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:14:03 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:03 compute-0 virtqemud[256794]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 21:14:03 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:03.892+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:03 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:03 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:03 compute-0 virtqemud[256794]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 21:14:03 compute-0 virtqemud[256794]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 21:14:03 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:03 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:03 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4962 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:03 compute-0 ceph-mon[75677]: pgmap v2788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:04.134+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:04 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:04 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:04 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: cache status {prefix=cache status} (starting...)
Nov 24 21:14:04 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: client ls {prefix=client ls} (starting...)
Nov 24 21:14:04 compute-0 lvm[326197]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 21:14:04 compute-0 lvm[326197]: VG ceph_vg2 finished
Nov 24 21:14:04 compute-0 lvm[326211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 21:14:04 compute-0 lvm[326211]: VG ceph_vg0 finished
Nov 24 21:14:04 compute-0 lvm[326232]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 21:14:04 compute-0 lvm[326232]: VG ceph_vg1 finished
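These lvm[] lines are event-based autoactivation: as each loop device's PV is observed, pvscan marks the owning VG complete and activates it, which is what keeps /dev/ceph_vgN/ceph_lvN available to the OSDs. A quick programmatic confirmation that all three ceph VGs are complete, using LVM's JSON reporting (a sketch, assuming the lvm2 tools are installed and the report layout below):

    import json, subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        capture_output=True, text=True, check=True).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        if vg["vg_name"].startswith("ceph_vg"):
            print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])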
Nov 24 21:14:04 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:04.900+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:04 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:04 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:04 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:04 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:05 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15313 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:05 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:05 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:05.169+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:05 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:05 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 21:14:05 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 21:14:05 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15315 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:05 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 21:14:05 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 21:14:05 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 21:14:05 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:05.921+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:05 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:05 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:06 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 21:14:06 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:06 compute-0 ceph-mon[75677]: from='client.15313 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:06 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:06 compute-0 ceph-mon[75677]: pgmap v2789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:06 compute-0 ceph-mon[75677]: from='client.15315 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:06 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:06 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:06.128+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 24 21:14:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244092732' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 21:14:06 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15321 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:06 compute-0 ceph-mgr[75975]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 21:14:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T21:14:06.228+0000 7f85d4b75640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 21:14:06 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 21:14:06 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 21:14:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 21:14:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592073749' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:14:06 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: ops {prefix=ops} (starting...)
Nov 24 21:14:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 24 21:14:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701215445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 21:14:06 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 24 21:14:06 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229230824' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 21:14:06 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:06.968+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:06 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:06 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:07 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:07 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3244092732' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mon[75677]: from='client.15321 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/592073749' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/701215445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1229230824' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 21:14:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510167536' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:07.154+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:07 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:07 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:07 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: session ls {prefix=session ls} (starting...)
Nov 24 21:14:07 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 24 21:14:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723475747' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mds[102499]: mds.cephfs.compute-0.jkqrlp asok_command: status {prefix=status} (starting...)
Nov 24 21:14:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 21:14:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638148793' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15335 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:07 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 21:14:07 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045774107' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 21:14:07 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:07.997+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:07 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:07 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:08 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15339 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3510167536' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:08 compute-0 ceph-mon[75677]: pgmap v2790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2723475747' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3638148793' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: from='client.15335 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4045774107' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:08.180+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:08 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:08 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:08 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:14:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 21:14:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411451659' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 24 21:14:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741701477' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 21:14:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661666059' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 21:14:08 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:08.961+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:08 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:08 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:08 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 24 21:14:08 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600229888' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:09 compute-0 ceph-mon[75677]: from='client.15339 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:09 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4967 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/411451659' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2741701477' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/661666059' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/600229888' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:09.185+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:09 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:09 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 21:14:09 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599140798' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:09 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15351 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T21:14:09.410+0000 7f85d4b75640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 21:14:09 compute-0 ceph-mgr[75975]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 21:14:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:14:09.435 165944 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 21:14:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:14:09.436 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 21:14:09 compute-0 ovn_metadata_agent[165939]: 2025-11-24 21:14:09.436 165944 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 21:14:09 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15353 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:09 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 24 21:14:09 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259690040' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 21:14:09 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:09.981+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:09 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:09 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:10 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15357 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:10 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:10 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:10 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2599140798' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 21:14:10 compute-0 ceph-mon[75677]: pgmap v2791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:10 compute-0 ceph-mon[75677]: from='client.15351 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:10 compute-0 ceph-mon[75677]: from='client.15353 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:10 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3259690040' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 21:14:10 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:10.163+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:10 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:10 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 24 21:14:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373470295' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 21:14:10 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15361 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:10 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15365 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:10 compute-0 podman[327156]: 2025-11-24 21:14:10.842467839 +0000 UTC m=+0.064283513 container health_status 088b3a7a6268400f9c192563cebfdd716966f6a634458d589059006101436edf (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 21:14:10 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 21:14:10 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98163372' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:18.282853+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:19.283313+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:20.283673+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:21.283964+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:22.284232+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:23.284495+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:24.284754+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:25.285002+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:26.285256+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:27.285686+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:28.285929+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:29.286234+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:30.286439+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:31.286643+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:32.286855+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:33.287072+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:34.287271+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:35.287441+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:36.287698+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:37.287908+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:38.288063+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:39.288244+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:40.288413+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:41.288639+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:42.288832+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:43.288997+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:44.289179+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:45.289337+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:46.289482+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:47.289734+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:48.289897+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:49.290094+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:50.290262+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:51.290445+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:52.290683+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:53.290905+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:54.291083+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:55.291274+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:56.291498+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:57.291692+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:58.291901+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:40:59.292106+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:00.292279+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:01.292506+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:02.292725+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:03.292884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:04.293047+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:05.293217+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:06.293395+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:07.293629+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:08.293783+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:09.293979+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:10.294134+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:11.294306+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:12.294449+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:13.294677+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:14.294883+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:15.295141+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:16.295284+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:17.295445+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:18.295608+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:19.295761+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:20.295970+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:21.296172+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:22.296329+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:23.296454+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:24.296565+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:25.296692+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 18505728 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:26.296838+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:27.297008+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:28.297179+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:29.297358+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:30.297549+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:31.297770+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:32.297921+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:33.298078+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:34.298226+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:35.298388+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:36.298624+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:37.298803+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:38.298951+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:39.299155+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:40.299324+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:41.299538+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:42.299700+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:43.299913+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:44.300078+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:45.300193+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 18489344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:46.300350+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:47.300526+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:48.300672+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:49.300878+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:50.301028+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:51.301175+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:52.301396+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:53.301533+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:54.301661+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:55.301897+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:56.302189+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:57.302380+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:58.302530+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:41:59.302698+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:00.302890+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:01.303122+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:02.303349+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:03.303575+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:04.303769+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:05.304011+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 18472960 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:06.304519+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:07.304736+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186008 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:08.304889+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:09.305091+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:10.305259+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 18456576 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:11.305520+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a045/0x137f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34f96400
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 144.037261963s of 144.077178955s, submitted: 30
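
The kv-sync thread reports how much of its measurement window it spent idle; inverting that gives the commit duty cycle. For the line above, roughly 40 ms of work over 144 s for 30 submitted transactions:

    idle, window, submitted = 144.037261963, 144.077178955, 30
    busy = window - idle
    print(f"{busy * 1000:.1f} ms busy / {window:.0f} s window "
          f"= {100 * busy / window:.3f}% utilization, "
          f"{busy / submitted * 1000:.2f} ms per commit")
    # -> 39.9 ms busy / 144 s window = 0.028% utilization, 1.33 ms per commit
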
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 18448384 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:12.305748+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188663 data_alloc: 218103808 data_used: 507904
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa6fe000/0x0/0x4ffc00000, data 0x128a055/0x1380000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [0,0,0,1])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
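
The two handle_osd_map lines above read as follows: the monitor sent a message carrying epochs [170,170]; the first delivery found the OSD at epoch 169, so it applied 170, and the second found it already at 170 with nothing new to apply. A toy version of that range check (the real OSD also requests older maps it is missing, which this sketch omits):

    def epochs_to_apply(msg_first, msg_last, have):
        """Epochs from an incoming map message that the daemon still needs.
        Simplified model of the catch-up implied by the handle_osd_map lines."""
        start = max(msg_first, have + 1)
        return list(range(start, msg_last + 1))

    print(epochs_to_apply(170, 170, 169))  # [170]: applies the new map
    print(epochs_to_apply(170, 170, 170))  # []: duplicate delivery, nothing to do
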
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 18448384 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 170 ms_handle_reset con 0x557d34f96400 session 0x557d3302e5a0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:13.305976+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d326c4800
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 170 ms_handle_reset con 0x557d326c4800 session 0x557d34b39c20
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34f9ac00
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 170 ms_handle_reset con 0x557d34f9ac00 session 0x557d34af6000
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 18448384 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:14.306234+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35056800
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 18448384 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:15.306480+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 171 ms_handle_reset con 0x557d35056800 session 0x557d3581b860
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 18448384 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa6f7000/0x0/0x4ffc00000, data 0x128d7fb/0x1386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:16.306756+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa6f7000/0x0/0x4ffc00000, data 0x128d7fb/0x1386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 18448384 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:17.307017+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195891 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d3441ac00
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa6f7000/0x0/0x4ffc00000, data 0x128d7fb/0x1386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 18440192 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:18.307277+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 18440192 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 172 ms_handle_reset con 0x557d3441ac00 session 0x557d32b72b40
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:19.307732+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34a59c00
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 18432000 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:20.307929+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 173 ms_handle_reset con 0x557d34a59c00 session 0x557d33cd12c0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:21.308091+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:22.308236+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200694 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:23.308617+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa6f2000/0x0/0x4ffc00000, data 0x1291039/0x138b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa6f2000/0x0/0x4ffc00000, data 0x1291039/0x138b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:24.308940+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:25.309162+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:26.309319+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:27.309576+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.417144775s of 15.964744568s, submitted: 71
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 18423808 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:28.309855+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 18415616 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:29.310135+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 18415616 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:30.310360+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 18415616 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:31.310564+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 18415616 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:32.310763+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 18415616 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:33.310923+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:34.311129+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:35.311311+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:36.311521+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:37.311698+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:38.311872+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:39.312089+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:40.312251+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:41.312446+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:42.312617+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:43.312824+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:44.313032+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:45.313256+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:46.313461+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:47.313647+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:48.313792+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:49.314011+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:50.314180+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:51.314341+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:52.314455+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:53.314666+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:54.314853+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:55.315015+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:56.315085+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:57.315170+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:58.315321+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:42:59.315511+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:00.315746+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:01.315897+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:02.316302+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:03.316738+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:04.316875+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:05.317022+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:06.317191+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:07.317368+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:08.317494+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:09.317715+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:10.317886+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:11.318029+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:12.318179+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:13.318376+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:14.318559+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:15.318688+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:16.318867+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:17.319052+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:18.319243+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:19.319479+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:20.319694+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:21.319907+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa6ef000/0x0/0x4ffc00000, data 0x1292af2/0x138e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:22.320031+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203668 data_alloc: 218103808 data_used: 532480
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:23.320188+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 18399232 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:24.320463+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34f9cc00
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.523071289s of 57.535762787s, submitted: 13
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 18391040 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:25.320627+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 175 ms_handle_reset con 0x557d34f9cc00 session 0x557d326fa000
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 18317312 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:26.320842+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 18317312 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:27.320989+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa6ea000/0x0/0x4ffc00000, data 0x1294ae8/0x1393000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212140 data_alloc: 218103808 data_used: 540672
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 18317312 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:28.321101+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35080400
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89153536 unmapped: 26697728 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:29.321289+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 26845184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:30.321468+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 175 ms_handle_reset con 0x557d35080400 session 0x557d34b112c0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 26845184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:31.321655+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34a53c00
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 177 ms_handle_reset con 0x557d34a53c00 session 0x557d34b8a1e0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89096192 unmapped: 26755072 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:32.321830+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34f9b000
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 178 ms_handle_reset con 0x557d34f9b000 session 0x557d34b2af00
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228106 data_alloc: 218103808 data_used: 548864
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 26722304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:33.321996+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa2d0000/0x0/0x4ffc00000, data 0x1299ae6/0x139a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 26722304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:34.322120+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 26722304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:35.322295+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa2d0000/0x0/0x4ffc00000, data 0x1299ae6/0x139a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 26689536 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:36.322479+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 26689536 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:37.322723+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d357ef000
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.091391563s of 12.559797287s, submitted: 105
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235012 data_alloc: 218103808 data_used: 548864
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89260032 unmapped: 26591232 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:38.322857+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 180 ms_handle_reset con 0x557d357ef000 session 0x557d34af7860
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 26566656 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:39.323000+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 26566656 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:40.323138+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x129d1c8/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:41.323293+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 26533888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x129d1c8/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:42.323522+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 26533888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240194 data_alloc: 218103808 data_used: 548864
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:43.323707+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 26533888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d350ed400
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:44.323884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 26501120 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fa2cb000/0x0/0x4ffc00000, data 0x129d1c8/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 180 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:45.324048+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 26492928 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x129edcc/0x13a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [0,0,0,1])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 181 ms_handle_reset con 0x557d350ed400 session 0x557d358c2b40
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:46.324233+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x129ed99/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:47.324396+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239669 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:48.324559+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:49.324793+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:50.324971+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x129ed99/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:51.325131+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x129ed99/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:52.325295+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 26484736 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.829755783s of 14.950282097s, submitted: 93
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x129ed99/0x13a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:53.325524+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 26468352 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:54.325707+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 26468352 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:55.325827+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 26468352 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:56.325982+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 26468352 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:57.326140+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89382912 unmapped: 26468352 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:58.326316+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:43:59.326532+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:00.326931+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:01.327221+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:02.327379+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:03.328018+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:04.328245+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:05.329007+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:06.329404+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:07.330093+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:08.330668+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:09.331245+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:10.331632+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:11.331792+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:12.331980+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:13.332146+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:14.332369+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:15.332654+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:16.332804+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:17.332952+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:18.333134+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:19.333337+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:20.333577+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:21.333778+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:22.333980+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:23.334294+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:24.334565+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:25.334854+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:26.335017+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:27.335229+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:28.335468+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:29.335753+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:30.335997+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:31.336187+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:32.336357+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:33.336514+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:34.336664+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:35.336795+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:36.336948+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:37.337126+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:38.337295+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:39.337510+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:40.337668+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:41.337830+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:42.338014+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:43.338202+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:44.338411+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:45.338700+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:46.338845+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:47.339063+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:48.339228+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:49.339414+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:50.339562+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:51.339687+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:52.339829+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:53.340017+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:54.340202+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:55.340354+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:56.340539+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:57.340834+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:58.341018+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:44:59.341204+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:00.341413+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:01.341631+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:02.341885+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:03.342051+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:04.342241+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:05.342425+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:06.342638+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:07.342826+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:08.343023+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:09.343262+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:10.343451+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:11.343697+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:12.343845+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 26460160 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:13.344012+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:14.344177+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:15.344439+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:16.344690+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:17.344890+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:18.345102+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:19.345363+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:20.345552+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:21.345762+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:22.345980+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:23.346168+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:24.346389+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:25.346547+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:26.346707+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:27.346884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:28.347038+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:29.347250+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:30.347438+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:31.347649+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:32.347829+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:33.348072+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:34.348295+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:35.348510+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:36.348694+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:37.348821+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:38.348943+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:39.349138+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:40.349318+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:41.349492+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:42.349666+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:43.349882+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:44.350053+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:45.350248+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:46.350393+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:47.350540+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:48.350729+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:49.350944+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:50.351176+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:51.351376+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:52.351662+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:53.351880+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:54.352080+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:55.352224+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:56.352389+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:57.352493+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:58.352638+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:45:59.352837+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:00.353013+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:01.353140+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:02.353302+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:03.353501+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:04.354156+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:05.354408+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:06.354551+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:07.354730+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:08.354895+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:09.355292+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:10.355635+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:11.356131+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:12.356354+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:13.356842+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:14.357091+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:15.357388+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:16.357665+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:17.357878+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:18.358107+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:19.358420+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:20.358692+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:21.358914+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:22.359073+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:23.359266+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:24.359427+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:25.359649+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:26.359823+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:27.360000+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:28.360268+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:29.360657+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:30.360816+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:31.360942+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:32.361144+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:33.361340+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:34.361498+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:35.361664+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:36.361912+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:37.362220+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:38.362418+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:39.362750+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:40.362918+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:41.363284+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:42.363545+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:43.364003+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:44.364207+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:45.364442+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:46.364654+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:47.364895+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:48.365255+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:49.365460+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:50.365740+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:51.365977+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:52.366728+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:53.366903+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:54.367065+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:55.367398+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:56.367614+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:57.367832+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:58.368134+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:46:59.368403+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:00.368742+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:01.368944+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:02.369104+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:03.369332+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:04.369552+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:05.369761+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:06.369953+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:07.370186+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:08.370383+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:09.370581+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:10.371900+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:11.372084+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:12.372263+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:13.372657+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:14.372853+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:15.373158+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:16.373763+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:17.374054+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:18.374386+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:19.374621+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:20.375255+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:21.375489+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:22.375691+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:23.376105+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:24.376415+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:25.376622+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:26.376784+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:27.376976+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:28.377217+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:29.377424+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:30.377562+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:31.377874+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:32.378205+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:33.378493+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:34.378688+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:35.378894+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:36.379041+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:37.379227+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:38.379401+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:39.379794+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:40.380004+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:41.380177+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:42.380374+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:43.380615+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:44.380906+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:45.381088+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:46.381307+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:47.381471+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:48.381701+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:49.381930+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:50.382123+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:51.382283+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 26451968 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:52.382400+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:53.382527+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:54.382683+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:55.382827+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:56.383011+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:57.383162+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:58.383361+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:47:59.383618+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:00.383779+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:01.383984+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:02.384141+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:03.384335+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:04.384499+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 7894 writes, 31K keys, 7894 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7894 writes, 1805 syncs, 4.37 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1063 writes, 3149 keys, 1063 commit groups, 1.0 writes per commit group, ingest: 1.58 MB, 0.00 MB/s
                                           Interval WAL: 1063 writes, 441 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:05.384642+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:06.384800+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:07.384983+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:08.385208+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:09.385436+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 26443776 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:10.385669+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: mgrc ms_handle_reset ms_handle_reset con 0x557d34a58400
Nov 24 21:14:10 compute-0 ceph-osd[90884]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/103018990
Nov 24 21:14:10 compute-0 ceph-osd[90884]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/103018990,v1:192.168.122.100:6801/103018990]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: get_auth_request con 0x557d34a59c00 auth_method 0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: mgrc handle_mgr_configure stats_period=5
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:11.385900+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:12.386074+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:13.386238+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:14.386404+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:15.386616+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:16.386828+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:17.387051+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:18.387268+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:19.387471+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:20.387652+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:21.387842+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:22.388083+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:23.388313+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:24.388558+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:25.388846+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:26.389104+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:27.389365+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:28.389641+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
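[editor's note] This is the only line in the window where the monclient actually flushes a message to its monitor, and the address shows Ceph's msgr2 wire protocol in use: v2:192.168.122.100:3300/0 reads as protocol-version:host:port/nonce, with 3300 the standard msgr2 port (the legacy v1 messenger listens on 6789). A small parser for that address form, assuming only the v2:host:port/nonce layout seen here:

    import re
    from typing import NamedTuple

    class CephAddr(NamedTuple):
        proto: str   # "v1" or "v2"
        host: str
        port: int
        nonce: int   # connection nonce (0 here)

    ADDR = re.compile(r"(v[12]):([0-9.]+):(\d+)/(\d+)")

    def parse_addr(text):
        m = ADDR.search(text)
        if m is None:
            return None
        proto, host, port, nonce = m.groups()
        return CephAddr(proto, host, int(port), int(nonce))

    print(parse_addr("mon.compute-0 at v2:192.168.122.100:3300/0"))
    # CephAddr(proto='v2', host='192.168.122.100', port=3300, nonce=0)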
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:29.389883+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:30.390120+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:31.390371+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:32.390562+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:33.390819+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:34.391015+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:35.391311+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:36.391492+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:37.391753+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:38.391970+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:39.392219+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:40.392436+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:41.392685+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:42.392884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:43.392995+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:44.393166+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:45.393396+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:46.393578+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:47.393833+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:48.393995+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:49.394208+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242451 data_alloc: 218103808 data_used: 557056
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:50.394419+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:51.394568+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c7000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89620480 unmapped: 26230784 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:52.394762+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.705291748s of 299.716674805s, submitted: 13
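[editor's note] Amid the periodic chatter, this is the one line with real signal about I/O load: over its ~300 s reporting window the BlueStore KV sync thread was idle for all but ~11 ms and committed 13 transactions, i.e. osd.2 is essentially quiescent. The arithmetic, using the figures copied from the line above:

    # Figures copied from the _kv_sync_thread line above.
    idle, window, submitted = 299.705291748, 299.716674805, 13

    busy = window - idle
    print(f"busy {busy * 1000:.2f} ms over {window:.0f} s "
          f"-> {100 * busy / window:.4f}% utilization, "
          f"{busy / submitted * 1000:.2f} ms per submitted txn")
    # busy 11.38 ms over 300 s -> 0.0038% utilization, 0.88 ms per txn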
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:53.394915+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:54.395060+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:55.395186+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:56.395311+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:57.395483+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:58.395652+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:48:59.395827+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:00.395967+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:01.396121+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:02.396281+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:03.396426+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:04.396579+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:05.396786+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:06.396950+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:07.397082+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:08.397251+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:09.397458+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 26214400 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:10.397638+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:11.397841+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:12.398016+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:13.398210+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:14.398329+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:15.398538+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:16.398683+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:17.398817+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:18.398966+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:19.399183+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:20.399336+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:21.399509+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:22.400696+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:23.400959+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:24.401358+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:25.401627+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:26.402093+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:27.402847+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:28.403152+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:29.403473+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:30.403691+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:31.403875+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:32.404006+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:33.404251+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:34.404662+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:35.404964+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:36.405293+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:37.405719+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:38.405890+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:39.406456+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:40.406717+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:41.406931+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:42.407123+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:43.407468+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:44.407747+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:45.407995+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:46.408142+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:47.408305+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:48.408510+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:49.408789+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:50.409053+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:51.409307+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:52.409551+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:53.409786+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:54.409969+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:55.411298+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:56.411790+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:57.412519+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:58.412884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:49:59.413712+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:00.414458+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:01.414825+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:02.415080+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:03.415334+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:04.415703+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:05.416167+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:06.416456+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:07.416723+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:08.416971+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:09.417141+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:10.417356+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:11.417519+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:12.417708+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:13.417844+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:14.418154+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:15.418329+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:16.418547+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:17.418779+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:18.418978+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:19.419223+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:20.419404+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:21.419622+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:22.419823+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:23.420013+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:24.420235+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:25.420392+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:26.420574+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:27.421822+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:28.422000+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:29.422347+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 26206208 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:30.422661+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:31.422978+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:32.423238+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:33.423500+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:34.423700+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:35.423944+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:36.424128+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:37.424317+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:38.424565+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:39.424906+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:40.425171+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:41.425390+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:42.425576+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:43.425799+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:44.426030+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:45.426226+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:46.426428+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:47.426660+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:48.426819+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:49.427045+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:50.427298+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:51.427741+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:52.428008+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:53.428236+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:54.428529+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:55.428756+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:56.429072+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:57.429302+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:58.429493+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:50:59.430058+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:00.430720+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 26198016 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:01.431338+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:02.431785+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:03.432111+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:04.432379+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:05.432830+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:06.433059+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:07.433390+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:08.433779+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:09.434172+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:10.434395+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:11.434645+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:12.434912+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:13.435319+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:14.435545+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:15.435772+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:16.435997+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:17.436167+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:18.436486+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:19.436721+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:20.436919+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:21.437223+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:22.437372+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:23.437549+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:24.437723+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:25.437971+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:26.438172+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:27.438379+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:28.438540+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:29.438808+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:30.439051+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:31.439236+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:32.439417+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:33.439637+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:34.439827+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:35.440036+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:36.440281+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:37.440476+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:38.440663+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:39.440875+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89661440 unmapped: 26189824 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:40.441041+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:41.441195+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:42.441366+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:43.441522+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:44.441734+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:45.441932+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:46.442103+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:47.442262+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:48.442420+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:49.442669+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:50.442886+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:51.443125+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:52.443304+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:53.443486+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:54.443686+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:55.443891+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:56.444064+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:57.444255+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:58.444418+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:51:59.444709+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:00.444889+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:01.445079+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:02.445286+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:03.445464+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:04.445676+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:05.445873+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:06.446068+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:07.446249+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:08.446466+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:09.446706+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:10.446889+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:11.447087+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:12.447300+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89669632 unmapped: 26181632 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:13.447525+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:14.447726+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:15.447894+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:16.448121+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 ms_handle_reset con 0x557d34417000 session 0x557d3311c780
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d357f1400
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:17.448280+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:18.448441+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:19.448685+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:20.448978+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:21.449178+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:22.449324+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:23.449545+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:24.449724+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:25.449881+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:26.450068+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:27.450238+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:28.450417+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:29.450657+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:30.450823+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:31.450976+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:32.451489+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:33.451649+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:34.451850+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:35.452079+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:36.452229+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:37.452386+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:38.452649+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:39.452884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:40.453145+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:41.453352+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:42.453664+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:43.453836+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:44.454008+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 26173440 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:45.454211+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:46.454408+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:47.454544+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:48.454752+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:49.454982+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:50.455156+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:51.455368+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:52.455547+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:53.455766+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:54.455948+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:55.456161+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:56.456349+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:57.456679+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:58.456857+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:52:59.456991+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:00.457159+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:01.457298+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:02.457457+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:03.457659+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:04.457829+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:05.458035+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:06.458188+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:07.458356+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:08.458776+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:09.458985+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:10.459160+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:11.459350+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:12.459482+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:13.459641+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:14.460650+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:15.460817+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:16.460995+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 ms_handle_reset con 0x557d34f9f800 session 0x557d338e72c0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d368b9800
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 26165248 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:17.461164+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:18.461257+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:19.461438+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:20.461568+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:21.461753+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:22.461922+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:23.462065+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:24.462241+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:25.462392+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:26.462550+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:27.463000+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:28.463181+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:29.463383+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:30.463552+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:31.463682+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:32.463816+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:33.464007+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:34.464195+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:35.464341+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:36.464487+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:37.464714+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:38.464840+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:39.464999+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:40.465195+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:41.465382+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:42.465565+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:43.467046+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:44.467226+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:45.467459+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:46.467633+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:47.467837+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:48.467949+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:49.468115+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:50.468278+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:51.468430+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:52.468576+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:53.468747+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:54.468861+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:55.469028+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:56.469185+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 26157056 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:57.469332+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:58.469503+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:53:59.469726+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:00.469883+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:01.470038+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:02.470161+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:03.470353+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:04.470530+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:05.470721+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:06.470890+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:07.471031+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:08.471228+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:09.471444+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:10.471611+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:11.471880+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:12.472105+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:13.472307+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:14.472468+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:15.472633+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:16.472787+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:17.472921+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:18.473083+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:19.473319+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:20.473450+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:21.473616+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:22.473757+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:23.473914+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:24.474089+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:25.474280+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:26.474452+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:27.474664+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:28.474869+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:29.475090+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89702400 unmapped: 26148864 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:30.475256+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:31.475464+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:32.475669+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:33.475873+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:34.476072+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:35.476287+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:36.476489+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:37.476651+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:38.476828+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:39.477081+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:40.477222+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:41.477404+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:42.477562+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:43.477731+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:44.477912+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:45.478069+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:46.478227+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:47.478733+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:48.478900+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:49.479075+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:50.479207+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242211 data_alloc: 218103808 data_used: 573440
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:51.479332+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:52.479484+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 26140672 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:53.479651+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 26132480 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:54.479809+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 26132480 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35bae000
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa2c8000/0x0/0x4ffc00000, data 0x12a0852/0x13a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:55.479939+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89718784 unmapped: 26132480 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 363.347015381s of 363.406768799s, submitted: 16
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 183 ms_handle_reset con 0x557d35bae000 session 0x557d330721e0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246385 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:56.480080+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 26116096 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:57.480268+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fa2c4000/0x0/0x4ffc00000, data 0x12a2479/0x13a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 26116096 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:58.480456+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 26116096 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:54:59.482701+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 26116096 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fa2c4000/0x0/0x4ffc00000, data 0x12a2479/0x13a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:00.482930+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 26116096 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fa2c4000/0x0/0x4ffc00000, data 0x12a2479/0x13a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246385 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:01.483132+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 26116096 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:02.483324+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:03.483556+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:04.483752+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:05.483944+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:06.484081+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:07.484268+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:08.484494+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:09.484843+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:10.485099+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:11.485340+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:12.485530+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:13.485672+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:14.485835+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:15.486001+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:16.486148+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:17.486558+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:18.486990+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:19.487678+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:20.487880+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:21.488153+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:22.488337+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:23.488502+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:24.488810+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 26107904 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:25.489088+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:26.489324+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:27.489648+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:28.489861+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:29.490103+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:30.490315+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:31.490711+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:32.490931+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:33.491086+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:34.491275+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:35.491562+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:36.491829+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:37.491995+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:38.492142+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:39.492453+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:40.492722+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:41.492909+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:42.493168+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:43.493374+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:44.493650+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:45.493823+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:46.494006+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:47.494201+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:48.494360+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:49.494558+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:50.494750+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:51.494926+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:52.495091+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:53.495239+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:54.495437+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:55.495562+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:56.495705+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89751552 unmapped: 26099712 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:57.495865+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:58.496024+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:55:59.496757+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:00.496906+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:01.497014+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:02.497178+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:03.497347+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:04.497514+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:05.497694+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:06.497922+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:07.498149+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:08.498372+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:09.498666+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:10.498849+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:11.499028+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:12.499179+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:13.499559+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:14.499762+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:15.499904+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:16.500093+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:17.500306+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:18.500511+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:19.500704+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:20.500853+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:21.501046+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:22.501213+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:23.501353+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:24.501535+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:25.501719+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89759744 unmapped: 26091520 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:26.501921+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:27.502112+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:28.502243+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:29.502437+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:30.502581+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:31.502784+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:32.502963+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:33.503122+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:34.503271+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:35.503426+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:36.503578+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:37.503754+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:38.503958+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:39.504249+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:40.504477+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:41.504715+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:42.504891+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:43.505075+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:44.505250+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:45.505446+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:46.505662+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:47.505902+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:48.506073+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:49.506314+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:50.506502+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:51.506668+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:52.506821+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:53.507032+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:54.508172+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:55.508707+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:56.509239+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:10 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:10 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:57.509635+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:58.509884+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:56:59.510568+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:10 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:00.510962+0000)
Nov 24 21:14:10 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:10 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:01.511398+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249359 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:02.511772+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34f9bc00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 126.948913574s of 127.004821777s, submitted: 29
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 ms_handle_reset con 0x557d34f9bc00 session 0x557d34ad2780
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34ffb400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 ms_handle_reset con 0x557d34ffb400 session 0x557d34b11860
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:03.512019+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:04.512236+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:05.512405+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c2000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:06.512814+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248479 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:07.513045+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:08.513228+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:09.513405+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c2000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:10.513650+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fa2c2000/0x0/0x4ffc00000, data 0x12a3f32/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:11.513815+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248479 data_alloc: 218103808 data_used: 581632
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d368f3800
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:12.513965+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 26083328 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:13.514162+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.628653526s of 10.690814018s, submitted: 13
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 185 ms_handle_reset con 0x557d368f3800 session 0x557d330c7680
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:14.514337+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:15.514542+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89776128 unmapped: 26075136 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 185 heartbeat osd_stat(store_statfs(0x4fa2c0000/0x0/0x4ffc00000, data 0x12a5b03/0x13ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34f98400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:16.514719+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252359 data_alloc: 218103808 data_used: 589824
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 26058752 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 186 ms_handle_reset con 0x557d34f98400 session 0x557d338e65a0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:17.514854+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 26058752 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:18.515046+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 26058752 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:19.515276+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 26058752 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:20.515564+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89792512 unmapped: 26058752 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:21.515868+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a76f7/0x13ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250591 data_alloc: 218103808 data_used: 589824
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa2c1000/0x0/0x4ffc00000, data 0x12a76f7/0x13ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:22.516071+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:23.516276+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:24.516469+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:25.516699+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:26.516947+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254765 data_alloc: 218103808 data_used: 598016
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:27.517101+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa2bd000/0x0/0x4ffc00000, data 0x12a91b0/0x13b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 26050560 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:28.517251+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34fab800
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.210047722s of 15.503950119s, submitted: 85
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 26034176 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:29.517460+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 ms_handle_reset con 0x557d34fab800 session 0x557d33d932c0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:30.517681+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:31.517933+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:32.518123+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:33.518362+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:34.518644+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:35.518867+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:36.519018+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:37.519241+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:38.519444+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:39.519705+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:40.519900+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:41.520085+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:42.520325+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:43.520677+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:44.520884+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:45.521093+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:46.521309+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:47.521518+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:48.521741+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:49.521993+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:50.522218+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:51.522364+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:52.522483+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:53.522749+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:54.522964+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:55.523183+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:56.523368+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:57.523534+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:58.523808+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:57:59.524044+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:00.524284+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:01.524506+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:02.524793+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:03.525021+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 8176 writes, 32K keys, 8176 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8176 writes, 1929 syncs, 4.24 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 282 writes, 657 keys, 282 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
                                           Interval WAL: 282 writes, 124 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:04.525233+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:05.525465+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:06.525648+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:07.525822+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:08.525977+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:09.526168+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:10.526422+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:11.526643+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:12.526808+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:13.526960+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:14.527119+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:15.527285+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:16.527498+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:17.527687+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:18.527887+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:19.528111+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:20.528295+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:21.528544+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:22.528788+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:23.528989+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:24.529217+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:25.529457+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89825280 unmapped: 26025984 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:26.529658+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:27.529895+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:28.530077+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:29.530319+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:30.530661+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:31.530953+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:32.531216+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:33.531484+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:34.531755+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:35.532029+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:36.532287+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:37.532489+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:38.532693+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:39.533077+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:40.533396+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:41.533695+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:42.533905+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:43.534115+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:44.534362+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:45.534663+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:46.534979+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:47.535244+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:48.535518+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:49.535928+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:50.536165+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:51.536492+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263310 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:52.536781+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 84.032035828s of 84.070137024s, submitted: 15
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:53.537035+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:54.537307+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:55.537652+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:56.537915+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262078 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:57.538189+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:58.538470+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:58:59.538818+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:00.539047+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:01.539287+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262078 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:02.539702+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:03.540652+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:04.541745+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:05.542374+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:06.542734+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262078 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:07.543300+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:08.544249+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:09.545388+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:10.546674+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:11.547342+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0x12aadd9/0x13b6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262078 data_alloc: 218103808 data_used: 606208
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:12.547931+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:13.548345+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 89833472 unmapped: 26017792 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35bac400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.144378662s of 21.169109344s, submitted: 8
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:14.548719+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 98246656 unmapped: 17604608 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 189 ms_handle_reset con 0x557d35bac400 session 0x557d34aea3c0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 189 heartbeat osd_stat(store_statfs(0x4f9ab8000/0x0/0x4ffc00000, data 0x1aaadd9/0x1bb6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:15.549085+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90923008 unmapped: 24928256 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35bac000
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:16.549439+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 24920064 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 ms_handle_reset con 0x557d35bac000 session 0x557d34af6f00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330012 data_alloc: 218103808 data_used: 614400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:17.549814+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 24895488 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:18.550235+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 24895488 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9aae000/0x0/0x4ffc00000, data 0x1aae5b1/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:19.550524+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 24895488 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:20.550959+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 24895488 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:21.551362+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90955776 unmapped: 24895488 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330012 data_alloc: 218103808 data_used: 614400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:22.551762+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:23.552139+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9aae000/0x0/0x4ffc00000, data 0x1aae5b1/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:24.552569+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:25.552921+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:26.553213+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330012 data_alloc: 218103808 data_used: 614400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:27.553363+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:28.553503+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:29.553729+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 24887296 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9aae000/0x0/0x4ffc00000, data 0x1aae5b1/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:30.553897+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90996736 unmapped: 24854528 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:31.554061+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90996736 unmapped: 24854528 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330012 data_alloc: 218103808 data_used: 614400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:32.554193+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 90996736 unmapped: 24854528 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:33.554309+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9aae000/0x0/0x4ffc00000, data 0x1aae5b1/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:34.554482+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:35.554630+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:36.554822+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330012 data_alloc: 218103808 data_used: 614400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:37.555067+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9aae000/0x0/0x4ffc00000, data 0x1aae5b1/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:38.555354+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:39.555655+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:40.555829+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:41.555996+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330012 data_alloc: 218103808 data_used: 614400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:42.556151+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f9aae000/0x0/0x4ffc00000, data 0x1aae5b1/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:43.556340+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d33a1c800
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 24821760 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.289096832s of 29.573562622s, submitted: 27
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:44.556471+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 191 ms_handle_reset con 0x557d33a1c800 session 0x557d338e63c0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:45.556660+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:46.556902+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9aaf000/0x0/0x4ffc00000, data 0x1ab0182/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:47.557145+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329279 data_alloc: 218103808 data_used: 622592
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:48.557335+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:49.557851+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:50.558153+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f9aaf000/0x0/0x4ffc00000, data 0x1ab0182/0x1bbe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:51.558333+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 24813568 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:52.558443+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:53.558663+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:54.558867+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:55.559045+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:56.559239+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:57.559408+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:58.559556+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:59.559869+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:00.560080+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:01.560291+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:02.560431+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:03.560675+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:04.560867+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:05.561120+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:06.561407+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:07.561666+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:08.561889+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:09.562170+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:10.562338+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:11.562443+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:12.562682+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:13.562864+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:14.563033+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:15.563208+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:16.563378+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:17.563545+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:18.563738+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:19.563911+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:20.564046+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:21.564221+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:22.564439+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:23.564653+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:24.564817+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:25.564997+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:26.565183+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:27.565349+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:28.565574+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:29.565834+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:30.566014+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:31.566192+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:32.566371+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:33.566641+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:34.566838+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:35.566987+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:36.567132+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:37.567409+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:38.567682+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:39.568004+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:40.568234+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:41.568426+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:42.568548+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:43.568686+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:44.568951+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:45.569159+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:46.569422+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:47.569560+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:48.569700+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:49.569888+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:50.570081+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:51.570256+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:52.570378+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:53.570507+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:54.570679+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:55.570822+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:56.570953+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:57.571140+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:58.571323+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:59.571694+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:00.571869+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:01.572094+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:02.572315+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:03.572562+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:04.572815+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:05.573059+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:06.573234+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:07.573431+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:08.573565+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:11.030+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:11 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:11 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
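[annotation] These two lines are the only non-routine event in this stretch: osd.1 reports 21 slow ops, all typed 'delayed', concentrated on the default.rgw.log pool, the oldest being a watch-ping write from client.14257. A small filter that pulls exactly these two message shapes out of a journal dump — a sketch written against the formats shown here, not a general Ceph log grammar:

    import re
    import sys

    # Matches "osd.N E get_health_metrics reporting N slow ops, oldest is ..."
    slow = re.compile(r"osd\.(\d+) \d+ get_health_metrics reporting (\d+) "
                      r"slow ops, oldest is (.+)")
    # Matches "log [WRN] : N slow requests ... most affected pool [ 'name' ..."
    wrn = re.compile(r"log \[WRN\] : (\d+) slow requests .* "
                     r"most affected pool \[ '([^']+)'")
    for line in sys.stdin:
        if m := slow.search(line):
            print(f"osd.{m.group(1)}: {m.group(2)} slow ops; "
                  f"oldest: {m.group(3)[:60]}...")
        elif m := wrn.search(line):
            print(f"{m.group(1)} slow requests, most affected pool {m.group(2)}")

Fed from standard input, e.g. journalctl --no-pager | python3 slow_ops.py, it reduces a flood like this section to the handful of lines that actually indicate client-visible latency.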
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:09.573823+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:10.573989+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:11.574182+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:12.574348+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:13.574540+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:14.574717+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:15.574826+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:16.574977+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 24797184 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:17.575568+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:18.575778+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:19.576089+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:20.576304+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:21.576459+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:22.576661+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:23.576843+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:24.577235+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:25.577403+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:26.577615+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:27.577801+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:28.577991+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:29.578265+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:30.578456+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:31.578706+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:32.578889+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:33.579096+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:34.579292+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:35.579480+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:36.579674+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:37.579875+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:38.580053+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:39.580279+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91070464 unmapped: 24780800 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:40.580493+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:41.580659+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:42.580840+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:43.581016+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:44.581211+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:45.581405+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:46.581619+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:47.581786+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:48.582064+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:49.582250+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:50.582407+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:51.582625+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:52.582796+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:53.582956+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:54.583133+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:55.583301+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:56.583454+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:57.583600+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:58.583770+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:59.583983+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 24764416 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:00.584157+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:01.584308+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:02.584431+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:03.584644+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:04.585711+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:05.586250+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:06.586419+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:07.586644+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:08.586815+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:09.587113+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:10.587265+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:11.587412+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:12.587635+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:13.587825+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:14.588010+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:15.588225+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:16.588495+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:17.588638+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:18.588804+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:19.589025+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91103232 unmapped: 24748032 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:20.589215+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91119616 unmapped: 24731648 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:21.589390+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:22.589566+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:23.589809+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:24.590028+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:25.590213+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:26.590402+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:27.590684+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:28.590834+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:29.591029+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:30.591216+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:31.591388+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:32.591666+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:33.591903+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:34.592099+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:35.592248+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:36.592388+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:37.592542+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:38.592703+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:39.593030+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:40.593292+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:41.593479+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:42.593665+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:43.593820+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:44.594010+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:45.594197+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:46.594362+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:47.594566+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:48.595692+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:49.596694+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:50.597449+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:51.598051+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:52.598497+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:53.598933+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:54.599365+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:55.599515+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:56.599664+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:57.599863+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:58.600068+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:59.600281+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:00.600578+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:01.600785+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:02.601043+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:03.601244+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:04.601390+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:05.601567+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:06.601847+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 24723456 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:07.602114+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91136000 unmapped: 24715264 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:08.602302+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91136000 unmapped: 24715264 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:09.602762+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 24707072 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:10.603076+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: mgrc ms_handle_reset ms_handle_reset con 0x557d34a59c00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/103018990
Nov 24 21:14:11 compute-0 ceph-osd[90884]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/103018990,v1:192.168.122.100:6801/103018990]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: get_auth_request con 0x557d368f3800 auth_method 0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: mgrc handle_mgr_configure stats_period=5
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:11.603354+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:12.603705+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:13.603909+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:14.604100+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:15.604242+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:16.604430+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:17.604691+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:18.604890+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:19.605148+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:20.605289+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:21.605469+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:22.605694+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:23.605950+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:24.606244+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:25.606438+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:26.606656+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:27.606963+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:28.607156+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:29.607429+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:30.607621+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:31.607816+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:32.608030+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91160576 unmapped: 24690688 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:33.608189+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:34.608462+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:35.608666+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:36.608935+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:37.609119+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:38.609339+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:39.609565+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:40.609882+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:41.610070+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:42.610252+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:43.610445+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:44.610697+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:45.610851+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:46.611019+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:47.611235+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:48.611433+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:49.611681+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:50.611961+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:51.612231+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:52.612435+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:53.612645+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:54.612820+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:55.613026+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:56.613183+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:57.613386+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:58.613555+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:59.613871+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:00.614023+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:01.614197+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:02.614368+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:03.614534+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:04.614700+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:05.614870+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:06.615037+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:07.615219+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:08.615388+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:09.615551+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:10.615694+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:11.615852+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:12.616054+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:13.616211+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:14.616416+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:15.616699+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:16.616927+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:17.617147+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:18.617313+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:19.617521+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:20.617801+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:21.617994+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:22.618140+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:23.618253+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:24.618414+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:25.618659+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:26.618889+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:27.619067+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:28.619259+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:29.619461+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:30.619737+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:31.619915+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:32.620150+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:33.620384+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:34.620555+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:35.620816+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:36.620990+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:37.621155+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:38.621332+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:39.621690+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:40.621938+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:41.622171+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:42.622421+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:43.622642+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:44.622811+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:45.622986+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:46.623156+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:47.623384+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:48.623573+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:49.623826+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:50.624004+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:51.624212+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:52.624358+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:53.624564+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:54.624776+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:55.624917+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:56.627330+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:57.628700+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:58.630947+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:59.632974+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:00.634726+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:01.636304+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:02.637238+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:03.638432+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:04.639452+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:05.640329+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:06.641072+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:07.641233+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:08.641823+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91168768 unmapped: 24682496 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:09.642445+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:10.642935+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:11.643433+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:12.643853+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:13.644247+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:14.644411+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:15.644577+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:16.644739+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:17.644941+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:18.645161+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:19.645474+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:20.645697+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:21.645838+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:22.645997+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:23.646195+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:24.646376+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:25.646696+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:26.646894+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:27.647077+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:28.647249+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:29.647478+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:30.648336+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:31.649050+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:32.649489+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:33.649769+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:34.650197+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:35.650675+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:36.650980+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:37.651270+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:38.651547+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:39.651882+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:40.652077+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:41.652269+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:42.652443+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:43.652650+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:44.652822+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:45.652978+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:46.653265+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:47.653444+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:48.653702+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:49.654071+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:50.654237+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:51.654407+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:52.654633+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:53.654898+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:54.655265+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:55.655548+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:56.655872+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:57.656125+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:58.656441+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:59.656795+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:00.657030+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:01.657267+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:02.657558+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:03.657790+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:04.657997+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:05.658230+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:06.658492+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:07.658736+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:08.658944+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:09.659194+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:10.659408+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:11.659647+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:12.659773+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:13.659945+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:14.660147+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:15.660339+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:16.660720+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:17.660887+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:18.661137+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:19.661465+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:20.661700+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:21.661875+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:22.662037+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:23.662203+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:24.662352+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:25.662457+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:26.662614+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:27.662778+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:28.662944+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:29.663139+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:30.663327+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:31.663474+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:32.663692+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:33.663862+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:34.664123+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:35.665089+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:36.665919+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:37.666666+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:38.667065+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:39.667405+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:40.667883+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:41.668398+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:42.668668+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:43.669019+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:44.669393+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:45.669768+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:46.670201+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:47.670572+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:48.671002+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:49.671289+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:50.671504+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:51.671713+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:52.671963+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:53.672239+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:54.672445+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:55.672666+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:56.672820+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 24674304 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:57.672989+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:58.673187+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:59.673497+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:00.673699+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:01.673870+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:02.674078+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:03.674292+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:04.674491+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:05.674664+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:06.674941+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:07.675232+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:08.675467+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:09.675720+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:10.675882+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:11.676220+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:12.676523+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:13.676792+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:14.676996+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:15.677218+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:16.677471+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 ms_handle_reset con 0x557d357f1400 session 0x557d33e103c0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d36805000
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:17.677640+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:18.677810+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:19.678061+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:20.678249+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:21.678432+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:22.678663+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:23.678841+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:24.679009+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:25.679219+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:26.679398+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:27.679559+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:28.679769+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:29.680024+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:30.680215+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:31.680375+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:32.705228+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:33.705487+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:34.705685+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:35.705850+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:36.706012+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:37.706179+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:38.706366+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:39.706551+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:40.706761+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:41.706894+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:42.707057+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:43.707225+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:44.707408+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:45.707639+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:46.707856+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:47.708036+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:48.708153+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:49.708314+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:50.708528+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:51.708696+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:52.708896+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:53.709037+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:54.709205+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:55.709406+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:56.709568+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:57.709805+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:58.709963+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:59.710186+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:00.710398+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:01.710555+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:02.710714+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:03.710903+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 8323 writes, 32K keys, 8323 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8323 writes, 1994 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 147 writes, 412 keys, 147 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 147 writes, 65 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:04.711072+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:05.711240+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:06.711450+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:07.711691+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:08.711975+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:09.712239+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:10.712488+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:11.712704+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:12.712931+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:13.713107+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:14.713273+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:15.713421+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:16.713540+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 ms_handle_reset con 0x557d368b9800 session 0x557d338e7a40
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35bd3400
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:17.713777+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:18.714029+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:19.714205+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:20.714333+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:21.714487+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:22.714828+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:23.714986+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:24.715143+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:25.715302+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 24666112 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:26.715500+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:27.715674+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:28.715876+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:29.716011+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:30.716117+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:31.716300+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:32.716517+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91193344 unmapped: 24657920 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:33.716674+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:34.716821+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:35.716985+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:36.717161+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:37.717304+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:38.717486+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:39.717740+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:40.717894+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:41.718056+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:42.718165+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:43.718353+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:44.718777+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:45.718947+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:46.719084+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:47.719254+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:48.719495+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:49.719764+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333277 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:50.719938+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:51.720084+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:52.720319+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 24633344 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 549.265136719s of 549.384338379s, submitted: 47
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:53.720665+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:54.720935+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:55.721122+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:56.721324+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:57.721520+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:58.721802+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:59.722063+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:00.722285+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:01.722519+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:02.722728+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:03.722936+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:04.723135+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:05.723350+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:06.723502+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:07.723716+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:08.723890+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:09.725641+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:10.725818+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:11.726021+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:12.726261+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:13.726425+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:14.726695+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:15.726963+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:16.727178+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:17.727398+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:18.727564+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:19.727796+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:20.727953+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:21.728143+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:22.728355+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:23.736676+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:24.736943+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:25.738446+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:26.738684+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:27.738816+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:28.738996+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:29.739234+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:30.739412+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:31.739634+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:32.739779+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:33.739964+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:34.740423+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:35.740689+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:36.740861+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:37.741072+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:38.741325+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:39.741551+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:40.741695+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:41.741975+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:42.742225+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:43.742464+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:44.742695+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:45.742896+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:46.743146+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:47.743422+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 23584768 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:48.743631+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:49.744425+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:50.744637+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:51.744862+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:52.745095+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:53.745304+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:54.745494+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:55.745758+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:56.745959+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:57.746169+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:58.746416+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:59.746743+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:00.746995+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:01.747198+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:02.747480+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:03.747653+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:04.747845+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:05.748038+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:06.748266+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:07.748542+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:08.748850+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:09.749203+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:10.749427+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:11.749904+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:12.750199+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:13.750492+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:14.750803+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:15.751133+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:16.751396+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:17.751648+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:18.751802+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:19.751984+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:20.752157+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:21.752338+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:22.752508+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:23.752700+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:24.752823+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:25.753023+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:26.753351+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:27.753506+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:28.753725+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:29.753966+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:30.754160+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:31.754360+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:32.754557+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:33.754756+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:34.755093+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:35.761130+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:36.761296+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:37.761520+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:38.761701+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:39.761888+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:40.762049+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:41.762244+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:42.762484+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:43.762651+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:44.762830+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:45.762983+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:46.763186+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:47.763373+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:48.763662+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:49.763918+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:50.764167+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:51.764356+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:52.764708+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:53.764917+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:54.765157+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:55.765391+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:56.765624+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:57.765878+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:58.766273+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9aad000/0x0/0x4ffc00000, data 0x1ab1c3b/0x1bc1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:59.766546+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332397 data_alloc: 218103808 data_used: 630784
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92274688 unmapped: 23576576 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d34a53c00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 127.315841675s of 127.345970154s, submitted: 8
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:00.766745+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 193 ms_handle_reset con 0x557d34a53c00 session 0x557d3302f680
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92340224 unmapped: 23511040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:01.766969+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92340224 unmapped: 23511040 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:02.767192+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d368f2c00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92348416 unmapped: 23502848 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:03.767421+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 194 ms_handle_reset con 0x557d368f2c00 session 0x557d3307da40
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 194 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12b3862/0x13c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 23461888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:04.767615+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284129 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 23461888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:05.767806+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 23461888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:06.768094+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 194 heartbeat osd_stat(store_statfs(0x4fa2a7000/0x0/0x4ffc00000, data 0x12b5457/0x13c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 23461888 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:07.768401+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d35bacc00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fa2a7000/0x0/0x4ffc00000, data 0x12b5457/0x13c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 23453696 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:08.768716+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 23453696 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:09.769091+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 196 ms_handle_reset con 0x557d35bacc00 session 0x557d32c2f860
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291717 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 23453696 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:10.769306+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 23453696 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:11.769581+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92397568 unmapped: 23453696 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:12.769884+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.087179184s of 12.396060944s, submitted: 86
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:13.770115+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:14.770353+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:15.770557+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:16.770750+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:17.770961+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:18.771200+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:19.771503+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:20.771757+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:21.771943+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:22.772109+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:23.772323+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:24.772530+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:25.772706+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:26.772911+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:27.773103+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:28.773311+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:29.773665+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:30.773812+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:31.774009+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:32.774243+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:33.774436+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:34.774683+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:35.774936+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:36.775118+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:37.775275+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:38.775487+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:39.775706+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:40.775963+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:41.776244+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:42.776483+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:43.776680+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:44.776893+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:45.777102+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:46.777356+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:47.777551+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:48.777763+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:49.778001+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:50.778188+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:51.778389+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:52.778575+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:53.778793+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:54.778953+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:55.779135+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:56.779441+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:57.779800+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:58.780073+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:59.780255+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294515 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:00.780492+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa29e000/0x0/0x4ffc00000, data 0x12ba5f7/0x13cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:01.780770+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:02.780917+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:03.781119+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:04.781276+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: handle_auth_request added challenge on 0x557d36804000
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 51.959289551s of 51.969512939s, submitted: 14
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293801 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 23445504 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:05.781686+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _renew_subs
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 198 heartbeat osd_stat(store_statfs(0x4fa29f000/0x0/0x4ffc00000, data 0x12ba5d4/0x13ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 198 ms_handle_reset con 0x557d36804000 session 0x557d33072f00
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:06.781923+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:07.782048+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:08.782233+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 198 heartbeat osd_stat(store_statfs(0x4fa29c000/0x0/0x4ffc00000, data 0x12bc1fb/0x13d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:09.782392+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296775 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:10.782630+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:11.782792+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:12.782942+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 198 heartbeat osd_stat(store_statfs(0x4fa29c000/0x0/0x4ffc00000, data 0x12bc1fb/0x13d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fa299000/0x0/0x4ffc00000, data 0x12bdcb4/0x13d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: get_auth_request con 0x557d32491400 auth_method 0
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:13.783161+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:14.783312+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:11 compute-0 ceph-osd[90884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:11 compute-0 ceph-osd[90884]: bluestore.MempoolThread(0x557d317abb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299749 data_alloc: 218103808 data_used: 638976
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:15.783494+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92413952 unmapped: 23437312 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:16.783739+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fa299000/0x0/0x4ffc00000, data 0x12bdcb4/0x13d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
[... ~400 lines elided: the same once-per-second cycle repeats with the rotating-secret expiry advancing from 21:12:17 through 21:13:37; osd.2 heartbeats recur every few seconds and the rocksdb commit_cache_size / bluestore _resize_shards pair (always with the same values) roughly every 5 s; the tune_memory heap figures shift once, at the 21:12:47 tick, from mapped 92413952 / unmapped 23437312 to mapped 92422144 / unmapped 23429120; the only other distinct message is "monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0" around the 21:13:35 tick ...]
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92635136 unmapped: 23216128 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'config diff' '{prefix=config diff}'
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'config show' '{prefix=config show}'
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:38.802118+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
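The `do_command` entries are admin-socket requests arriving at the daemon, the same interface `ceph daemon <name> <cmd>` (or `ceph tell`) uses. The burst of `config diff` / `config show` / `counter dump` / `counter schema` looks like a monitoring or support-bundle collector sweeping the daemon's introspection endpoints. A sketch of issuing the same query from the host, assuming the `ceph` CLI can reach this daemon's admin socket (e.g. run inside its container):

```python
import json
import subprocess

# Replay one of the admin-socket queries seen above.
def daemon_cmd(daemon, *cmd):
    out = subprocess.run(
        ["ceph", "daemon", daemon, *cmd],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

cfg = daemon_cmd("osd.2", "config", "show")
print(cfg.get("osd_memory_target"))  # should echo the 4 GiB tuning target
```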
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 23470080 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:39.802292+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: prioritycache tune_memory target: 4294967296 mapped: 92446720 unmapped: 23404544 heap: 115851264 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:11 compute-0 ceph-osd[90884]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fa299000/0x0/0x4ffc00000, data 0x12bdcb4/0x13d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: tick
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_tickets
Nov 24 21:14:11 compute-0 ceph-osd[90884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:40.802413+0000)
Nov 24 21:14:11 compute-0 ceph-osd[90884]: do_command 'log dump' '{prefix=log dump}'
Nov 24 21:14:11 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:11.118+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:11 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:11 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
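Here the log turns from idle chatter to an actual problem: osd.0 has 27 ops past the complaint threshold (`osd_op_complaint_time`, 30 s by default), the oldest being an `omap-get-vals` read of `rbd_trash_purge_schedule` in the `vms` pool. Each report appears twice because both the containerized unit (`ceph-...-osd-0[88620]`) and the journald-tagged daemon (`ceph-osd[88624]`) log the same message. A minimal sketch for pulling the count and the most-affected pool out of the [WRN] summary lines:

```python
import re

# Extract op count, delay type and most-affected pool from a
# "slow requests" cluster-log warning like the one above.
SLOW_RE = re.compile(
    r"(?P<n>\d+) slow requests \(by type \[ '(?P<type>\w+)' : \d+ \] "
    r"most affected pool \[ '(?P<pool>[^']+)' : \d+ \]\)"
)

line = ("27 slow requests (by type [ 'delayed' : 27 ] "
        "most affected pool [ 'vms' : 27 ])")
m = SLOW_RE.search(line)
print(m["n"], m["type"], m["pool"])  # -> 27 delayed vms
```

For the individual offenders, `ceph daemon osd.0 dump_historic_slow_ops` on the affected daemon lists each slow op with its event timeline.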
Nov 24 21:14:11 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:11 compute-0 ceph-mon[75677]: from='client.15357 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:11 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/373470295' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mon[75677]: from='client.15361 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mon[75677]: from='client.15365 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/98163372' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15367 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 21:14:11 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688732057' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
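The mgr's pgmap summary squares with those warnings: of 305 PGs, 2 are `active+clean+laggy`, meaning a replica has been slow to acknowledge, exactly what the slow-request reports describe, while the cluster itself is all but empty (128 MiB of data, 269 MiB raw used out of 60 GiB, i.e. three 20 GiB OSDs). Illustrative arithmetic only:

```python
# Back-of-envelope check on the pgmap summary above.
data_mib, used_mib, total_gib = 128, 269, 60
print(f"raw / logical ratio: {used_mib / data_mib:.2f}x")
print(f"cluster utilisation: {used_mib / (total_gib * 1024):.2%}")
# ~2.1x raw amplification and well under 1% utilisation: the slowness is
# not a capacity problem.
```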
Nov 24 21:14:11 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
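The rsyslogd reload would also explain an oddity running through this whole excerpt: some 87 seconds of once-per-second monclient ticks (rotating-secret expiry stamps advancing from 21:12:13 to 21:13:40) all carry the same syslog header time of 21:14:11, which is what a batch replay of journal backlog would produce. Measuring the spread of the embedded timestamps, both copied from lines in this excerpt, gives the size of that backlog:

```python
from datetime import datetime

# Spread of the embedded expiry stamps across the tick lines above; the
# shared 21:14:11 syslog header time suggests they were flushed in one
# burst when rsyslog reloaded the journal files.
first = datetime.fromisoformat("2025-11-24T21:12:13.783161+00:00")
last = datetime.fromisoformat("2025-11-24T21:13:40.802413+00:00")
print(f"replayed span ≈ {(last - first).total_seconds():.0f} s")  # ≈ 87 s
```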
Nov 24 21:14:11 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 21:14:11 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329291342' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 21:14:11 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15375 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:12.004+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:12 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:12.093+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:12 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:12 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:12 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 21:14:12 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923445852' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15379 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:12 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:12 compute-0 ceph-mon[75677]: from='client.15367 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3688732057' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mon[75677]: pgmap v2792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:12 compute-0 ceph-mon[75677]: from='client.15371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/329291342' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mon[75677]: from='client.15375 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:12 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15383 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:12 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:12.997+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:12 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:12 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 21:14:13 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1163777126' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 21:14:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:13.106+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:13 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:13 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:13 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4972 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:14:13 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:13 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 24 21:14:13 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486764728' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 21:14:13 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15389 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:13 compute-0 ceph-mgr[75975]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 21:14:13 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-mgr-compute-0-ofslrn[75971]: 2025-11-24T21:14:13.763+0000 7f85d4b75640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 21:14:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:14.032+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:14 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:14 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:14 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:14.080+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:14 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:14 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:15.047+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:15 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:15 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:15.055+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:15 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/923445852' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-mon[75677]: from='client.15379 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-mon[75677]: from='client.15383 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1163777126' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:15 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4972 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:15 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/486764728' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:15 compute-0 crontab[327691]: (root) LIST (root)
Nov 24 21:14:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 24 21:14:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272774697' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 24 21:14:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669587596' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f8fb7000/0x0/0x4ffc00000, data 0x25a4bd4/0x26b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4236) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:14.291154+0000 osd.1 (osd.1) 4236 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:45.971983+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4237 sent 4236 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:15.265135+0000 osd.1 (osd.1) 4237 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9989> 2025-11-24T21:00:16.215+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470585 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4237) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:15.265135+0000 osd.1 (osd.1) 4237 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:46.972244+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4238 sent 4237 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:16.216291+0000 osd.1 (osd.1) 4238 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9975> 2025-11-24T21:00:17.235+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4238) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:16.216291+0000 osd.1 (osd.1) 4238 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:47.972507+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4239 sent 4238 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:17.235935+0000 osd.1 (osd.1) 4239 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9964> 2025-11-24T21:00:18.271+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4239) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:17.235935+0000 osd.1 (osd.1) 4239 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:48.972733+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4240 sent 4239 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:18.272267+0000 osd.1 (osd.1) 4240 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9953> 2025-11-24T21:00:19.222+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4240) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:18.272267+0000 osd.1 (osd.1) 4240 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:49.972997+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4241 sent 4240 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:19.223488+0000 osd.1 (osd.1) 4241 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9942> 2025-11-24T21:00:20.265+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4241) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:19.223488+0000 osd.1 (osd.1) 4241 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f8fb7000/0x0/0x4ffc00000, data 0x25a4bd4/0x26b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:50.973193+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4242 sent 4241 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:20.266243+0000 osd.1 (osd.1) 4242 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9930> 2025-11-24T21:00:21.258+0000 7f1a67169640 -1 osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470585 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4242) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:20.266243+0000 osd.1 (osd.1) 4242 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:51.973381+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4243 sent 4242 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:21.259238+0000 osd.1 (osd.1) 4243 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9915> 2025-11-24T21:00:22.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4243) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:21.259238+0000 osd.1 (osd.1) 4243 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:52.973689+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4244 sent 4243 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:22.281549+0000 osd.1 (osd.1) 4244 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9904> 2025-11-24T21:00:23.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4244) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:22.281549+0000 osd.1 (osd.1) 4244 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:53.973913+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4245 sent 4244 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:23.280800+0000 osd.1 (osd.1) 4245 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9892> 2025-11-24T21:00:24.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4245) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:23.280800+0000 osd.1 (osd.1) 4245 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:54.974440+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4246 sent 4245 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:24.280970+0000 osd.1 (osd.1) 4246 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9881> 2025-11-24T21:00:25.236+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4246) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:24.280970+0000 osd.1 (osd.1) 4246 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:55.974682+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4247 sent 4246 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:25.237304+0000 osd.1 (osd.1) 4247 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9870> 2025-11-24T21:00:26.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4247) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:25.237304+0000 osd.1 (osd.1) 4247 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:56.974957+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4248 sent 4247 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:26.264544+0000 osd.1 (osd.1) 4248 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9856> 2025-11-24T21:00:27.233+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4248) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:26.264544+0000 osd.1 (osd.1) 4248 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:57.975210+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4249 sent 4248 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:27.234466+0000 osd.1 (osd.1) 4249 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9844> 2025-11-24T21:00:28.190+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4249) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:27.234466+0000 osd.1 (osd.1) 4249 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:58.975437+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4250 sent 4249 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:28.191732+0000 osd.1 (osd.1) 4250 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9832> 2025-11-24T21:00:29.224+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4250) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:28.191732+0000 osd.1 (osd.1) 4250 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:59.975703+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4251 sent 4250 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:29.225709+0000 osd.1 (osd.1) 4251 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9821> 2025-11-24T21:00:30.245+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4251) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:29.225709+0000 osd.1 (osd.1) 4251 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:00.975972+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4252 sent 4251 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:30.246309+0000 osd.1 (osd.1) 4252 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9810> 2025-11-24T21:00:31.256+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4252) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:30.246309+0000 osd.1 (osd.1) 4252 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:01.976197+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4253 sent 4252 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:31.257290+0000 osd.1 (osd.1) 4253 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9796> 2025-11-24T21:00:32.251+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4253) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:31.257290+0000 osd.1 (osd.1) 4253 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:02.976385+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4254 sent 4253 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:32.252726+0000 osd.1 (osd.1) 4254 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9785> 2025-11-24T21:00:33.280+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4254) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:32.252726+0000 osd.1 (osd.1) 4254 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:03.976576+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4255 sent 4254 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:33.282043+0000 osd.1 (osd.1) 4255 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9774> 2025-11-24T21:00:34.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4255) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:33.282043+0000 osd.1 (osd.1) 4255 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:04.976886+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4256 sent 4255 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:34.239859+0000 osd.1 (osd.1) 4256 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9762> 2025-11-24T21:00:35.201+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4256) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:34.239859+0000 osd.1 (osd.1) 4256 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:05.977151+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4257 sent 4256 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:35.202403+0000 osd.1 (osd.1) 4257 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9751> 2025-11-24T21:00:36.214+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4257) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:35.202403+0000 osd.1 (osd.1) 4257 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:06.977426+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4258 sent 4257 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:36.215749+0000 osd.1 (osd.1) 4258 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9737> 2025-11-24T21:00:37.174+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4258) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:36.215749+0000 osd.1 (osd.1) 4258 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:07.977676+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4259 sent 4258 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:37.175377+0000 osd.1 (osd.1) 4259 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9726> 2025-11-24T21:00:38.176+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:08.977894+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4260 sent 4259 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:38.177868+0000 osd.1 (osd.1) 4260 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4259) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:37.175377+0000 osd.1 (osd.1) 4259 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9714> 2025-11-24T21:00:39.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:09.978084+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4261 sent 4260 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:39.134435+0000 osd.1 (osd.1) 4261 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4260) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:38.177868+0000 osd.1 (osd.1) 4260 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4261) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:39.134435+0000 osd.1 (osd.1) 4261 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9701> 2025-11-24T21:00:40.155+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:10.978335+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4262 sent 4261 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:40.156913+0000 osd.1 (osd.1) 4262 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4262) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:40.156913+0000 osd.1 (osd.1) 4262 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9690> 2025-11-24T21:00:41.147+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:11.978535+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4263 sent 4262 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:41.149149+0000 osd.1 (osd.1) 4263 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4263) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:41.149149+0000 osd.1 (osd.1) 4263 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9676> 2025-11-24T21:00:42.099+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:12.978779+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4264 sent 4263 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:42.101192+0000 osd.1 (osd.1) 4264 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4264) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:42.101192+0000 osd.1 (osd.1) 4264 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9665> 2025-11-24T21:00:43.143+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:13.979000+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4265 sent 4264 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:43.144383+0000 osd.1 (osd.1) 4265 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4265) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:43.144383+0000 osd.1 (osd.1) 4265 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9653> 2025-11-24T21:00:44.116+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:14.979218+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4266 sent 4265 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:44.116969+0000 osd.1 (osd.1) 4266 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4266) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:44.116969+0000 osd.1 (osd.1) 4266 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9642> 2025-11-24T21:00:45.151+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:15.979446+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4267 sent 4266 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:45.152125+0000 osd.1 (osd.1) 4267 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4267) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:45.152125+0000 osd.1 (osd.1) 4267 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9631> 2025-11-24T21:00:46.124+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:16.979684+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4268 sent 4267 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:46.125390+0000 osd.1 (osd.1) 4268 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4268) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:46.125390+0000 osd.1 (osd.1) 4268 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9617> 2025-11-24T21:00:47.121+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:17.979890+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4269 sent 4268 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:47.121750+0000 osd.1 (osd.1) 4269 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4269) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:47.121750+0000 osd.1 (osd.1) 4269 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9605> 2025-11-24T21:00:48.085+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:18.980192+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4270 sent 4269 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:48.085665+0000 osd.1 (osd.1) 4270 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4270) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:48.085665+0000 osd.1 (osd.1) 4270 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9594> 2025-11-24T21:00:49.082+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:19.980402+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4271 sent 4270 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:49.083385+0000 osd.1 (osd.1) 4271 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9585> 2025-11-24T21:00:50.035+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4271) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:49.083385+0000 osd.1 (osd.1) 4271 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:20.980698+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4272 sent 4271 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:50.036289+0000 osd.1 (osd.1) 4272 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9573> 2025-11-24T21:00:51.012+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4272) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:50.036289+0000 osd.1 (osd.1) 4272 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:21.980936+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4273 sent 4272 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:51.012847+0000 osd.1 (osd.1) 4273 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9559> 2025-11-24T21:00:52.038+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4273) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:51.012847+0000 osd.1 (osd.1) 4273 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:22.981167+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4274 sent 4273 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:52.039523+0000 osd.1 (osd.1) 4274 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9548> 2025-11-24T21:00:53.023+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4274) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:52.039523+0000 osd.1 (osd.1) 4274 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9542> 2025-11-24T21:00:53.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:23.981416+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4276 sent 4274 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:53.024477+0000 osd.1 (osd.1) 4275 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:53.976179+0000 osd.1 (osd.1) 4276 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4276) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:53.024477+0000 osd.1 (osd.1) 4275 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:53.976179+0000 osd.1 (osd.1) 4276 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:24.981671+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9525> 2025-11-24T21:00:55.014+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:25.981867+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4277 sent 4276 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:55.015432+0000 osd.1 (osd.1) 4277 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9515> 2025-11-24T21:00:56.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4277) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:55.015432+0000 osd.1 (osd.1) 4277 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:26.982150+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4278 sent 4277 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:56.050937+0000 osd.1 (osd.1) 4278 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9501> 2025-11-24T21:00:57.068+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4278) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:56.050937+0000 osd.1 (osd.1) 4278 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:27.982416+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4279 sent 4278 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:57.069558+0000 osd.1 (osd.1) 4279 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9490> 2025-11-24T21:00:58.110+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4279) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:57.069558+0000 osd.1 (osd.1) 4279 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:28.982681+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4280 sent 4279 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:58.111435+0000 osd.1 (osd.1) 4280 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9479> 2025-11-24T21:00:59.097+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4280) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:58.111435+0000 osd.1 (osd.1) 4280 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:29.982918+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4281 sent 4280 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:00:59.098071+0000 osd.1 (osd.1) 4281 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9468> 2025-11-24T21:01:00.052+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4281) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:00:59.098071+0000 osd.1 (osd.1) 4281 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:30.983181+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4282 sent 4281 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:00.052738+0000 osd.1 (osd.1) 4282 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9456> 2025-11-24T21:01:01.009+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4282) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:00.052738+0000 osd.1 (osd.1) 4282 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:31.983471+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4283 sent 4282 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:01.009961+0000 osd.1 (osd.1) 4283 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9442> 2025-11-24T21:01:02.037+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4283) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:01.009961+0000 osd.1 (osd.1) 4283 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:32.983764+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4284 sent 4283 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:02.038498+0000 osd.1 (osd.1) 4284 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9431> 2025-11-24T21:01:03.026+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4284) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:02.038498+0000 osd.1 (osd.1) 4284 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:33.983988+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4285 sent 4284 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:03.026967+0000 osd.1 (osd.1) 4285 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9420> 2025-11-24T21:01:04.001+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4285) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:03.026967+0000 osd.1 (osd.1) 4285 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:34.984237+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4286 sent 4285 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:04.002646+0000 osd.1 (osd.1) 4286 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9408> 2025-11-24T21:01:05.022+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4286) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:04.002646+0000 osd.1 (osd.1) 4286 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:35.984496+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4287 sent 4286 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:05.023257+0000 osd.1 (osd.1) 4287 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9396> 2025-11-24T21:01:06.034+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4287) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:05.023257+0000 osd.1 (osd.1) 4287 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:36.984786+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4288 sent 4287 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:06.036465+0000 osd.1 (osd.1) 4288 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9382> 2025-11-24T21:01:07.044+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4288) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:06.036465+0000 osd.1 (osd.1) 4288 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:37.985002+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4289 sent 4288 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:07.046038+0000 osd.1 (osd.1) 4289 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9371> 2025-11-24T21:01:08.085+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4289) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:07.046038+0000 osd.1 (osd.1) 4289 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:38.986664+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4290 sent 4289 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:08.086683+0000 osd.1 (osd.1) 4290 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9360> 2025-11-24T21:01:09.067+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4290) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:08.086683+0000 osd.1 (osd.1) 4290 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:39.986896+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4291 sent 4290 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:09.068796+0000 osd.1 (osd.1) 4291 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9349> 2025-11-24T21:01:10.078+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4291) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:09.068796+0000 osd.1 (osd.1) 4291 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:40.987931+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4292 sent 4291 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:10.080517+0000 osd.1 (osd.1) 4292 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9337> 2025-11-24T21:01:11.102+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4292) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:10.080517+0000 osd.1 (osd.1) 4292 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:41.988138+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4293 sent 4292 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:11.104438+0000 osd.1 (osd.1) 4293 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9323> 2025-11-24T21:01:12.076+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4293) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:11.104438+0000 osd.1 (osd.1) 4293 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,18,1])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:42.988323+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4294 sent 4293 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:12.077824+0000 osd.1 (osd.1) 4294 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9311> 2025-11-24T21:01:13.061+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4294) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:12.077824+0000 osd.1 (osd.1) 4294 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:43.988555+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4295 sent 4294 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:13.062887+0000 osd.1 (osd.1) 4295 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9300> 2025-11-24T21:01:14.034+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4295) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:13.062887+0000 osd.1 (osd.1) 4295 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:44.988832+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4296 sent 4295 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:14.036378+0000 osd.1 (osd.1) 4296 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9289> 2025-11-24T21:01:15.043+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4296) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:14.036378+0000 osd.1 (osd.1) 4296 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:45.989063+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4297 sent 4296 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:15.044699+0000 osd.1 (osd.1) 4297 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9278> 2025-11-24T21:01:16.055+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4297) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:15.044699+0000 osd.1 (osd.1) 4297 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:46.989284+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4298 sent 4297 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:16.057095+0000 osd.1 (osd.1) 4298 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9264> 2025-11-24T21:01:17.097+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4298) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:16.057095+0000 osd.1 (osd.1) 4298 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:47.989472+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4299 sent 4298 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:17.098858+0000 osd.1 (osd.1) 4299 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9253> 2025-11-24T21:01:18.069+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4299) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:17.098858+0000 osd.1 (osd.1) 4299 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,17,2])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:48.989728+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4300 sent 4299 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:18.071147+0000 osd.1 (osd.1) 4300 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9241> 2025-11-24T21:01:19.110+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,17,2])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4300) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:18.071147+0000 osd.1 (osd.1) 4300 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:49.989965+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4301 sent 4300 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:19.111793+0000 osd.1 (osd.1) 4301 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9229> 2025-11-24T21:01:20.104+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4301) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:19.111793+0000 osd.1 (osd.1) 4301 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:50.990415+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4302 sent 4301 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:20.105517+0000 osd.1 (osd.1) 4302 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9218> 2025-11-24T21:01:21.142+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4302) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:20.105517+0000 osd.1 (osd.1) 4302 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:51.990640+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4303 sent 4302 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:21.142392+0000 osd.1 (osd.1) 4303 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9204> 2025-11-24T21:01:22.117+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4303) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:21.142392+0000 osd.1 (osd.1) 4303 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:52.990885+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4304 sent 4303 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:22.117717+0000 osd.1 (osd.1) 4304 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9193> 2025-11-24T21:01:23.095+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4304) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:22.117717+0000 osd.1 (osd.1) 4304 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:53.991134+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4305 sent 4304 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:23.096208+0000 osd.1 (osd.1) 4305 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9182> 2025-11-24T21:01:24.053+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4305) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:23.096208+0000 osd.1 (osd.1) 4305 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:54.991384+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4306 sent 4305 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:24.054032+0000 osd.1 (osd.1) 4306 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,16,3])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9170> 2025-11-24T21:01:25.055+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4306) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:24.054032+0000 osd.1 (osd.1) 4306 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,16,3])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:55.991685+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4307 sent 4306 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:25.056277+0000 osd.1 (osd.1) 4307 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9158> 2025-11-24T21:01:26.030+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4307) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:25.056277+0000 osd.1 (osd.1) 4307 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:56.991925+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4308 sent 4307 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:26.030729+0000 osd.1 (osd.1) 4308 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9144> 2025-11-24T21:01:27.043+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4308) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:26.030729+0000 osd.1 (osd.1) 4308 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:57.992202+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4309 sent 4308 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:27.044180+0000 osd.1 (osd.1) 4309 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9133> 2025-11-24T21:01:28.077+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4309) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:27.044180+0000 osd.1 (osd.1) 4309 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:58.992420+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4310 sent 4309 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:28.077814+0000 osd.1 (osd.1) 4310 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,15,4])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9121> 2025-11-24T21:01:29.059+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4310) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:28.077814+0000 osd.1 (osd.1) 4310 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:59.992645+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4311 sent 4310 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:29.060523+0000 osd.1 (osd.1) 4311 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9110> 2025-11-24T21:01:30.093+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4311) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:29.060523+0000 osd.1 (osd.1) 4311 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:00.992843+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4312 sent 4311 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:30.093952+0000 osd.1 (osd.1) 4312 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9099> 2025-11-24T21:01:31.051+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4312) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:30.093952+0000 osd.1 (osd.1) 4312 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:01.993677+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4313 sent 4312 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:31.052242+0000 osd.1 (osd.1) 4313 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9085> 2025-11-24T21:01:32.038+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4313) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:31.052242+0000 osd.1 (osd.1) 4313 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:02.995201+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4314 sent 4313 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:32.038809+0000 osd.1 (osd.1) 4314 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9074> 2025-11-24T21:01:33.056+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4314) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:32.038809+0000 osd.1 (osd.1) 4314 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:03.995412+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4315 sent 4314 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:33.057428+0000 osd.1 (osd.1) 4315 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9063> 2025-11-24T21:01:34.047+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,14,5])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4315) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:33.057428+0000 osd.1 (osd.1) 4315 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:04.995628+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4316 sent 4315 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:34.048032+0000 osd.1 (osd.1) 4316 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9051> 2025-11-24T21:01:35.087+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4316) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:34.048032+0000 osd.1 (osd.1) 4316 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:05.995831+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4317 sent 4316 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:35.088445+0000 osd.1 (osd.1) 4317 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9040> 2025-11-24T21:01:36.125+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4317) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:35.088445+0000 osd.1 (osd.1) 4317 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:06.996068+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4318 sent 4317 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:36.126515+0000 osd.1 (osd.1) 4318 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9026> 2025-11-24T21:01:37.089+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4318) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:36.126515+0000 osd.1 (osd.1) 4318 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:07.996325+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4319 sent 4318 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:37.090520+0000 osd.1 (osd.1) 4319 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9015> 2025-11-24T21:01:38.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4319) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:37.090520+0000 osd.1 (osd.1) 4319 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:08.996564+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4320 sent 4319 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:38.135878+0000 osd.1 (osd.1) 4320 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -9004> 2025-11-24T21:01:39.103+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,13,6])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4320) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:38.135878+0000 osd.1 (osd.1) 4320 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:09.996839+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4321 sent 4320 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:39.104666+0000 osd.1 (osd.1) 4321 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8992> 2025-11-24T21:01:40.134+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4321) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:39.104666+0000 osd.1 (osd.1) 4321 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:10.997000+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4322 sent 4321 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:40.135636+0000 osd.1 (osd.1) 4322 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8981> 2025-11-24T21:01:41.123+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4322) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:40.135636+0000 osd.1 (osd.1) 4322 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:11.997203+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4323 sent 4322 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:41.123917+0000 osd.1 (osd.1) 4323 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8967> 2025-11-24T21:01:42.115+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4323) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:41.123917+0000 osd.1 (osd.1) 4323 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:12.997424+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4324 sent 4323 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:42.116077+0000 osd.1 (osd.1) 4324 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8956> 2025-11-24T21:01:43.095+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4324) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:42.116077+0000 osd.1 (osd.1) 4324 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:13.997666+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4325 sent 4324 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:43.096415+0000 osd.1 (osd.1) 4325 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8945> 2025-11-24T21:01:44.087+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4325) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:43.096415+0000 osd.1 (osd.1) 4325 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,12,7])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:14.997947+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4326 sent 4325 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:44.088530+0000 osd.1 (osd.1) 4326 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8933> 2025-11-24T21:01:45.116+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4326) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:44.088530+0000 osd.1 (osd.1) 4326 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:15.998146+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4327 sent 4326 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:45.117560+0000 osd.1 (osd.1) 4327 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8922> 2025-11-24T21:01:46.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,12,7])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4327) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:45.117560+0000 osd.1 (osd.1) 4327 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:16.998398+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4328 sent 4327 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:46.166126+0000 osd.1 (osd.1) 4328 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8907> 2025-11-24T21:01:47.210+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4328) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:46.166126+0000 osd.1 (osd.1) 4328 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:17.998632+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4329 sent 4328 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:47.211958+0000 osd.1 (osd.1) 4329 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8896> 2025-11-24T21:01:48.184+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4329) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:47.211958+0000 osd.1 (osd.1) 4329 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:18.998866+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4330 sent 4329 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:48.185807+0000 osd.1 (osd.1) 4330 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8885> 2025-11-24T21:01:49.182+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4330) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:48.185807+0000 osd.1 (osd.1) 4330 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:19.999181+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4331 sent 4330 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:49.183164+0000 osd.1 (osd.1) 4331 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8874> 2025-11-24T21:01:50.160+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4331) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:49.183164+0000 osd.1 (osd.1) 4331 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:20.999444+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4332 sent 4331 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:50.161895+0000 osd.1 (osd.1) 4332 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8863> 2025-11-24T21:01:51.137+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4332) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:50.161895+0000 osd.1 (osd.1) 4332 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,11,8])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:21.999943+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4333 sent 4332 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:51.138976+0000 osd.1 (osd.1) 4333 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8848> 2025-11-24T21:01:52.171+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4333) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:51.138976+0000 osd.1 (osd.1) 4333 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:23.000179+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4334 sent 4333 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:52.173034+0000 osd.1 (osd.1) 4334 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8837> 2025-11-24T21:01:53.151+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4334) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:52.173034+0000 osd.1 (osd.1) 4334 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:24.000387+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4335 sent 4334 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:53.152925+0000 osd.1 (osd.1) 4335 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8826> 2025-11-24T21:01:54.106+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4335) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:53.152925+0000 osd.1 (osd.1) 4335 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:25.000657+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4336 sent 4335 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:54.108019+0000 osd.1 (osd.1) 4336 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8815> 2025-11-24T21:01:55.081+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4336) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:54.108019+0000 osd.1 (osd.1) 4336 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:26.000869+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4337 sent 4336 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:55.082429+0000 osd.1 (osd.1) 4337 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8804> 2025-11-24T21:01:56.078+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,9])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4337) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:55.082429+0000 osd.1 (osd.1) 4337 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:27.001096+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4338 sent 4337 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:56.080154+0000 osd.1 (osd.1) 4338 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8789> 2025-11-24T21:01:57.084+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4338) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:56.080154+0000 osd.1 (osd.1) 4338 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:28.001353+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4339 sent 4338 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:57.086184+0000 osd.1 (osd.1) 4339 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8778> 2025-11-24T21:01:58.093+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4339) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:57.086184+0000 osd.1 (osd.1) 4339 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:29.001672+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4340 sent 4339 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:58.093904+0000 osd.1 (osd.1) 4340 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8767> 2025-11-24T21:01:59.086+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4340) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:58.093904+0000 osd.1 (osd.1) 4340 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:30.001955+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4341 sent 4340 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:01:59.087371+0000 osd.1 (osd.1) 4341 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8756> 2025-11-24T21:02:00.119+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4341) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:01:59.087371+0000 osd.1 (osd.1) 4341 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:31.002268+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4342 sent 4341 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:00.119511+0000 osd.1 (osd.1) 4342 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8745> 2025-11-24T21:02:01.072+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4342) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:00.119511+0000 osd.1 (osd.1) 4342 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,9,10])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:32.003506+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4343 sent 4342 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:01.072860+0000 osd.1 (osd.1) 4343 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8730> 2025-11-24T21:02:02.086+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4343) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:01.072860+0000 osd.1 (osd.1) 4343 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:33.003785+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4344 sent 4343 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:02.086442+0000 osd.1 (osd.1) 4344 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,8,11])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8718> 2025-11-24T21:02:03.077+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4344) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:02.086442+0000 osd.1 (osd.1) 4344 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:34.004083+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4345 sent 4344 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:03.077909+0000 osd.1 (osd.1) 4345 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8707> 2025-11-24T21:02:04.098+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,8,11])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4345) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:03.077909+0000 osd.1 (osd.1) 4345 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:35.004407+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4346 sent 4345 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:04.099273+0000 osd.1 (osd.1) 4346 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8695> 2025-11-24T21:02:05.074+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4346) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:04.099273+0000 osd.1 (osd.1) 4346 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:36.004670+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4347 sent 4346 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:05.074943+0000 osd.1 (osd.1) 4347 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8684> 2025-11-24T21:02:06.041+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4347) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:05.074943+0000 osd.1 (osd.1) 4347 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8676> 2025-11-24T21:02:07.002+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:37.004951+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4349 sent 4347 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:06.042072+0000 osd.1 (osd.1) 4348 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:07.003136+0000 osd.1 (osd.1) 4349 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4349) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:06.042072+0000 osd.1 (osd.1) 4348 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:07.003136+0000 osd.1 (osd.1) 4349 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:38.005216+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8660> 2025-11-24T21:02:08.016+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,7,12])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:39.005428+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4350 sent 4349 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:08.017449+0000 osd.1 (osd.1) 4350 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8650> 2025-11-24T21:02:09.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4350) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:08.017449+0000 osd.1 (osd.1) 4350 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:40.005678+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4351 sent 4350 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:09.040102+0000 osd.1 (osd.1) 4351 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8639> 2025-11-24T21:02:10.024+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4351) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:09.040102+0000 osd.1 (osd.1) 4351 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8634> 2025-11-24T21:02:10.992+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:41.005904+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4353 sent 4351 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:10.025156+0000 osd.1 (osd.1) 4352 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:10.992974+0000 osd.1 (osd.1) 4353 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,7,12])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4353) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:10.025156+0000 osd.1 (osd.1) 4352 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:10.992974+0000 osd.1 (osd.1) 4353 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:42.006138+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8614> 2025-11-24T21:02:12.017+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,6,13])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:43.006309+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4354 sent 4353 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:12.018532+0000 osd.1 (osd.1) 4354 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8604> 2025-11-24T21:02:13.057+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4354) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:12.018532+0000 osd.1 (osd.1) 4354 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:44.006530+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4355 sent 4354 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:13.058322+0000 osd.1 (osd.1) 4355 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8593> 2025-11-24T21:02:14.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4355) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:13.058322+0000 osd.1 (osd.1) 4355 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8588> 2025-11-24T21:02:15.003+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:45.006801+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4357 sent 4355 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:14.051627+0000 osd.1 (osd.1) 4356 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:15.004181+0000 osd.1 (osd.1) 4357 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8578> 2025-11-24T21:02:15.988+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4357) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:14.051627+0000 osd.1 (osd.1) 4356 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:15.004181+0000 osd.1 (osd.1) 4357 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:46.006981+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4358 sent 4357 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:15.989340+0000 osd.1 (osd.1) 4358 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,6,13])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:47.007180+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4358) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:15.989340+0000 osd.1 (osd.1) 4358 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8557> 2025-11-24T21:02:17.008+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8554> 2025-11-24T21:02:17.997+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:48.007393+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4360 sent 4358 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:17.008773+0000 osd.1 (osd.1) 4359 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:17.997822+0000 osd.1 (osd.1) 4360 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,14])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:49.007609+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4360) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:17.008773+0000 osd.1 (osd.1) 4359 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:17.997822+0000 osd.1 (osd.1) 4360 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8537> 2025-11-24T21:02:19.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,14])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:50.007778+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4361 sent 4360 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:19.039990+0000 osd.1 (osd.1) 4361 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4361) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:19.039990+0000 osd.1 (osd.1) 4361 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8525> 2025-11-24T21:02:20.086+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,14])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:51.008021+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4362 sent 4361 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:20.087783+0000 osd.1 (osd.1) 4362 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4362) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:20.087783+0000 osd.1 (osd.1) 4362 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8513> 2025-11-24T21:02:21.065+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:52.008246+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4363 sent 4362 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:21.067395+0000 osd.1 (osd.1) 4363 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4363) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:21.067395+0000 osd.1 (osd.1) 4363 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8499> 2025-11-24T21:02:22.065+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:53.008507+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4364 sent 4363 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:22.067295+0000 osd.1 (osd.1) 4364 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8490> 2025-11-24T21:02:23.017+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4364) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:22.067295+0000 osd.1 (osd.1) 4364 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:54.008771+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4365 sent 4364 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:23.019517+0000 osd.1 (osd.1) 4365 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8480> 2025-11-24T21:02:24.008+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4365) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:23.019517+0000 osd.1 (osd.1) 4365 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,4,15])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:55.009017+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4366 sent 4365 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:24.010029+0000 osd.1 (osd.1) 4366 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8467> 2025-11-24T21:02:25.048+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4366) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:24.010029+0000 osd.1 (osd.1) 4366 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:56.009231+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4367 sent 4366 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:25.049577+0000 osd.1 (osd.1) 4367 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8456> 2025-11-24T21:02:26.071+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4367) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:25.049577+0000 osd.1 (osd.1) 4367 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:57.009481+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4368 sent 4367 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:26.072984+0000 osd.1 (osd.1) 4368 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8442> 2025-11-24T21:02:27.115+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4368) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:26.072984+0000 osd.1 (osd.1) 4368 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:58.009709+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4369 sent 4368 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:27.116409+0000 osd.1 (osd.1) 4369 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8431> 2025-11-24T21:02:28.101+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4369) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:27.116409+0000 osd.1 (osd.1) 4369 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:59.009913+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4370 sent 4369 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:28.103017+0000 osd.1 (osd.1) 4370 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,16])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8419> 2025-11-24T21:02:29.084+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4370) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:28.103017+0000 osd.1 (osd.1) 4370 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:00.010127+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4371 sent 4370 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:29.085682+0000 osd.1 (osd.1) 4371 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8408> 2025-11-24T21:02:30.112+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4371) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:29.085682+0000 osd.1 (osd.1) 4371 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:01.010393+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4372 sent 4371 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:30.113500+0000 osd.1 (osd.1) 4372 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8397> 2025-11-24T21:02:31.096+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4372) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:30.113500+0000 osd.1 (osd.1) 4372 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:02.010683+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4373 sent 4372 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:31.097692+0000 osd.1 (osd.1) 4373 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8383> 2025-11-24T21:02:32.120+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4373) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:31.097692+0000 osd.1 (osd.1) 4373 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:03.010958+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4374 sent 4373 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:32.122168+0000 osd.1 (osd.1) 4374 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8372> 2025-11-24T21:02:33.157+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4374) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:32.122168+0000 osd.1 (osd.1) 4374 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:04.011292+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4375 sent 4374 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:33.159296+0000 osd.1 (osd.1) 4375 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8361> 2025-11-24T21:02:34.173+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4375) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:33.159296+0000 osd.1 (osd.1) 4375 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,2,17])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:05.011866+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4376 sent 4375 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:34.175363+0000 osd.1 (osd.1) 4376 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8349> 2025-11-24T21:02:35.217+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4376) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:34.175363+0000 osd.1 (osd.1) 4376 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:06.012083+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4377 sent 4376 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:35.217726+0000 osd.1 (osd.1) 4377 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8338> 2025-11-24T21:02:36.251+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4377) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:35.217726+0000 osd.1 (osd.1) 4377 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:07.012314+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4378 sent 4377 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:36.251854+0000 osd.1 (osd.1) 4378 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8324> 2025-11-24T21:02:37.236+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4378) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:36.251854+0000 osd.1 (osd.1) 4378 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:08.012551+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4379 sent 4378 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:37.236889+0000 osd.1 (osd.1) 4379 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8313> 2025-11-24T21:02:38.245+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4379) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:37.236889+0000 osd.1 (osd.1) 4379 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:09.012778+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4380 sent 4379 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:38.246207+0000 osd.1 (osd.1) 4380 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8301> 2025-11-24T21:02:39.275+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4380) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:38.246207+0000 osd.1 (osd.1) 4380 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:10.013045+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4381 sent 4380 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:39.275724+0000 osd.1 (osd.1) 4381 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8290> 2025-11-24T21:02:40.284+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4381) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:39.275724+0000 osd.1 (osd.1) 4381 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:11.013297+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4382 sent 4381 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:40.285004+0000 osd.1 (osd.1) 4382 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8279> 2025-11-24T21:02:41.294+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4382) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:40.285004+0000 osd.1 (osd.1) 4382 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:12.013554+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4383 sent 4382 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:41.294783+0000 osd.1 (osd.1) 4383 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8265> 2025-11-24T21:02:42.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4383) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:41.294783+0000 osd.1 (osd.1) 4383 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:13.013833+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4384 sent 4383 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:42.264577+0000 osd.1 (osd.1) 4384 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8254> 2025-11-24T21:02:43.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4384) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:42.264577+0000 osd.1 (osd.1) 4384 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:14.014060+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4385 sent 4384 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:43.223306+0000 osd.1 (osd.1) 4385 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8242> 2025-11-24T21:02:44.196+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4385) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:43.223306+0000 osd.1 (osd.1) 4385 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:15.014307+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4386 sent 4385 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:44.197281+0000 osd.1 (osd.1) 4386 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8231> 2025-11-24T21:02:45.205+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4386) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:44.197281+0000 osd.1 (osd.1) 4386 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:16.014554+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4387 sent 4386 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:45.206051+0000 osd.1 (osd.1) 4387 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8219> 2025-11-24T21:02:46.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4387) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:45.206051+0000 osd.1 (osd.1) 4387 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:17.014881+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4388 sent 4387 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:46.239302+0000 osd.1 (osd.1) 4388 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8204> 2025-11-24T21:02:47.209+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4388) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:46.239302+0000 osd.1 (osd.1) 4388 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:18.015125+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4389 sent 4388 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:47.209982+0000 osd.1 (osd.1) 4389 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8193> 2025-11-24T21:02:48.247+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4389) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:47.209982+0000 osd.1 (osd.1) 4389 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:19.015337+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4390 sent 4389 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:48.248406+0000 osd.1 (osd.1) 4390 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8182> 2025-11-24T21:02:49.287+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4390) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:48.248406+0000 osd.1 (osd.1) 4390 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:20.015683+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4391 sent 4390 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:49.288217+0000 osd.1 (osd.1) 4391 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8171> 2025-11-24T21:02:50.327+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4391) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:49.288217+0000 osd.1 (osd.1) 4391 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:21.015921+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4392 sent 4391 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:50.327859+0000 osd.1 (osd.1) 4392 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8159> 2025-11-24T21:02:51.283+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4392) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:50.327859+0000 osd.1 (osd.1) 4392 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:22.016121+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4393 sent 4392 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:51.283798+0000 osd.1 (osd.1) 4393 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8144> 2025-11-24T21:02:52.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4393) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:51.283798+0000 osd.1 (osd.1) 4393 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:23.016314+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4394 sent 4393 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:52.238779+0000 osd.1 (osd.1) 4394 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8133> 2025-11-24T21:02:53.205+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4394) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:52.238779+0000 osd.1 (osd.1) 4394 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:24.016547+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4395 sent 4394 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:53.205964+0000 osd.1 (osd.1) 4395 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8122> 2025-11-24T21:02:54.189+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4395) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:53.205964+0000 osd.1 (osd.1) 4395 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:25.017041+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4396 sent 4395 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:54.189747+0000 osd.1 (osd.1) 4396 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8111> 2025-11-24T21:02:55.157+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4396) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:54.189747+0000 osd.1 (osd.1) 4396 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:26.017362+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4397 sent 4396 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:55.157805+0000 osd.1 (osd.1) 4397 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8100> 2025-11-24T21:02:56.169+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4397) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:55.157805+0000 osd.1 (osd.1) 4397 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:27.017662+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4398 sent 4397 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:56.169916+0000 osd.1 (osd.1) 4398 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8085> 2025-11-24T21:02:57.166+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4398) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:56.169916+0000 osd.1 (osd.1) 4398 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:28.017889+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4399 sent 4398 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:57.166928+0000 osd.1 (osd.1) 4399 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8074> 2025-11-24T21:02:58.207+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4399) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:57.166928+0000 osd.1 (osd.1) 4399 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:29.018112+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4400 sent 4399 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:58.208690+0000 osd.1 (osd.1) 4400 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8063> 2025-11-24T21:02:59.248+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4400) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:58.208690+0000 osd.1 (osd.1) 4400 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:30.018331+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4401 sent 4400 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:02:59.249770+0000 osd.1 (osd.1) 4401 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8051> 2025-11-24T21:03:00.208+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4401) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:02:59.249770+0000 osd.1 (osd.1) 4401 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:31.018718+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4402 sent 4401 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:00.210037+0000 osd.1 (osd.1) 4402 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8040> 2025-11-24T21:03:01.182+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4402) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:00.210037+0000 osd.1 (osd.1) 4402 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:32.019024+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4403 sent 4402 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:01.183980+0000 osd.1 (osd.1) 4403 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8026> 2025-11-24T21:03:02.170+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4403) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:01.183980+0000 osd.1 (osd.1) 4403 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:33.019264+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4404 sent 4403 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:02.172390+0000 osd.1 (osd.1) 4404 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8015> 2025-11-24T21:03:03.192+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4404) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:02.172390+0000 osd.1 (osd.1) 4404 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:34.019511+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4405 sent 4404 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:03.193935+0000 osd.1 (osd.1) 4405 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -8004> 2025-11-24T21:03:04.217+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4405) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:03.193935+0000 osd.1 (osd.1) 4405 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:35.019843+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4406 sent 4405 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:04.218849+0000 osd.1 (osd.1) 4406 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7992> 2025-11-24T21:03:05.225+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4406) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:04.218849+0000 osd.1 (osd.1) 4406 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:36.020081+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4407 sent 4406 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:05.226653+0000 osd.1 (osd.1) 4407 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7980> 2025-11-24T21:03:06.183+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4407) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:05.226653+0000 osd.1 (osd.1) 4407 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:37.020348+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4408 sent 4407 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:06.185091+0000 osd.1 (osd.1) 4408 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7966> 2025-11-24T21:03:07.185+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4408) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:06.185091+0000 osd.1 (osd.1) 4408 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:38.020578+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4409 sent 4408 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:07.186747+0000 osd.1 (osd.1) 4409 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7954> 2025-11-24T21:03:08.176+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4409) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:07.186747+0000 osd.1 (osd.1) 4409 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:39.020881+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4410 sent 4409 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:08.177938+0000 osd.1 (osd.1) 4410 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7943> 2025-11-24T21:03:09.194+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4410) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:08.177938+0000 osd.1 (osd.1) 4410 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:40.021117+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4411 sent 4410 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:09.195404+0000 osd.1 (osd.1) 4411 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7932> 2025-11-24T21:03:10.240+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4411) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:09.195404+0000 osd.1 (osd.1) 4411 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:41.021405+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4412 sent 4411 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:10.242575+0000 osd.1 (osd.1) 4412 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7921> 2025-11-24T21:03:11.193+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4412) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:10.242575+0000 osd.1 (osd.1) 4412 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:42.021755+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4413 sent 4412 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:11.195505+0000 osd.1 (osd.1) 4413 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
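Alongside each _resize_shards pass the OSD also re-carves two RocksDB block-cache pools, and the two printed ratios are exact fractions in disguise, 2/7 and 1/18 (which pools these belong to is not stated in the log; the snippet below only confirms the arithmetic):

    from fractions import Fraction

    for ratio in (0.285714, 0.0555556):
        print(ratio, "=", Fraction(ratio).limit_denominator(100))
    # 0.285714 = 2/7
    # 0.0555556 = 1/18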
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7907> 2025-11-24T21:03:12.184+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4413) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:11.195505+0000 osd.1 (osd.1) 4413 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:43.022024+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4414 sent 4413 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:12.186349+0000 osd.1 (osd.1) 4414 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7895> 2025-11-24T21:03:13.189+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4414) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:12.186349+0000 osd.1 (osd.1) 4414 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:44.022252+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4415 sent 4414 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:13.189872+0000 osd.1 (osd.1) 4415 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7884> 2025-11-24T21:03:14.181+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4415) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:13.189872+0000 osd.1 (osd.1) 4415 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:45.022634+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4416 sent 4415 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:14.182305+0000 osd.1 (osd.1) 4416 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7871> 2025-11-24T21:03:15.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4416) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:14.182305+0000 osd.1 (osd.1) 4416 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:46.022997+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4417 sent 4416 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:15.223273+0000 osd.1 (osd.1) 4417 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7860> 2025-11-24T21:03:16.270+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4417) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:15.223273+0000 osd.1 (osd.1) 4417 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:47.023414+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4418 sent 4417 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:16.270781+0000 osd.1 (osd.1) 4418 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7846> 2025-11-24T21:03:17.279+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4418) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:16.270781+0000 osd.1 (osd.1) 4418 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:48.024887+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4419 sent 4418 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:17.280378+0000 osd.1 (osd.1) 4419 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7835> 2025-11-24T21:03:18.290+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4419) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:17.280378+0000 osd.1 (osd.1) 4419 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:49.025362+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4420 sent 4419 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:18.290767+0000 osd.1 (osd.1) 4420 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7823> 2025-11-24T21:03:19.276+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4420) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:18.290767+0000 osd.1 (osd.1) 4420 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:50.026422+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4421 sent 4420 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:19.276894+0000 osd.1 (osd.1) 4421 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7812> 2025-11-24T21:03:20.313+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4421) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:19.276894+0000 osd.1 (osd.1) 4421 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:51.027240+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4422 sent 4421 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:20.314069+0000 osd.1 (osd.1) 4422 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7801> 2025-11-24T21:03:21.351+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4422) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:20.314069+0000 osd.1 (osd.1) 4422 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:52.027474+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4423 sent 4422 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:21.351964+0000 osd.1 (osd.1) 4423 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7787> 2025-11-24T21:03:22.361+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:53.028254+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4424 sent 4423 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:22.361706+0000 osd.1 (osd.1) 4424 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4423) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:21.351964+0000 osd.1 (osd.1) 4423 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4424) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:22.361706+0000 osd.1 (osd.1) 4424 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
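One cycle breaks the steady one-in, one-out rhythm: at 21:03:22 the queue briefly holds two entries (num 2), because entry 4424 was queued before the ack for 4423 had arrived; the back-to-back handle_log_ack lines above then retire both in order. The counters stay internally consistent under the same reading as the model earlier:

    # From "log_queue is 2 last_log 4424 sent 4423 num 2 unsent 1 sending 1"
    last_log, sent, num = 4424, 4423, 2
    unsent = last_log - sent       # only 4424 had not been sent yet
    awaiting_ack = num - unsent    # 4423 was sent but still unacked
    print(unsent, awaiting_ack)    # 1 1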
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7774> 2025-11-24T21:03:23.381+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:54.028616+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4425 sent 4424 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:23.381933+0000 osd.1 (osd.1) 4425 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4425) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:23.381933+0000 osd.1 (osd.1) 4425 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7762> 2025-11-24T21:03:24.425+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:55.030135+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4426 sent 4425 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:24.426298+0000 osd.1 (osd.1) 4426 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4426) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:24.426298+0000 osd.1 (osd.1) 4426 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7751> 2025-11-24T21:03:25.383+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:56.031089+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4427 sent 4426 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:25.383705+0000 osd.1 (osd.1) 4427 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4427) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:25.383705+0000 osd.1 (osd.1) 4427 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7740> 2025-11-24T21:03:26.379+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:57.031305+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4428 sent 4427 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:26.379933+0000 osd.1 (osd.1) 4428 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4428) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:26.379933+0000 osd.1 (osd.1) 4428 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7726> 2025-11-24T21:03:27.426+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:58.031476+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4429 sent 4428 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:27.427531+0000 osd.1 (osd.1) 4429 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4429) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:27.427531+0000 osd.1 (osd.1) 4429 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7714> 2025-11-24T21:03:28.385+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:59.031837+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4430 sent 4429 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:28.386183+0000 osd.1 (osd.1) 4430 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4430) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:28.386183+0000 osd.1 (osd.1) 4430 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7703> 2025-11-24T21:03:29.359+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:00.032130+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4431 sent 4430 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:29.360222+0000 osd.1 (osd.1) 4431 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4431) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:29.360222+0000 osd.1 (osd.1) 4431 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7692> 2025-11-24T21:03:30.382+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:01.032331+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4432 sent 4431 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:30.383347+0000 osd.1 (osd.1) 4432 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4432) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:30.383347+0000 osd.1 (osd.1) 4432 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7681> 2025-11-24T21:03:31.414+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:02.032528+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4433 sent 4432 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:31.415109+0000 osd.1 (osd.1) 4433 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4433) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:31.415109+0000 osd.1 (osd.1) 4433 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7666> 2025-11-24T21:03:32.385+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:03.032742+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4434 sent 4433 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:32.385771+0000 osd.1 (osd.1) 4434 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7657> 2025-11-24T21:03:33.365+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4434) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:32.385771+0000 osd.1 (osd.1) 4434 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:04.032972+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4435 sent 4434 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:33.366284+0000 osd.1 (osd.1) 4435 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7645> 2025-11-24T21:03:34.321+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4435) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:33.366284+0000 osd.1 (osd.1) 4435 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:05.033528+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4436 sent 4435 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:34.322319+0000 osd.1 (osd.1) 4436 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7634> 2025-11-24T21:03:35.359+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: mgrc ms_handle_reset ms_handle_reset con 0x55ba3f6b9c00
Nov 24 21:14:15 compute-0 ceph-osd[89640]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/103018990
Nov 24 21:14:15 compute-0 ceph-osd[89640]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/103018990,v1:192.168.122.100:6801/103018990]
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: get_auth_request con 0x55ba3e4e5c00 auth_method 0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: mgrc handle_mgr_configure stats_period=5
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4436) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:34.322319+0000 osd.1 (osd.1) 4436 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:06.033766+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4437 sent 4436 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:35.360483+0000 osd.1 (osd.1) 4437 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7618> 2025-11-24T21:03:36.401+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4437) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:35.360483+0000 osd.1 (osd.1) 4437 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:07.033955+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4438 sent 4437 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:36.403445+0000 osd.1 (osd.1) 4438 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7604> 2025-11-24T21:03:37.410+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4438) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:36.403445+0000 osd.1 (osd.1) 4438 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:08.034125+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4439 sent 4438 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:37.411869+0000 osd.1 (osd.1) 4439 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7592> 2025-11-24T21:03:38.375+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4439) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:37.411869+0000 osd.1 (osd.1) 4439 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:09.034401+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4440 sent 4439 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:38.377138+0000 osd.1 (osd.1) 4440 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 ms_handle_reset con 0x55ba3f6b9400 session 0x55ba3cac2960
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3ce04800
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7579> 2025-11-24T21:03:39.424+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 ms_handle_reset con 0x55ba3d70a000 session 0x55ba393a9860
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3f6b8400
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 ms_handle_reset con 0x55ba3cb5a400 session 0x55ba3a8ef2c0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3d70a000
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4440) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:38.377138+0000 osd.1 (osd.1) 4440 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:10.034629+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4441 sent 4440 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:39.426524+0000 osd.1 (osd.1) 4441 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7564> 2025-11-24T21:03:40.384+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4441) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:39.426524+0000 osd.1 (osd.1) 4441 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:11.034840+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4442 sent 4441 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:40.385018+0000 osd.1 (osd.1) 4442 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7553> 2025-11-24T21:03:41.410+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4442) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:40.385018+0000 osd.1 (osd.1) 4442 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:12.035028+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4443 sent 4442 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:41.410985+0000 osd.1 (osd.1) 4443 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7539> 2025-11-24T21:03:42.442+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4443) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:41.410985+0000 osd.1 (osd.1) 4443 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:13.035258+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4444 sent 4443 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:42.444558+0000 osd.1 (osd.1) 4444 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7527> 2025-11-24T21:03:43.469+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4444) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:42.444558+0000 osd.1 (osd.1) 4444 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:14.035474+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4445 sent 4444 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:43.471118+0000 osd.1 (osd.1) 4445 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7516> 2025-11-24T21:03:44.421+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:15.035891+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4446 sent 4445 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:44.423007+0000 osd.1 (osd.1) 4446 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4445) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:43.471118+0000 osd.1 (osd.1) 4445 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7505> 2025-11-24T21:03:45.410+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:16.036063+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4447 sent 4446 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:45.412208+0000 osd.1 (osd.1) 4447 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4446) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:44.423007+0000 osd.1 (osd.1) 4446 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4447) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:45.412208+0000 osd.1 (osd.1) 4447 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7491> 2025-11-24T21:03:46.433+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:17.036286+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4448 sent 4447 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:46.434442+0000 osd.1 (osd.1) 4448 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4448) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:46.434442+0000 osd.1 (osd.1) 4448 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7477> 2025-11-24T21:03:47.390+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:18.036509+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4449 sent 4448 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:47.391357+0000 osd.1 (osd.1) 4449 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4449) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:47.391357+0000 osd.1 (osd.1) 4449 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7467> 2025-11-24T21:03:48.345+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:19.036756+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4450 sent 4449 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:48.347803+0000 osd.1 (osd.1) 4450 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4450) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:48.347803+0000 osd.1 (osd.1) 4450 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7456> 2025-11-24T21:03:49.326+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:20.036968+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4451 sent 4450 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:49.328136+0000 osd.1 (osd.1) 4451 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4451) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:49.328136+0000 osd.1 (osd.1) 4451 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7445> 2025-11-24T21:03:50.302+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:21.037189+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4452 sent 4451 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:50.303239+0000 osd.1 (osd.1) 4452 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4452) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:50.303239+0000 osd.1 (osd.1) 4452 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7433> 2025-11-24T21:03:51.293+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:22.037442+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4453 sent 4452 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:51.293863+0000 osd.1 (osd.1) 4453 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4453) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:51.293863+0000 osd.1 (osd.1) 4453 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7419> 2025-11-24T21:03:52.294+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:23.037675+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4454 sent 4453 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:52.295371+0000 osd.1 (osd.1) 4454 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4454) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:52.295371+0000 osd.1 (osd.1) 4454 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7408> 2025-11-24T21:03:53.307+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:24.037954+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4455 sent 4454 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:53.308797+0000 osd.1 (osd.1) 4455 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4455) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:53.308797+0000 osd.1 (osd.1) 4455 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7397> 2025-11-24T21:03:54.273+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:25.038243+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4456 sent 4455 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:54.274627+0000 osd.1 (osd.1) 4456 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4456) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:54.274627+0000 osd.1 (osd.1) 4456 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7386> 2025-11-24T21:03:55.281+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:26.038463+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4457 sent 4456 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:55.282512+0000 osd.1 (osd.1) 4457 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4457) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:55.282512+0000 osd.1 (osd.1) 4457 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7374> 2025-11-24T21:03:56.248+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:27.038731+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4458 sent 4457 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:56.249699+0000 osd.1 (osd.1) 4458 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4458) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:56.249699+0000 osd.1 (osd.1) 4458 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7359> 2025-11-24T21:03:57.287+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:28.039025+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4459 sent 4458 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:57.287941+0000 osd.1 (osd.1) 4459 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4459) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:57.287941+0000 osd.1 (osd.1) 4459 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7348> 2025-11-24T21:03:58.295+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:29.039305+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4460 sent 4459 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:58.296556+0000 osd.1 (osd.1) 4460 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4460) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:58.296556+0000 osd.1 (osd.1) 4460 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7337> 2025-11-24T21:03:59.337+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:30.039521+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4461 sent 4460 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:03:59.338554+0000 osd.1 (osd.1) 4461 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4461) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:03:59.338554+0000 osd.1 (osd.1) 4461 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7325> 2025-11-24T21:04:00.377+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:31.039744+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4462 sent 4461 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:00.377726+0000 osd.1 (osd.1) 4462 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4462) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:00.377726+0000 osd.1 (osd.1) 4462 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7314> 2025-11-24T21:04:01.371+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:32.039929+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4463 sent 4462 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:01.371895+0000 osd.1 (osd.1) 4463 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4463) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:01.371895+0000 osd.1 (osd.1) 4463 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7300> 2025-11-24T21:04:02.376+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:33.040206+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4464 sent 4463 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:02.377064+0000 osd.1 (osd.1) 4464 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4464) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:02.377064+0000 osd.1 (osd.1) 4464 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7289> 2025-11-24T21:04:03.346+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:34.040511+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4465 sent 4464 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:03.346870+0000 osd.1 (osd.1) 4465 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4465) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:03.346870+0000 osd.1 (osd.1) 4465 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7278> 2025-11-24T21:04:04.331+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:35.041171+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4466 sent 4465 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:04.332139+0000 osd.1 (osd.1) 4466 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4466) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:04.332139+0000 osd.1 (osd.1) 4466 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7267> 2025-11-24T21:04:05.327+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:36.041433+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4467 sent 4466 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:05.328016+0000 osd.1 (osd.1) 4467 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4467) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:05.328016+0000 osd.1 (osd.1) 4467 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7255> 2025-11-24T21:04:06.372+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:37.041730+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4468 sent 4467 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:06.373127+0000 osd.1 (osd.1) 4468 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4468) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:06.373127+0000 osd.1 (osd.1) 4468 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7241> 2025-11-24T21:04:07.395+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:38.041956+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4469 sent 4468 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:07.396237+0000 osd.1 (osd.1) 4469 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4469) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:07.396237+0000 osd.1 (osd.1) 4469 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7230> 2025-11-24T21:04:08.386+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:39.042198+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4470 sent 4469 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:08.387031+0000 osd.1 (osd.1) 4470 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4470) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:08.387031+0000 osd.1 (osd.1) 4470 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7219> 2025-11-24T21:04:09.386+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:40.042422+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4471 sent 4470 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:09.387305+0000 osd.1 (osd.1) 4471 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4471) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:09.387305+0000 osd.1 (osd.1) 4471 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7206> 2025-11-24T21:04:10.421+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:41.042647+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4472 sent 4471 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:10.422485+0000 osd.1 (osd.1) 4472 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4472) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:10.422485+0000 osd.1 (osd.1) 4472 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7194> 2025-11-24T21:04:11.462+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:42.042895+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4473 sent 4472 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:11.462752+0000 osd.1 (osd.1) 4473 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4473) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:11.462752+0000 osd.1 (osd.1) 4473 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7180> 2025-11-24T21:04:12.478+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:43.043141+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4474 sent 4473 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:12.478948+0000 osd.1 (osd.1) 4474 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4474) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:12.478948+0000 osd.1 (osd.1) 4474 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7168> 2025-11-24T21:04:13.490+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:44.043397+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4475 sent 4474 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:13.492056+0000 osd.1 (osd.1) 4475 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4475) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:13.492056+0000 osd.1 (osd.1) 4475 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7157> 2025-11-24T21:04:14.472+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:45.043671+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4476 sent 4475 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:14.473805+0000 osd.1 (osd.1) 4476 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4476) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:14.473805+0000 osd.1 (osd.1) 4476 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7146> 2025-11-24T21:04:15.516+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:46.043979+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4477 sent 4476 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:15.517866+0000 osd.1 (osd.1) 4477 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4477) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:15.517866+0000 osd.1 (osd.1) 4477 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7134> 2025-11-24T21:04:16.522+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:47.044234+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4478 sent 4477 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:16.523923+0000 osd.1 (osd.1) 4478 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4478) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:16.523923+0000 osd.1 (osd.1) 4478 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7119> 2025-11-24T21:04:17.546+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:48.044502+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4479 sent 4478 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:17.548280+0000 osd.1 (osd.1) 4479 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4479) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:17.548280+0000 osd.1 (osd.1) 4479 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7108> 2025-11-24T21:04:18.578+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:49.044740+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4480 sent 4479 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:18.579579+0000 osd.1 (osd.1) 4480 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4480) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:18.579579+0000 osd.1 (osd.1) 4480 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7097> 2025-11-24T21:04:19.587+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:50.045032+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4481 sent 4480 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:19.589187+0000 osd.1 (osd.1) 4481 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4481) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:19.589187+0000 osd.1 (osd.1) 4481 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7085> 2025-11-24T21:04:20.541+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:51.045269+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4482 sent 4481 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:20.543076+0000 osd.1 (osd.1) 4482 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4482) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:20.543076+0000 osd.1 (osd.1) 4482 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7074> 2025-11-24T21:04:21.559+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:52.045494+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4483 sent 4482 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:21.561101+0000 osd.1 (osd.1) 4483 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4483) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:21.561101+0000 osd.1 (osd.1) 4483 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7060> 2025-11-24T21:04:22.597+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:53.045694+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4484 sent 4483 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:22.599268+0000 osd.1 (osd.1) 4484 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4484) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:22.599268+0000 osd.1 (osd.1) 4484 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7049> 2025-11-24T21:04:23.646+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:54.045969+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4485 sent 4484 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:23.648009+0000 osd.1 (osd.1) 4485 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4485) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:23.648009+0000 osd.1 (osd.1) 4485 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7037> 2025-11-24T21:04:24.694+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:55.046223+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4486 sent 4485 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:24.695996+0000 osd.1 (osd.1) 4486 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4486) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:24.695996+0000 osd.1 (osd.1) 4486 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7026> 2025-11-24T21:04:25.698+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:56.046499+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4487 sent 4486 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:25.699638+0000 osd.1 (osd.1) 4487 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4487) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:25.699638+0000 osd.1 (osd.1) 4487 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7015> 2025-11-24T21:04:26.734+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:57.046826+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4488 sent 4487 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:26.735835+0000 osd.1 (osd.1) 4488 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4488) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:26.735835+0000 osd.1 (osd.1) 4488 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -7000> 2025-11-24T21:04:27.731+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:58.047055+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4489 sent 4488 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:27.733315+0000 osd.1 (osd.1) 4489 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4489) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:27.733315+0000 osd.1 (osd.1) 4489 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6988> 2025-11-24T21:04:28.727+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:59.047292+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4490 sent 4489 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:28.727724+0000 osd.1 (osd.1) 4490 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4490) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:28.727724+0000 osd.1 (osd.1) 4490 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6977> 2025-11-24T21:04:29.771+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:00.047528+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4491 sent 4490 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:29.772080+0000 osd.1 (osd.1) 4491 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4491) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:29.772080+0000 osd.1 (osd.1) 4491 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6966> 2025-11-24T21:04:30.757+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:01.047805+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4492 sent 4491 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:30.758167+0000 osd.1 (osd.1) 4492 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4492) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:30.758167+0000 osd.1 (osd.1) 4492 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6955> 2025-11-24T21:04:31.792+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:02.048085+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4493 sent 4492 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:31.792956+0000 osd.1 (osd.1) 4493 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4493) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:31.792956+0000 osd.1 (osd.1) 4493 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6941> 2025-11-24T21:04:32.824+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:03.048303+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4494 sent 4493 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:32.824579+0000 osd.1 (osd.1) 4494 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4494) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:32.824579+0000 osd.1 (osd.1) 4494 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6929> 2025-11-24T21:04:33.822+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:04.048624+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4495 sent 4494 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:33.823684+0000 osd.1 (osd.1) 4495 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4495) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:33.823684+0000 osd.1 (osd.1) 4495 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6918> 2025-11-24T21:04:34.834+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:05.048871+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4496 sent 4495 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:34.835473+0000 osd.1 (osd.1) 4496 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4496) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:34.835473+0000 osd.1 (osd.1) 4496 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6907> 2025-11-24T21:04:35.818+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:06.049120+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4497 sent 4496 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:35.819076+0000 osd.1 (osd.1) 4497 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4497) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:35.819076+0000 osd.1 (osd.1) 4497 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6896> 2025-11-24T21:04:36.780+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:07.049372+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4498 sent 4497 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:36.781446+0000 osd.1 (osd.1) 4498 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4498) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:36.781446+0000 osd.1 (osd.1) 4498 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6881> 2025-11-24T21:04:37.799+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:08.049644+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4499 sent 4498 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:37.800639+0000 osd.1 (osd.1) 4499 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4499) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:37.800639+0000 osd.1 (osd.1) 4499 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6870> 2025-11-24T21:04:38.840+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:09.049839+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4500 sent 4499 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:38.841490+0000 osd.1 (osd.1) 4500 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4500) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:38.841490+0000 osd.1 (osd.1) 4500 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6859> 2025-11-24T21:04:39.843+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:10.050047+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4501 sent 4500 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:39.844178+0000 osd.1 (osd.1) 4501 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4501) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:39.844178+0000 osd.1 (osd.1) 4501 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6848> 2025-11-24T21:04:40.875+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:11.050268+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4502 sent 4501 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:40.876484+0000 osd.1 (osd.1) 4502 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4502) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:40.876484+0000 osd.1 (osd.1) 4502 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6837> 2025-11-24T21:04:41.900+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:12.050489+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4503 sent 4502 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:41.901334+0000 osd.1 (osd.1) 4503 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4503) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:41.901334+0000 osd.1 (osd.1) 4503 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6822> 2025-11-24T21:04:42.866+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:13.050687+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4504 sent 4503 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:42.866909+0000 osd.1 (osd.1) 4504 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4504) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:42.866909+0000 osd.1 (osd.1) 4504 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6811> 2025-11-24T21:04:43.872+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:14.050919+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4505 sent 4504 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:43.873129+0000 osd.1 (osd.1) 4505 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4505) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:43.873129+0000 osd.1 (osd.1) 4505 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6799> 2025-11-24T21:04:44.920+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:15.051190+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4506 sent 4505 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:44.920839+0000 osd.1 (osd.1) 4506 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4506) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:44.920839+0000 osd.1 (osd.1) 4506 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6788> 2025-11-24T21:04:45.896+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:16.051451+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4507 sent 4506 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:45.897134+0000 osd.1 (osd.1) 4507 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4507) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:45.897134+0000 osd.1 (osd.1) 4507 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6777> 2025-11-24T21:04:46.937+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:17.051646+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4508 sent 4507 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:46.937784+0000 osd.1 (osd.1) 4508 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4508) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:46.937784+0000 osd.1 (osd.1) 4508 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6763> 2025-11-24T21:04:47.971+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:18.051840+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4509 sent 4508 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:47.972563+0000 osd.1 (osd.1) 4509 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4509) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:47.972563+0000 osd.1 (osd.1) 4509 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6752> 2025-11-24T21:04:48.960+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:19.052015+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4510 sent 4509 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:48.961261+0000 osd.1 (osd.1) 4510 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4510) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:48.961261+0000 osd.1 (osd.1) 4510 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6741> 2025-11-24T21:04:49.970+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:20.052246+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4511 sent 4510 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:49.971905+0000 osd.1 (osd.1) 4511 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4511) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:49.971905+0000 osd.1 (osd.1) 4511 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6729> 2025-11-24T21:04:50.949+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:21.052440+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4512 sent 4511 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:50.950813+0000 osd.1 (osd.1) 4512 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4512) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:50.950813+0000 osd.1 (osd.1) 4512 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6717> 2025-11-24T21:04:51.937+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:22.052648+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4513 sent 4512 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:51.938500+0000 osd.1 (osd.1) 4513 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4513) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:51.938500+0000 osd.1 (osd.1) 4513 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6703> 2025-11-24T21:04:52.985+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:23.052858+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4514 sent 4513 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:52.986903+0000 osd.1 (osd.1) 4514 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4514) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:52.986903+0000 osd.1 (osd.1) 4514 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6691> 2025-11-24T21:04:54.015+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:24.053079+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4515 sent 4514 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:54.016221+0000 osd.1 (osd.1) 4515 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4515) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:54.016221+0000 osd.1 (osd.1) 4515 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6680> 2025-11-24T21:04:55.036+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:25.053330+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4516 sent 4515 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:55.037229+0000 osd.1 (osd.1) 4516 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4516) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:55.037229+0000 osd.1 (osd.1) 4516 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6669> 2025-11-24T21:04:56.026+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:26.053627+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4517 sent 4516 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:56.027827+0000 osd.1 (osd.1) 4517 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4517) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:56.027827+0000 osd.1 (osd.1) 4517 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6658> 2025-11-24T21:04:57.007+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:27.053973+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4518 sent 4517 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:57.008250+0000 osd.1 (osd.1) 4518 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4518) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:57.008250+0000 osd.1 (osd.1) 4518 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6643> 2025-11-24T21:04:57.960+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:28.054265+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4519 sent 4518 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:57.961970+0000 osd.1 (osd.1) 4519 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4519) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:57.961970+0000 osd.1 (osd.1) 4519 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6632> 2025-11-24T21:04:58.967+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:29.054673+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4520 sent 4519 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:58.968485+0000 osd.1 (osd.1) 4520 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4520) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:58.968485+0000 osd.1 (osd.1) 4520 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6621> 2025-11-24T21:04:59.991+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:30.055037+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4521 sent 4520 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:04:59.992689+0000 osd.1 (osd.1) 4521 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4521) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:04:59.992689+0000 osd.1 (osd.1) 4521 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6610> 2025-11-24T21:05:01.031+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:31.055266+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4522 sent 4521 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:01.032532+0000 osd.1 (osd.1) 4522 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4522) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:01.032532+0000 osd.1 (osd.1) 4522 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6599> 2025-11-24T21:05:02.051+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:32.055491+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4523 sent 4522 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:02.053294+0000 osd.1 (osd.1) 4523 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4523) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:02.053294+0000 osd.1 (osd.1) 4523 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6584> 2025-11-24T21:05:03.015+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:33.055709+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4524 sent 4523 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:03.016392+0000 osd.1 (osd.1) 4524 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4524) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:03.016392+0000 osd.1 (osd.1) 4524 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6573> 2025-11-24T21:05:03.999+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:34.055923+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4525 sent 4524 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:04.001288+0000 osd.1 (osd.1) 4525 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4525) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:04.001288+0000 osd.1 (osd.1) 4525 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6562> 2025-11-24T21:05:04.965+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:35.056155+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4526 sent 4525 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:04.967546+0000 osd.1 (osd.1) 4526 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6553> 2025-11-24T21:05:05.919+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4526) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:04.967546+0000 osd.1 (osd.1) 4526 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:36.057253+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4527 sent 4526 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:05.920372+0000 osd.1 (osd.1) 4527 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6542> 2025-11-24T21:05:06.947+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4527) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:05.920372+0000 osd.1 (osd.1) 4527 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:37.057642+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4528 sent 4527 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:06.947553+0000 osd.1 (osd.1) 4528 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6528> 2025-11-24T21:05:07.905+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4528) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:06.947553+0000 osd.1 (osd.1) 4528 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:38.057903+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4529 sent 4528 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:07.906201+0000 osd.1 (osd.1) 4529 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6516> 2025-11-24T21:05:08.943+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4529) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:07.906201+0000 osd.1 (osd.1) 4529 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:39.058200+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4530 sent 4529 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:08.944369+0000 osd.1 (osd.1) 4530 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6505> 2025-11-24T21:05:09.947+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4530) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:08.944369+0000 osd.1 (osd.1) 4530 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:40.058716+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4531 sent 4530 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:09.948574+0000 osd.1 (osd.1) 4531 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6494> 2025-11-24T21:05:10.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4531) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:09.948574+0000 osd.1 (osd.1) 4531 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:41.058969+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4532 sent 4531 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:10.976003+0000 osd.1 (osd.1) 4532 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6483> 2025-11-24T21:05:11.940+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4532) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:10.976003+0000 osd.1 (osd.1) 4532 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:42.059243+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4533 sent 4532 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:11.941019+0000 osd.1 (osd.1) 4533 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6469> 2025-11-24T21:05:12.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4533) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:11.941019+0000 osd.1 (osd.1) 4533 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:43.059478+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4534 sent 4533 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:12.976542+0000 osd.1 (osd.1) 4534 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6458> 2025-11-24T21:05:13.966+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:44.059773+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4535 sent 4534 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:13.967329+0000 osd.1 (osd.1) 4535 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4534) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:12.976542+0000 osd.1 (osd.1) 4534 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6446> 2025-11-24T21:05:14.963+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:45.060023+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4536 sent 4535 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:14.964568+0000 osd.1 (osd.1) 4536 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4535) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:13.967329+0000 osd.1 (osd.1) 4535 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4536) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:14.964568+0000 osd.1 (osd.1) 4536 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6433> 2025-11-24T21:05:15.940+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:46.060370+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4537 sent 4536 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:15.941534+0000 osd.1 (osd.1) 4537 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4537) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:15.941534+0000 osd.1 (osd.1) 4537 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6421> 2025-11-24T21:05:16.941+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:47.060675+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4538 sent 4537 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:16.941805+0000 osd.1 (osd.1) 4538 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4538) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:16.941805+0000 osd.1 (osd.1) 4538 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6407> 2025-11-24T21:05:17.961+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:48.060920+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4539 sent 4538 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:17.961717+0000 osd.1 (osd.1) 4539 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4539) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:17.961717+0000 osd.1 (osd.1) 4539 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6396> 2025-11-24T21:05:18.950+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:49.061193+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4540 sent 4539 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:18.951007+0000 osd.1 (osd.1) 4540 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4540) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:18.951007+0000 osd.1 (osd.1) 4540 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6385> 2025-11-24T21:05:19.979+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:50.061419+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4541 sent 4540 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:19.980416+0000 osd.1 (osd.1) 4541 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4541) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:19.980416+0000 osd.1 (osd.1) 4541 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6372> 2025-11-24T21:05:21.013+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:51.061680+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4542 sent 4541 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:21.014012+0000 osd.1 (osd.1) 4542 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4542) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:21.014012+0000 osd.1 (osd.1) 4542 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 54607872 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6361> 2025-11-24T21:05:22.024+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:52.061970+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4543 sent 4542 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:22.025114+0000 osd.1 (osd.1) 4543 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4543) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:22.025114+0000 osd.1 (osd.1) 4543 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6347> 2025-11-24T21:05:22.991+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:53.062288+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4544 sent 4543 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:22.992499+0000 osd.1 (osd.1) 4544 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4544) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:22.992499+0000 osd.1 (osd.1) 4544 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6335> 2025-11-24T21:05:23.975+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:54.062573+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4545 sent 4544 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:23.976240+0000 osd.1 (osd.1) 4545 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4545) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:23.976240+0000 osd.1 (osd.1) 4545 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6324> 2025-11-24T21:05:24.980+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:55.062937+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4546 sent 4545 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:24.980710+0000 osd.1 (osd.1) 4546 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4546) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:24.980710+0000 osd.1 (osd.1) 4546 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6312> 2025-11-24T21:05:25.983+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:56.063167+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4547 sent 4546 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:25.983702+0000 osd.1 (osd.1) 4547 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4547) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:25.983702+0000 osd.1 (osd.1) 4547 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6301> 2025-11-24T21:05:27.005+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:57.063880+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4548 sent 4547 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:27.006093+0000 osd.1 (osd.1) 4548 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4548) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:27.006093+0000 osd.1 (osd.1) 4548 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6286> 2025-11-24T21:05:28.012+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:58.064113+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4549 sent 4548 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:28.013864+0000 osd.1 (osd.1) 4549 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4549) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:28.013864+0000 osd.1 (osd.1) 4549 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6274> 2025-11-24T21:05:29.015+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:59.065100+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4550 sent 4549 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:29.016849+0000 osd.1 (osd.1) 4550 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4550) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:29.016849+0000 osd.1 (osd.1) 4550 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6263> 2025-11-24T21:05:29.980+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:00.065397+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4551 sent 4550 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:29.981225+0000 osd.1 (osd.1) 4551 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4551) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:29.981225+0000 osd.1 (osd.1) 4551 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6252> 2025-11-24T21:05:31.003+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:01.065787+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4552 sent 4551 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:31.004102+0000 osd.1 (osd.1) 4552 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4552) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:31.004102+0000 osd.1 (osd.1) 4552 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6241> 2025-11-24T21:05:31.990+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:02.066140+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4553 sent 4552 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:31.992130+0000 osd.1 (osd.1) 4553 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4553) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:31.992130+0000 osd.1 (osd.1) 4553 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6226> 2025-11-24T21:05:33.032+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:03.066516+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4554 sent 4553 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:33.034536+0000 osd.1 (osd.1) 4554 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4554) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:33.034536+0000 osd.1 (osd.1) 4554 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6215> 2025-11-24T21:05:34.003+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:04.066832+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4555 sent 4554 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:34.005277+0000 osd.1 (osd.1) 4555 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4555) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:34.005277+0000 osd.1 (osd.1) 4555 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6204> 2025-11-24T21:05:34.967+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:05.067217+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4556 sent 4555 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:34.968564+0000 osd.1 (osd.1) 4556 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4556) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:34.968564+0000 osd.1 (osd.1) 4556 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6193> 2025-11-24T21:05:35.964+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:06.067556+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4557 sent 4556 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:35.965894+0000 osd.1 (osd.1) 4557 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4557) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:35.965894+0000 osd.1 (osd.1) 4557 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6182> 2025-11-24T21:05:36.943+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:07.067856+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4558 sent 4557 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:36.944421+0000 osd.1 (osd.1) 4558 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4558) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:36.944421+0000 osd.1 (osd.1) 4558 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6168> 2025-11-24T21:05:37.928+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:08.068162+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4559 sent 4558 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:37.930543+0000 osd.1 (osd.1) 4559 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4559) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:37.930543+0000 osd.1 (osd.1) 4559 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6156> 2025-11-24T21:05:38.923+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:09.068658+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4560 sent 4559 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:38.924827+0000 osd.1 (osd.1) 4560 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4560) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:38.924827+0000 osd.1 (osd.1) 4560 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6145> 2025-11-24T21:05:39.884+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:10.068946+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4561 sent 4560 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:39.885924+0000 osd.1 (osd.1) 4561 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4561) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:39.885924+0000 osd.1 (osd.1) 4561 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6135> 2025-11-24T21:05:40.839+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:11.069229+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4562 sent 4561 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:40.840634+0000 osd.1 (osd.1) 4562 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4562) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:40.840634+0000 osd.1 (osd.1) 4562 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6124> 2025-11-24T21:05:41.861+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:12.069520+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4563 sent 4562 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:41.863315+0000 osd.1 (osd.1) 4563 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4563) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:41.863315+0000 osd.1 (osd.1) 4563 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6109> 2025-11-24T21:05:42.831+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:13.069973+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4564 sent 4563 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:42.833247+0000 osd.1 (osd.1) 4564 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4564) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:42.833247+0000 osd.1 (osd.1) 4564 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6098> 2025-11-24T21:05:43.876+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:14.070167+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4565 sent 4564 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:43.876413+0000 osd.1 (osd.1) 4565 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4565) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:43.876413+0000 osd.1 (osd.1) 4565 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6086> 2025-11-24T21:05:44.895+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:15.070352+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4566 sent 4565 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:44.896414+0000 osd.1 (osd.1) 4566 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4566) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:44.896414+0000 osd.1 (osd.1) 4566 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6076> 2025-11-24T21:05:45.854+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:16.070556+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4567 sent 4566 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:45.855423+0000 osd.1 (osd.1) 4567 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4567) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:45.855423+0000 osd.1 (osd.1) 4567 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6064> 2025-11-24T21:05:46.880+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:17.070829+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4568 sent 4567 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:46.881294+0000 osd.1 (osd.1) 4568 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4568) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:46.881294+0000 osd.1 (osd.1) 4568 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6049> 2025-11-24T21:05:47.833+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:18.071109+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4569 sent 4568 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:47.834558+0000 osd.1 (osd.1) 4569 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4569) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:47.834558+0000 osd.1 (osd.1) 4569 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6038> 2025-11-24T21:05:48.802+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:19.071394+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4570 sent 4569 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:48.802668+0000 osd.1 (osd.1) 4570 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4570) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:48.802668+0000 osd.1 (osd.1) 4570 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6027> 2025-11-24T21:05:49.793+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:20.071642+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4571 sent 4570 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:49.794555+0000 osd.1 (osd.1) 4571 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4571) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:49.794555+0000 osd.1 (osd.1) 4571 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6015> 2025-11-24T21:05:50.833+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:21.072012+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4572 sent 4571 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:50.834141+0000 osd.1 (osd.1) 4572 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4572) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:50.834141+0000 osd.1 (osd.1) 4572 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -6003> 2025-11-24T21:05:51.823+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:22.072235+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4573 sent 4572 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:51.824173+0000 osd.1 (osd.1) 4573 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4573) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:51.824173+0000 osd.1 (osd.1) 4573 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5989> 2025-11-24T21:05:52.810+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:23.072560+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4574 sent 4573 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:52.810827+0000 osd.1 (osd.1) 4574 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4574) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:52.810827+0000 osd.1 (osd.1) 4574 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5976> 2025-11-24T21:05:53.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:24.072905+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4575 sent 4574 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:53.847408+0000 osd.1 (osd.1) 4575 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4575) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:53.847408+0000 osd.1 (osd.1) 4575 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5965> 2025-11-24T21:05:54.849+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:25.073127+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4576 sent 4575 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:54.850259+0000 osd.1 (osd.1) 4576 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4576) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:54.850259+0000 osd.1 (osd.1) 4576 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5954> 2025-11-24T21:05:55.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:26.073339+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4577 sent 4576 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:55.846969+0000 osd.1 (osd.1) 4577 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4577) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:55.846969+0000 osd.1 (osd.1) 4577 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5943> 2025-11-24T21:05:56.829+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:27.073647+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4578 sent 4577 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:56.830169+0000 osd.1 (osd.1) 4578 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4578) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:56.830169+0000 osd.1 (osd.1) 4578 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5929> 2025-11-24T21:05:57.818+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:28.073957+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4579 sent 4578 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:57.818751+0000 osd.1 (osd.1) 4579 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4579) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:57.818751+0000 osd.1 (osd.1) 4579 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5917> 2025-11-24T21:05:58.787+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:29.074218+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4580 sent 4579 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:58.788403+0000 osd.1 (osd.1) 4580 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5907> 2025-11-24T21:05:59.819+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:30.074936+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4581 sent 4580 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:05:59.820420+0000 osd.1 (osd.1) 4581 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5898> 2025-11-24T21:06:00.845+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:31.075351+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 4582 sent 4581 num 3 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:00.846550+0000 osd.1 (osd.1) 4582 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5889> 2025-11-24T21:06:01.857+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:32.075856+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 4 last_log 4583 sent 4582 num 4 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:01.857751+0000 osd.1 (osd.1) 4583 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4580) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:58.788403+0000 osd.1 (osd.1) 4580 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5874> 2025-11-24T21:06:02.808+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:33.076299+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 4 last_log 4584 sent 4583 num 4 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:02.809511+0000 osd.1 (osd.1) 4584 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5865> 2025-11-24T21:06:03.786+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:34.076710+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 5 last_log 4585 sent 4584 num 5 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:03.787211+0000 osd.1 (osd.1) 4585 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4581) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:05:59.820420+0000 osd.1 (osd.1) 4581 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4582) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:00.846550+0000 osd.1 (osd.1) 4582 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4583) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:01.857751+0000 osd.1 (osd.1) 4583 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4584) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:02.809511+0000 osd.1 (osd.1) 4584 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5847> 2025-11-24T21:06:04.774+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:35.077054+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4586 sent 4585 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:04.774889+0000 osd.1 (osd.1) 4586 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4585) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:03.787211+0000 osd.1 (osd.1) 4585 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4586) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:04.774889+0000 osd.1 (osd.1) 4586 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5834> 2025-11-24T21:06:05.737+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:36.077365+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4587 sent 4586 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:05.739331+0000 osd.1 (osd.1) 4587 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
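
The mon endpoint in these _send_mon_message lines is printed in entity_addr_t form: v2: means the msgr2 wire protocol, 3300 is its default port (legacy v1 listens on 6789), and the trailing /0 is the address nonce used to tell daemon incarnations apart. Splitting it up (the helper is mine):

    def parse_entity_addr(addr: str):
        """Split an address like 'v2:192.168.122.100:3300/0' into its parts."""
        proto, rest = addr.split(":", 1)
        hostport, nonce = rest.rsplit("/", 1)
        host, port = hostport.rsplit(":", 1)
        return proto, host, int(port), int(nonce)

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))
    # -> ('v2', '192.168.122.100', 3300, 0)
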
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4587) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:05.739331+0000 osd.1 (osd.1) 4587 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5822> 2025-11-24T21:06:06.762+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:37.077686+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4588 sent 4587 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:06.764581+0000 osd.1 (osd.1) 4588 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5813> 2025-11-24T21:06:07.744+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
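
The prioritycache and bluestore lines are the memory autotuner at steady state: the target is osd_memory_target (4294967296 B = 4 GiB), the mapped heap is only ~94 MiB, and old mem == new mem, so the aggregate cache budget stays parked at 2845415832 B while _resize_shards carves it into the kv (RocksDB block cache), kv_onode, meta and data caches; the two rocksdb "High Pri Pool Ratio" lines appear to be the derived high-priority slices of those RocksDB caches. Checking that the published allocations add up (plain arithmetic on the figures above):

    cache_size = 2845415832
    alloc = {"kv": 1207959552, "kv_onode": 234881024,
             "meta": 1140850688, "data": 218103808}
    for name, nbytes in alloc.items():
        print(f"{name:9s} {nbytes / 2**20:7.1f} MiB  ({nbytes / cache_size:6.1%})")
    print(f"sum of shards: {sum(alloc.values()) / cache_size:.1%} of cache_size")
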
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4588) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:06.764581+0000 osd.1 (osd.1) 4588 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:38.077950+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4589 sent 4588 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:07.745370+0000 osd.1 (osd.1) 4589 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5799> 2025-11-24T21:06:08.788+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:39.078192+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4590 sent 4589 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:08.789839+0000 osd.1 (osd.1) 4590 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4589) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:07.745370+0000 osd.1 (osd.1) 4589 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5788> 2025-11-24T21:06:09.748+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:40.078432+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4591 sent 4590 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:09.749461+0000 osd.1 (osd.1) 4591 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5779> 2025-11-24T21:06:10.778+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:41.078665+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 4592 sent 4591 num 3 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:10.780392+0000 osd.1 (osd.1) 4592 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4590) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:08.789839+0000 osd.1 (osd.1) 4590 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4591) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:09.749461+0000 osd.1 (osd.1) 4591 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5766> 2025-11-24T21:06:11.785+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:42.078930+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4593 sent 4592 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:11.786274+0000 osd.1 (osd.1) 4593 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4592) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:10.780392+0000 osd.1 (osd.1) 4592 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4593) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:11.786274+0000 osd.1 (osd.1) 4593 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5751> 2025-11-24T21:06:12.748+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:43.079216+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4594 sent 4593 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:12.750364+0000 osd.1 (osd.1) 4594 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5739> 2025-11-24T21:06:13.713+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4594) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:12.750364+0000 osd.1 (osd.1) 4594 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:44.079415+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4595 sent 4594 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:13.714198+0000 osd.1 (osd.1) 4595 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5728> 2025-11-24T21:06:14.731+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4595) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:13.714198+0000 osd.1 (osd.1) 4595 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:45.079643+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4596 sent 4595 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:14.733019+0000 osd.1 (osd.1) 4596 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5717> 2025-11-24T21:06:15.780+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4596) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:14.733019+0000 osd.1 (osd.1) 4596 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:46.079889+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4597 sent 4596 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:15.781442+0000 osd.1 (osd.1) 4597 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5706> 2025-11-24T21:06:16.787+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:47.080153+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4598 sent 4597 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:16.789400+0000 osd.1 (osd.1) 4598 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4597) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:15.781442+0000 osd.1 (osd.1) 4597 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5695> 2025-11-24T21:06:17.802+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:48.080426+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4599 sent 4598 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:17.803825+0000 osd.1 (osd.1) 4599 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4598) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:16.789400+0000 osd.1 (osd.1) 4598 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4599) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:17.803825+0000 osd.1 (osd.1) 4599 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5678> 2025-11-24T21:06:18.753+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:49.080676+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4600 sent 4599 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:18.755562+0000 osd.1 (osd.1) 4600 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4600) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:18.755562+0000 osd.1 (osd.1) 4600 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5667> 2025-11-24T21:06:19.729+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:50.080883+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4601 sent 4600 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:19.731121+0000 osd.1 (osd.1) 4601 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4601) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:19.731121+0000 osd.1 (osd.1) 4601 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5656> 2025-11-24T21:06:20.687+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:51.081143+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4602 sent 4601 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:20.687892+0000 osd.1 (osd.1) 4602 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4602) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:20.687892+0000 osd.1 (osd.1) 4602 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5645> 2025-11-24T21:06:21.688+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:52.081407+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4603 sent 4602 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:21.689085+0000 osd.1 (osd.1) 4603 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4603) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:21.689085+0000 osd.1 (osd.1) 4603 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5633> 2025-11-24T21:06:22.730+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:53.081656+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4604 sent 4603 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:22.730991+0000 osd.1 (osd.1) 4604 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4604) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:22.730991+0000 osd.1 (osd.1) 4604 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5618> 2025-11-24T21:06:23.769+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:54.081899+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4605 sent 4604 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:23.770004+0000 osd.1 (osd.1) 4605 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4605) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:23.770004+0000 osd.1 (osd.1) 4605 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5607> 2025-11-24T21:06:24.793+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:55.082182+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4606 sent 4605 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:24.794297+0000 osd.1 (osd.1) 4606 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4606) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:24.794297+0000 osd.1 (osd.1) 4606 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5596> 2025-11-24T21:06:25.841+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:56.082462+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4607 sent 4606 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:25.841777+0000 osd.1 (osd.1) 4607 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4607) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:25.841777+0000 osd.1 (osd.1) 4607 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5585> 2025-11-24T21:06:26.835+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:57.082728+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4608 sent 4607 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:26.836312+0000 osd.1 (osd.1) 4608 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4608) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:26.836312+0000 osd.1 (osd.1) 4608 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5574> 2025-11-24T21:06:27.805+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:58.083021+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4609 sent 4608 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:27.805640+0000 osd.1 (osd.1) 4609 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4609) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:27.805640+0000 osd.1 (osd.1) 4609 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5560> 2025-11-24T21:06:28.783+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:59.083251+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4610 sent 4609 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:28.783774+0000 osd.1 (osd.1) 4610 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4610) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:28.783774+0000 osd.1 (osd.1) 4610 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5548> 2025-11-24T21:06:29.796+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:00.083488+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4611 sent 4610 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:29.797560+0000 osd.1 (osd.1) 4611 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5539> 2025-11-24T21:06:30.767+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4611) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:29.797560+0000 osd.1 (osd.1) 4611 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:01.083744+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4612 sent 4611 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:30.767673+0000 osd.1 (osd.1) 4612 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
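Each heartbeat line embeds a store_statfs(...) blob whose counters are hex byte counts. Reading the first triple as available/reserved/total is an assumption, but it is consistent with every sample here: the first value is always slightly below the third, and the third decodes to almost exactly 20 GiB. A hedged decoder:

import re

def parse_store_statfs(line):
    """Decode the hex fields of a store_statfs(...) heartbeat line.

    Field naming is an assumption fitted to the samples in this log
    (first triple read as available/reserved/total), not taken from
    Ceph documentation.
    """
    m = re.search(
        r"store_statfs\(0x(\w+)/0x(\w+)/0x(\w+), data 0x(\w+)/0x(\w+),"
        r" compress 0x(\w+)/0x(\w+)/0x(\w+), omap 0x(\w+), meta 0x(\w+)\)",
        line)
    if m is None:
        raise ValueError("not a store_statfs line")
    avail, reserved, total, stored, alloc, *_, omap, meta = (
        int(g, 16) for g in m.groups())
    return {
        "total_bytes": total,
        "available_bytes": avail,
        "used_pct": 100.0 * (total - avail) / total,
        "data_stored": stored,
        "data_allocated": alloc,
        "omap_bytes": omap,
        "meta_bytes": meta,
    }

sample = ("store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000,"
          " compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6)")
info = parse_store_statfs(sample)
print(f"{info['total_bytes'] / 2**30:.1f} GiB total, "
      f"{info['used_pct']:.2f}% used")
# 20.0 GiB total, 0.53% used

Under that reading the device is nearly empty, which points away from a full or fragmented store and toward the ops being blocked on something else.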
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5527> 2025-11-24T21:06:31.798+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4612) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:30.767673+0000 osd.1 (osd.1) 4612 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:02.083948+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4613 sent 4612 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:31.798776+0000 osd.1 (osd.1) 4613 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5516> 2025-11-24T21:06:32.779+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4613) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:31.798776+0000 osd.1 (osd.1) 4613 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:03.084205+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4614 sent 4613 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:32.779974+0000 osd.1 (osd.1) 4614 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
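Two clocks are in play here: every journal line is stamped 21:14:15, while the embedded daemon timestamps advance second by second from 21:06:28 onward, and the ceph-05e060a3-...-osd-1[89636] lines carry negative indices (-5560>, -5548>, ...) counting up toward zero. That pattern reads as an in-memory recent-events buffer being flushed all at once at 21:14:15 rather than live logging; this interpretation is inferred from the numbering, not stated anywhere in the log. A small extractor for (index, embedded time, priority, message):

import re

DUMP = re.compile(
    r"\s*(?P<idx>-\d+)> "
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+\+\d{4}) "
    r"(?P<thread>[0-9a-f]+) (?P<prio>-?\d+) (?P<msg>.*)")

def parse_dump_entry(payload):
    """Split one ' -NNNN> <stamp> <thread> <prio> <message>' dump entry."""
    m = DUMP.match(payload)
    return m and (int(m["idx"]), m["ts"], int(m["prio"]), m["msg"])

payload = (" -5560> 2025-11-24T21:06:28.783+0000 7f1a67169640 -1 "
           "osd.1 192 get_health_metrics reporting 21 slow ops, ...")
print(parse_dump_entry(payload))
# (-5560, '2025-11-24T21:06:28.783+0000', -1,
#  'osd.1 192 get_health_metrics reporting 21 slow ops, ...')

Sorting extracted entries by index reconstructs the daemon's own ordering of the minutes leading up to whatever triggered the dump.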
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5501> 2025-11-24T21:06:33.736+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4614) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:32.779974+0000 osd.1 (osd.1) 4614 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:04.084465+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4615 sent 4614 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:33.737318+0000 osd.1 (osd.1) 4615 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5489> 2025-11-24T21:06:34.727+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4615) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:33.737318+0000 osd.1 (osd.1) 4615 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:05.084702+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4616 sent 4615 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:34.728559+0000 osd.1 (osd.1) 4616 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5478> 2025-11-24T21:06:35.684+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4616) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:34.728559+0000 osd.1 (osd.1) 4616 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:06.085004+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4617 sent 4616 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:35.685538+0000 osd.1 (osd.1) 4617 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5467> 2025-11-24T21:06:36.645+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4617) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:35.685538+0000 osd.1 (osd.1) 4617 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:07.085281+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4618 sent 4617 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:36.645873+0000 osd.1 (osd.1) 4618 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5456> 2025-11-24T21:06:37.616+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4618) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:36.645873+0000 osd.1 (osd.1) 4618 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:08.085519+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4619 sent 4618 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:37.616935+0000 osd.1 (osd.1) 4619 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
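The prioritycache tune_memory line repeats with identical numbers throughout this window: a 4 GiB target, roughly 94 MiB actually mapped, and an unchanged cache budget (old mem == new mem == 2845415832). In every sample heap equals mapped + unmapped, so "unmapped" reads as freed-but-unreleased heap. A small parser that checks that relation and reports the reclaimable fraction; the field meanings are inferred from these samples, not from documentation:

import re

TUNE = re.compile(r"(target|mapped|unmapped|heap|old mem|new mem): (\d+)")

def parse_tune_memory(line):
    """Counters from a 'prioritycache tune_memory' line (names inferred)."""
    d = {k: int(v) for k, v in TUNE.findall(line)}
    assert d["heap"] == d["mapped"] + d["unmapped"]  # holds in every sample
    d["reclaimable_frac"] = d["unmapped"] / d["heap"]
    return d

line = ("prioritycache tune_memory target: 4294967296 mapped: 98664448 "
        "unmapped: 54599680 heap: 153264128 old mem: 2845415832 "
        "new mem: 2845415832")
d = parse_tune_memory(line)
print(f"target {d['target'] / 2**30:.0f} GiB, "
      f"mapped {d['mapped'] / 2**20:.0f} MiB, "
      f"{d['reclaimable_frac']:.0%} of heap reclaimable")
# target 4 GiB, mapped 94 MiB, 36% of heap reclaimable

With mapped memory far below target and the budget static, memory pressure is evidently not what is holding these ops up.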
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5442> 2025-11-24T21:06:38.580+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4619) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:37.616935+0000 osd.1 (osd.1) 4619 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:09.085825+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4620 sent 4619 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:38.580704+0000 osd.1 (osd.1) 4620 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5429> 2025-11-24T21:06:39.534+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4620) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:38.580704+0000 osd.1 (osd.1) 4620 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:10.086089+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4621 sent 4620 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:39.535383+0000 osd.1 (osd.1) 4621 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5417> 2025-11-24T21:06:40.578+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4621) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:39.535383+0000 osd.1 (osd.1) 4621 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:11.086323+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4622 sent 4621 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:40.579405+0000 osd.1 (osd.1) 4622 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5406> 2025-11-24T21:06:41.567+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4622) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:40.579405+0000 osd.1 (osd.1) 4622 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:12.086552+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4623 sent 4622 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:41.567999+0000 osd.1 (osd.1) 4623 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5395> 2025-11-24T21:06:42.573+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4623) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:41.567999+0000 osd.1 (osd.1) 4623 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:13.086857+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4624 sent 4623 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:42.573945+0000 osd.1 (osd.1) 4624 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5381> 2025-11-24T21:06:43.592+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4624) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:42.573945+0000 osd.1 (osd.1) 4624 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:14.087092+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4625 sent 4624 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:43.593215+0000 osd.1 (osd.1) 4625 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
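One oddity worth flagging in the monclient lines: the "(they expire after ...)" timestamp always trails the surrounding events by about 30 seconds (for example, the dump entry stamped 21:06:28.783 quotes 21:05:59.083). That fixed spacing suggests the quoted time is a freshness cutoff, roughly now minus 30 s, which the rotating keys must outlive, rather than the keys' actual expiry; this is an inference from the samples here, not from Ceph documentation. A quick check of the lag:

from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f%z"

def cutoff_lag_seconds(event_ts, quoted_ts):
    """Seconds by which the '(they expire after ...)' time trails the event."""
    return (datetime.strptime(event_ts, FMT)
            - datetime.strptime(quoted_ts, FMT)).total_seconds()

# Pairing a dump-entry stamp with the cutoff quoted in the same cycle.
print(cutoff_lag_seconds("2025-11-24T21:06:28.783000+0000",
                         "2025-11-24T21:05:59.083251+0000"))   # ~29.7

If the lag ever collapsed toward zero or went negative, these lines would stop reporting up-to-date secrets; here it stays steady, so auth is healthy throughout.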
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5369> 2025-11-24T21:06:44.572+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4625) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:43.593215+0000 osd.1 (osd.1) 4625 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:15.088013+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4626 sent 4625 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:44.572922+0000 osd.1 (osd.1) 4626 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5357> 2025-11-24T21:06:45.553+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4626) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:44.572922+0000 osd.1 (osd.1) 4626 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:16.088491+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4627 sent 4626 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:45.554043+0000 osd.1 (osd.1) 4627 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5344> 2025-11-24T21:06:46.588+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4627) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:45.554043+0000 osd.1 (osd.1) 4627 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:17.088814+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4628 sent 4627 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:46.590627+0000 osd.1 (osd.1) 4628 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5332> 2025-11-24T21:06:47.621+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4628) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:46.590627+0000 osd.1 (osd.1) 4628 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:18.089043+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4629 sent 4628 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:47.622923+0000 osd.1 (osd.1) 4629 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5318> 2025-11-24T21:06:48.656+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:19.089238+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4630 sent 4629 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:48.658517+0000 osd.1 (osd.1) 4630 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4629) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:47.622923+0000 osd.1 (osd.1) 4629 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5307> 2025-11-24T21:06:49.623+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:20.089452+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4631 sent 4630 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:49.624869+0000 osd.1 (osd.1) 4631 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4630) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:48.658517+0000 osd.1 (osd.1) 4630 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4631) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:49.624869+0000 osd.1 (osd.1) 4631 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
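The log_client counters narrate a simple sequenced queue: each warning gets the next sequence number (last_log), "sent" trails it until _send_mon_message goes out, and handle_log_ack(last N) retires everything up to N. Just above, an ack arrived late, so the queue briefly reported "log_queue is 2" before two acks, for 4630 and 4631, drained it back-to-back. A toy model of that observable behaviour, not Ceph's LogClient implementation:

from collections import deque

class ToyLogQueue:
    """Mimics the counters in 'log_client  log_queue is N last_log L sent S ...'."""
    def __init__(self, last_log):
        self.queue = deque()        # (seq, message) not yet acked
        self.last_log = last_log    # highest sequence number assigned
        self.sent = last_log        # highest sequence number sent to the mon

    def log(self, message):
        self.last_log += 1
        self.queue.append((self.last_log, message))

    def send(self):
        unsent = [e for e in self.queue if e[0] > self.sent]
        if unsent:
            self.sent = unsent[-1][0]
        return unsent               # payload for the mon message

    def handle_ack(self, last):
        while self.queue and self.queue[0][0] <= last:
            self.queue.popleft()    # retire acked entries

q = ToyLogQueue(last_log=4609)
q.log("21 slow requests ...")  # log_queue is 1 last_log 4610 sent 4609 unsent 1
q.send()                       # -> sent 4610
q.handle_ack(4610)             # handle_log_ack log(last 4610): queue drains
assert not q.queue

A queue depth that stays at 1 or 2, as here, means the monitor is keeping up; a steadily growing depth would indicate the mon path itself is degraded.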
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5293> 2025-11-24T21:06:50.628+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:21.089734+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4632 sent 4631 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:50.629872+0000 osd.1 (osd.1) 4632 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4632) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:50.629872+0000 osd.1 (osd.1) 4632 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5282> 2025-11-24T21:06:51.597+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 ms_handle_reset con 0x55ba3f8bfc00 session 0x55ba3da9d2c0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3f1a7800
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:22.089942+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4633 sent 4632 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:51.598252+0000 osd.1 (osd.1) 4633 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4633) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:51.598252+0000 osd.1 (osd.1) 4633 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
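The op hist vector in the heartbeats is the other place the stall is visible. Its counts always sum to 21, matching the slow-op count, and in the cycle above the tail shifted from [..., 1, 1, 1, 18] to [..., 0, 2, 1, 18]: one op aged out of its bucket into the next. Reading the vector as a power-of-two age histogram is an assumption (the log shows only raw counts and never states the base time unit), but the rightward drift of otherwise-constant counts fits that reading:

def decode_op_hist(hist, base=1):
    """Read 'op hist [...]' as a power-of-two age histogram.

    ASSUMPTION: bucket i covers ages [2**i, 2**(i+1)) in units of `base`;
    the log itself only shows the raw counts.
    """
    return [(base * 2**i, base * 2**(i + 1), n)
            for i, n in enumerate(hist) if n]

before = [0] * 20 + [1, 1, 1, 18]   # earlier heartbeats in this window
after  = [0] * 21 + [2, 1, 18]      # heartbeat in the 21:06:51 cycle above
assert sum(before) == sum(after) == 21   # matches the slow-op count
print(decode_op_hist(after))
# [(2097152, 4194304, 2), (4194304, 8388608, 1), (8388608, 16777216, 18)]

Whatever the unit, the picture is consistent: the same 21 ops sliding into ever-older buckets while nothing completes.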
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5267> 2025-11-24T21:06:52.627+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:23.090179+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4634 sent 4633 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:52.629034+0000 osd.1 (osd.1) 4634 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4634) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:52.629034+0000 osd.1 (osd.1) 4634 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5253> 2025-11-24T21:06:53.580+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:24.090415+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4635 sent 4634 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:53.582363+0000 osd.1 (osd.1) 4635 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4635) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:53.582363+0000 osd.1 (osd.1) 4635 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5242> 2025-11-24T21:06:54.563+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:25.090704+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4636 sent 4635 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:54.566254+0000 osd.1 (osd.1) 4636 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4636) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:54.566254+0000 osd.1 (osd.1) 4636 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5231> 2025-11-24T21:06:55.614+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:26.090897+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4637 sent 4636 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:55.617938+0000 osd.1 (osd.1) 4637 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4637) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:55.617938+0000 osd.1 (osd.1) 4637 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5220> 2025-11-24T21:06:56.657+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:27.091058+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4638 sent 4637 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:56.659363+0000 osd.1 (osd.1) 4638 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4638) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:56.659363+0000 osd.1 (osd.1) 4638 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5208> 2025-11-24T21:06:57.661+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:28.091278+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4639 sent 4638 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:57.663110+0000 osd.1 (osd.1) 4639 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4639) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:57.663110+0000 osd.1 (osd.1) 4639 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5193> 2025-11-24T21:06:58.670+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:29.091526+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4640 sent 4639 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:58.670964+0000 osd.1 (osd.1) 4640 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4640) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:58.670964+0000 osd.1 (osd.1) 4640 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5182> 2025-11-24T21:06:59.684+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:30.091804+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4641 sent 4640 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:06:59.684757+0000 osd.1 (osd.1) 4641 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4641) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:06:59.684757+0000 osd.1 (osd.1) 4641 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5171> 2025-11-24T21:07:00.715+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:31.092103+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4642 sent 4641 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:00.716618+0000 osd.1 (osd.1) 4642 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4642) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:00.716618+0000 osd.1 (osd.1) 4642 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5160> 2025-11-24T21:07:01.737+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:32.092357+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4643 sent 4642 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:01.737846+0000 osd.1 (osd.1) 4643 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4643) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:01.737846+0000 osd.1 (osd.1) 4643 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5149> 2025-11-24T21:07:02.754+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:33.092648+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4644 sent 4643 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:02.755185+0000 osd.1 (osd.1) 4644 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4644) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:02.755185+0000 osd.1 (osd.1) 4644 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5135> 2025-11-24T21:07:03.749+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:34.092855+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4645 sent 4644 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:03.750279+0000 osd.1 (osd.1) 4645 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4645) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:03.750279+0000 osd.1 (osd.1) 4645 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5123> 2025-11-24T21:07:04.749+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:35.093107+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4646 sent 4645 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:04.749867+0000 osd.1 (osd.1) 4646 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4646) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:04.749867+0000 osd.1 (osd.1) 4646 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5112> 2025-11-24T21:07:05.769+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:36.093345+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4647 sent 4646 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:05.770009+0000 osd.1 (osd.1) 4647 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4647) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:05.770009+0000 osd.1 (osd.1) 4647 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5101> 2025-11-24T21:07:06.725+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:37.093716+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4648 sent 4647 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:06.725525+0000 osd.1 (osd.1) 4648 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5092> 2025-11-24T21:07:07.705+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4648) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:06.725525+0000 osd.1 (osd.1) 4648 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:38.093984+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4649 sent 4648 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:07.706141+0000 osd.1 (osd.1) 4649 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4649) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:07.706141+0000 osd.1 (osd.1) 4649 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5076> 2025-11-24T21:07:08.744+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:39.094369+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4650 sent 4649 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:08.745196+0000 osd.1 (osd.1) 4650 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5066> 2025-11-24T21:07:09.732+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4650) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:08.745196+0000 osd.1 (osd.1) 4650 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:40.094766+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4651 sent 4650 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:09.733359+0000 osd.1 (osd.1) 4651 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5055> 2025-11-24T21:07:10.755+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4651) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:09.733359+0000 osd.1 (osd.1) 4651 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:41.095055+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4652 sent 4651 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:10.756529+0000 osd.1 (osd.1) 4652 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5043> 2025-11-24T21:07:11.719+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4652) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:10.756529+0000 osd.1 (osd.1) 4652 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:42.095313+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4653 sent 4652 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:11.719743+0000 osd.1 (osd.1) 4653 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5032> 2025-11-24T21:07:12.704+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4653) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:11.719743+0000 osd.1 (osd.1) 4653 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:43.095640+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4654 sent 4653 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:12.705516+0000 osd.1 (osd.1) 4654 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5018> 2025-11-24T21:07:13.703+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4654) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:12.705516+0000 osd.1 (osd.1) 4654 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:44.095907+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4655 sent 4654 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:13.703744+0000 osd.1 (osd.1) 4655 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -5007> 2025-11-24T21:07:14.675+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4655) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:13.703744+0000 osd.1 (osd.1) 4655 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:45.096336+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4656 sent 4655 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:14.675789+0000 osd.1 (osd.1) 4656 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4996> 2025-11-24T21:07:15.676+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4656) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:14.675789+0000 osd.1 (osd.1) 4656 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:46.096566+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4657 sent 4656 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:15.677291+0000 osd.1 (osd.1) 4657 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4984> 2025-11-24T21:07:16.704+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4657) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:15.677291+0000 osd.1 (osd.1) 4657 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:47.096884+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4658 sent 4657 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:16.705402+0000 osd.1 (osd.1) 4658 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4973> 2025-11-24T21:07:17.751+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4658) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:16.705402+0000 osd.1 (osd.1) 4658 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:48.097175+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4659 sent 4658 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:17.752332+0000 osd.1 (osd.1) 4659 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4958> 2025-11-24T21:07:18.754+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4659) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:17.752332+0000 osd.1 (osd.1) 4659 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:49.097518+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4660 sent 4659 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:18.755009+0000 osd.1 (osd.1) 4660 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4946> 2025-11-24T21:07:19.796+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4660) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:18.755009+0000 osd.1 (osd.1) 4660 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:50.097800+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4661 sent 4660 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:19.797274+0000 osd.1 (osd.1) 4661 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4935> 2025-11-24T21:07:20.764+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4661) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:19.797274+0000 osd.1 (osd.1) 4661 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:51.098048+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4662 sent 4661 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:20.766550+0000 osd.1 (osd.1) 4662 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4924> 2025-11-24T21:07:21.784+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4662) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:20.766550+0000 osd.1 (osd.1) 4662 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:52.098304+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4663 sent 4662 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:21.785322+0000 osd.1 (osd.1) 4663 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4913> 2025-11-24T21:07:22.798+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4663) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:21.785322+0000 osd.1 (osd.1) 4663 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:53.098553+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4664 sent 4663 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:22.799938+0000 osd.1 (osd.1) 4664 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4898> 2025-11-24T21:07:23.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4664) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:22.799938+0000 osd.1 (osd.1) 4664 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:54.098941+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4665 sent 4664 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:23.847810+0000 osd.1 (osd.1) 4665 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4887> 2025-11-24T21:07:24.850+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4665) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:23.847810+0000 osd.1 (osd.1) 4665 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:55.099176+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4666 sent 4665 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:24.851491+0000 osd.1 (osd.1) 4666 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4876> 2025-11-24T21:07:25.896+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4666) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:24.851491+0000 osd.1 (osd.1) 4666 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:56.099366+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4667 sent 4666 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:25.898289+0000 osd.1 (osd.1) 4667 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4865> 2025-11-24T21:07:26.888+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4667) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:25.898289+0000 osd.1 (osd.1) 4667 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:57.099562+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4668 sent 4667 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:26.889842+0000 osd.1 (osd.1) 4668 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4668) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:26.889842+0000 osd.1 (osd.1) 4668 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4852> 2025-11-24T21:07:27.937+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:58.099854+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4669 sent 4668 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:27.938650+0000 osd.1 (osd.1) 4669 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4669) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:27.938650+0000 osd.1 (osd.1) 4669 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4837> 2025-11-24T21:07:28.935+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:59.100187+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4670 sent 4669 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:28.937426+0000 osd.1 (osd.1) 4670 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4828> 2025-11-24T21:07:29.959+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4670) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:28.937426+0000 osd.1 (osd.1) 4670 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:00.100429+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4671 sent 4670 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:29.961004+0000 osd.1 (osd.1) 4671 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4816> 2025-11-24T21:07:31.006+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4671) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:29.961004+0000 osd.1 (osd.1) 4671 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:01.100667+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4672 sent 4671 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:31.008198+0000 osd.1 (osd.1) 4672 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4805> 2025-11-24T21:07:32.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4672) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:31.008198+0000 osd.1 (osd.1) 4672 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:02.100918+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4673 sent 4672 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:32.040237+0000 osd.1 (osd.1) 4673 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4793> 2025-11-24T21:07:33.006+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4673) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:32.040237+0000 osd.1 (osd.1) 4673 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:03.101195+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4674 sent 4673 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:33.007276+0000 osd.1 (osd.1) 4674 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4779> 2025-11-24T21:07:33.990+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4674) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:33.007276+0000 osd.1 (osd.1) 4674 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:04.101408+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4675 sent 4674 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:33.992268+0000 osd.1 (osd.1) 4675 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4768> 2025-11-24T21:07:34.998+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4675) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:33.992268+0000 osd.1 (osd.1) 4675 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:05.101651+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4676 sent 4675 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:34.999948+0000 osd.1 (osd.1) 4676 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4756> 2025-11-24T21:07:35.950+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:06.101891+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4677 sent 4676 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:35.950867+0000 osd.1 (osd.1) 4677 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4676) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:34.999948+0000 osd.1 (osd.1) 4676 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4745> 2025-11-24T21:07:36.921+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:07.102374+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4678 sent 4677 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:36.921440+0000 osd.1 (osd.1) 4678 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4677) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:35.950867+0000 osd.1 (osd.1) 4677 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4678) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:36.921440+0000 osd.1 (osd.1) 4678 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4732> 2025-11-24T21:07:37.930+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:08.102784+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4679 sent 4678 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:37.931106+0000 osd.1 (osd.1) 4679 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4679) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:37.931106+0000 osd.1 (osd.1) 4679 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4718> 2025-11-24T21:07:38.929+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:09.103249+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4680 sent 4679 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:38.930294+0000 osd.1 (osd.1) 4680 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4680) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:38.930294+0000 osd.1 (osd.1) 4680 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4707> 2025-11-24T21:07:39.959+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:10.103640+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4681 sent 4680 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:39.959806+0000 osd.1 (osd.1) 4681 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4681) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:39.959806+0000 osd.1 (osd.1) 4681 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4696> 2025-11-24T21:07:40.920+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:11.104323+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4682 sent 4681 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:40.921219+0000 osd.1 (osd.1) 4682 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4682) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:40.921219+0000 osd.1 (osd.1) 4682 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4684> 2025-11-24T21:07:41.902+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:12.104760+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4683 sent 4682 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:41.902701+0000 osd.1 (osd.1) 4683 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4683) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:41.902701+0000 osd.1 (osd.1) 4683 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4673> 2025-11-24T21:07:42.923+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:13.105129+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4684 sent 4683 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:42.924131+0000 osd.1 (osd.1) 4684 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4684) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:42.924131+0000 osd.1 (osd.1) 4684 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4659> 2025-11-24T21:07:43.965+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:14.105520+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4685 sent 4684 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:43.966549+0000 osd.1 (osd.1) 4685 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4685) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:43.966549+0000 osd.1 (osd.1) 4685 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4647> 2025-11-24T21:07:44.999+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:15.105924+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4686 sent 4685 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:45.000316+0000 osd.1 (osd.1) 4686 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4686) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:45.000316+0000 osd.1 (osd.1) 4686 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4636> 2025-11-24T21:07:46.007+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:16.106269+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4687 sent 4686 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:46.008227+0000 osd.1 (osd.1) 4687 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4687) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:46.008227+0000 osd.1 (osd.1) 4687 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4624> 2025-11-24T21:07:47.049+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:17.106538+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4688 sent 4687 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:47.050488+0000 osd.1 (osd.1) 4688 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4688) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:47.050488+0000 osd.1 (osd.1) 4688 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4613> 2025-11-24T21:07:48.057+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:18.106862+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4689 sent 4688 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:48.058364+0000 osd.1 (osd.1) 4689 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4689) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:48.058364+0000 osd.1 (osd.1) 4689 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4599> 2025-11-24T21:07:49.052+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:19.107171+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4690 sent 4689 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:49.052985+0000 osd.1 (osd.1) 4690 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4690) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:49.052985+0000 osd.1 (osd.1) 4690 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4588> 2025-11-24T21:07:50.023+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:20.107558+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4691 sent 4690 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:50.024497+0000 osd.1 (osd.1) 4691 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4691) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:50.024497+0000 osd.1 (osd.1) 4691 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4577> 2025-11-24T21:07:51.023+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:21.108040+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4692 sent 4691 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:51.023791+0000 osd.1 (osd.1) 4692 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4692) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:51.023791+0000 osd.1 (osd.1) 4692 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4565> 2025-11-24T21:07:51.982+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:22.108376+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4693 sent 4692 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:51.983352+0000 osd.1 (osd.1) 4693 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4693) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:51.983352+0000 osd.1 (osd.1) 4693 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4554> 2025-11-24T21:07:52.943+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:23.108678+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4694 sent 4693 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:52.943897+0000 osd.1 (osd.1) 4694 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4694) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:52.943897+0000 osd.1 (osd.1) 4694 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4539> 2025-11-24T21:07:53.968+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:24.109035+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4695 sent 4694 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:53.968842+0000 osd.1 (osd.1) 4695 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4695) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:53.968842+0000 osd.1 (osd.1) 4695 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4528> 2025-11-24T21:07:54.973+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:25.109425+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4696 sent 4695 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:54.974169+0000 osd.1 (osd.1) 4696 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4696) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:54.974169+0000 osd.1 (osd.1) 4696 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4517> 2025-11-24T21:07:56.004+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:26.109732+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4697 sent 4696 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:56.005258+0000 osd.1 (osd.1) 4697 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4697) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:56.005258+0000 osd.1 (osd.1) 4697 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4506> 2025-11-24T21:07:56.966+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:27.110015+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4698 sent 4697 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:56.967174+0000 osd.1 (osd.1) 4698 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4698) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:56.967174+0000 osd.1 (osd.1) 4698 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4495> 2025-11-24T21:07:57.977+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:28.110307+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4699 sent 4698 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:57.977692+0000 osd.1 (osd.1) 4699 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4699) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:57.977692+0000 osd.1 (osd.1) 4699 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4479> 2025-11-24T21:07:58.972+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:29.110569+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4700 sent 4699 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:58.973719+0000 osd.1 (osd.1) 4700 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4700) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:58.973719+0000 osd.1 (osd.1) 4700 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4468> 2025-11-24T21:07:59.967+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:30.110934+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4701 sent 4700 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:07:59.968341+0000 osd.1 (osd.1) 4701 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4701) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:07:59.968341+0000 osd.1 (osd.1) 4701 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4457> 2025-11-24T21:08:00.938+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:31.111245+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4702 sent 4701 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:00.939770+0000 osd.1 (osd.1) 4702 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4702) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:00.939770+0000 osd.1 (osd.1) 4702 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4446> 2025-11-24T21:08:01.965+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:32.111466+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4703 sent 4702 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:01.966934+0000 osd.1 (osd.1) 4703 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4703) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:01.966934+0000 osd.1 (osd.1) 4703 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4435> 2025-11-24T21:08:02.979+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:33.111674+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4704 sent 4703 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:02.981295+0000 osd.1 (osd.1) 4704 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4704) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:02.981295+0000 osd.1 (osd.1) 4704 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4420> 2025-11-24T21:08:03.954+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:34.112267+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4705 sent 4704 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:03.956482+0000 osd.1 (osd.1) 4705 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4705) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:03.956482+0000 osd.1 (osd.1) 4705 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4409> 2025-11-24T21:08:04.934+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:35.112756+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4706 sent 4705 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:04.935748+0000 osd.1 (osd.1) 4706 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4706) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:04.935748+0000 osd.1 (osd.1) 4706 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4398> 2025-11-24T21:08:05.892+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:36.113289+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4707 sent 4706 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:05.894222+0000 osd.1 (osd.1) 4707 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4707) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:05.894222+0000 osd.1 (osd.1) 4707 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4387> 2025-11-24T21:08:06.846+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:37.113532+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4708 sent 4707 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:06.847762+0000 osd.1 (osd.1) 4708 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4708) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:06.847762+0000 osd.1 (osd.1) 4708 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4375> 2025-11-24T21:08:07.863+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:38.114007+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4709 sent 4708 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:07.864772+0000 osd.1 (osd.1) 4709 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4709) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:07.864772+0000 osd.1 (osd.1) 4709 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4361> 2025-11-24T21:08:08.857+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:39.114183+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4710 sent 4709 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:08.859346+0000 osd.1 (osd.1) 4710 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4710) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:08.859346+0000 osd.1 (osd.1) 4710 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4350> 2025-11-24T21:08:09.886+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:40.114388+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4711 sent 4710 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:09.888217+0000 osd.1 (osd.1) 4711 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4711) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:09.888217+0000 osd.1 (osd.1) 4711 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4339> 2025-11-24T21:08:10.897+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:41.114760+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4712 sent 4711 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:10.898981+0000 osd.1 (osd.1) 4712 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4712) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:10.898981+0000 osd.1 (osd.1) 4712 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4328> 2025-11-24T21:08:11.944+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:42.115034+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4713 sent 4712 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:11.945800+0000 osd.1 (osd.1) 4713 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4713) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:11.945800+0000 osd.1 (osd.1) 4713 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4316> 2025-11-24T21:08:12.988+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:43.115459+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4714 sent 4713 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:12.990514+0000 osd.1 (osd.1) 4714 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4714) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:12.990514+0000 osd.1 (osd.1) 4714 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4302> 2025-11-24T21:08:13.979+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:44.115908+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4715 sent 4714 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:13.979815+0000 osd.1 (osd.1) 4715 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4715) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:13.979815+0000 osd.1 (osd.1) 4715 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4291> 2025-11-24T21:08:14.990+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:45.116197+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4716 sent 4715 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:14.991533+0000 osd.1 (osd.1) 4716 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4716) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:14.991533+0000 osd.1 (osd.1) 4716 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4280> 2025-11-24T21:08:15.974+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:46.116457+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4717 sent 4716 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:15.975069+0000 osd.1 (osd.1) 4717 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4717) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:15.975069+0000 osd.1 (osd.1) 4717 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4269> 2025-11-24T21:08:16.995+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:47.116715+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4718 sent 4717 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:16.995697+0000 osd.1 (osd.1) 4718 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4718) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:16.995697+0000 osd.1 (osd.1) 4718 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4257> 2025-11-24T21:08:17.987+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:48.116982+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4719 sent 4718 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:17.988077+0000 osd.1 (osd.1) 4719 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 ms_handle_reset con 0x55ba3f8bf800 session 0x55ba3ad4c960
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3f1a6c00
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4719) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:17.988077+0000 osd.1 (osd.1) 4719 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4240> 2025-11-24T21:08:19.005+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:49.117174+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4720 sent 4719 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:19.006126+0000 osd.1 (osd.1) 4720 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4720) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:19.006126+0000 osd.1 (osd.1) 4720 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4229> 2025-11-24T21:08:20.008+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:50.117465+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4721 sent 4720 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:20.009521+0000 osd.1 (osd.1) 4721 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4721) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:20.009521+0000 osd.1 (osd.1) 4721 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4218> 2025-11-24T21:08:21.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:51.117759+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4722 sent 4721 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:21.040161+0000 osd.1 (osd.1) 4722 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4722) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:21.040161+0000 osd.1 (osd.1) 4722 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4206> 2025-11-24T21:08:22.075+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:52.118002+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4723 sent 4722 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:22.076622+0000 osd.1 (osd.1) 4723 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4723) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:22.076622+0000 osd.1 (osd.1) 4723 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4195> 2025-11-24T21:08:23.094+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:53.118242+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4724 sent 4723 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:23.095038+0000 osd.1 (osd.1) 4724 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4724) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:23.095038+0000 osd.1 (osd.1) 4724 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:54.118887+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4178> 2025-11-24T21:08:24.127+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:55.119047+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4725 sent 4724 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:24.127777+0000 osd.1 (osd.1) 4725 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4169> 2025-11-24T21:08:25.126+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4725) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:24.127777+0000 osd.1 (osd.1) 4725 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:56.119282+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4726 sent 4725 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:25.127100+0000 osd.1 (osd.1) 4726 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4158> 2025-11-24T21:08:26.159+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4726) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:25.127100+0000 osd.1 (osd.1) 4726 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:57.119569+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4727 sent 4726 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:26.159921+0000 osd.1 (osd.1) 4727 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4147> 2025-11-24T21:08:27.181+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4727) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:26.159921+0000 osd.1 (osd.1) 4727 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:58.119957+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4728 sent 4727 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:27.181747+0000 osd.1 (osd.1) 4728 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4135> 2025-11-24T21:08:28.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 9713 writes, 37K keys, 9713 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9713 writes, 2484 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 109 writes, 367 keys, 109 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s
                                           Interval WAL: 109 writes, 46 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4728) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:27.181747+0000 osd.1 (osd.1) 4728 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4125> 2025-11-24T21:08:29.111+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:59.120378+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4730 sent 4728 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:28.146282+0000 osd.1 (osd.1) 4729 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:29.112342+0000 osd.1 (osd.1) 4730 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4730) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:28.146282+0000 osd.1 (osd.1) 4729 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:29.112342+0000 osd.1 (osd.1) 4730 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:00.120748+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4108> 2025-11-24T21:08:30.151+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:01.121143+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4731 sent 4730 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:30.151727+0000 osd.1 (osd.1) 4731 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4099> 2025-11-24T21:08:31.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4731) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:30.151727+0000 osd.1 (osd.1) 4731 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:02.121516+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4732 sent 4731 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:31.136274+0000 osd.1 (osd.1) 4732 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4088> 2025-11-24T21:08:32.166+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4732) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:31.136274+0000 osd.1 (osd.1) 4732 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:03.121922+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4733 sent 4732 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:32.166773+0000 osd.1 (osd.1) 4733 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4077> 2025-11-24T21:08:33.212+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4733) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:32.166773+0000 osd.1 (osd.1) 4733 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:04.122273+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4734 sent 4733 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:33.212842+0000 osd.1 (osd.1) 4734 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4063> 2025-11-24T21:08:34.207+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4734) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:33.212842+0000 osd.1 (osd.1) 4734 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:05.122737+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4735 sent 4734 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:34.207687+0000 osd.1 (osd.1) 4735 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4051> 2025-11-24T21:08:35.254+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4735) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:34.207687+0000 osd.1 (osd.1) 4735 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:06.123135+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4736 sent 4735 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:35.255494+0000 osd.1 (osd.1) 4736 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4040> 2025-11-24T21:08:36.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4736) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:35.255494+0000 osd.1 (osd.1) 4736 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:07.123358+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4737 sent 4736 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:36.264937+0000 osd.1 (osd.1) 4737 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4028> 2025-11-24T21:08:37.218+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4737) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:36.264937+0000 osd.1 (osd.1) 4737 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:08.123811+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4738 sent 4737 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:37.219672+0000 osd.1 (osd.1) 4738 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4017> 2025-11-24T21:08:38.226+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4738) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:37.219672+0000 osd.1 (osd.1) 4738 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:09.124121+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4739 sent 4738 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:38.227365+0000 osd.1 (osd.1) 4739 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -4003> 2025-11-24T21:08:39.202+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4739) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:38.227365+0000 osd.1 (osd.1) 4739 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:10.124465+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4740 sent 4739 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:39.203987+0000 osd.1 (osd.1) 4740 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3991> 2025-11-24T21:08:40.169+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4740) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:39.203987+0000 osd.1 (osd.1) 4740 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:11.124821+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4741 sent 4740 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:40.170957+0000 osd.1 (osd.1) 4741 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3980> 2025-11-24T21:08:41.154+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4741) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:40.170957+0000 osd.1 (osd.1) 4741 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:12.125094+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4742 sent 4741 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:41.156088+0000 osd.1 (osd.1) 4742 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3968> 2025-11-24T21:08:42.182+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4742) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:41.156088+0000 osd.1 (osd.1) 4742 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:13.125325+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4743 sent 4742 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:42.184312+0000 osd.1 (osd.1) 4743 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3956> 2025-11-24T21:08:43.146+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3950> 2025-11-24T21:08:44.117+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4743) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:42.184312+0000 osd.1 (osd.1) 4743 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:14.125613+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4745 sent 4743 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:43.148185+0000 osd.1 (osd.1) 4744 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:44.119510+0000 osd.1 (osd.1) 4745 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4745) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:43.148185+0000 osd.1 (osd.1) 4744 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:44.119510+0000 osd.1 (osd.1) 4745 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:15.126011+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3932> 2025-11-24T21:08:45.154+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:16.126202+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4746 sent 4745 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:45.156231+0000 osd.1 (osd.1) 4746 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3923> 2025-11-24T21:08:46.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 ms_handle_reset con 0x55ba3f8bf400 session 0x55ba3ac78960
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3f1a6800
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3917> 2025-11-24T21:08:47.092+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:17.126397+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 4748 sent 4746 num 3 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:46.135863+0000 osd.1 (osd.1) 4747 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:47.094138+0000 osd.1 (osd.1) 4748 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4746) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:45.156231+0000 osd.1 (osd.1) 4746 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4748) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:46.135863+0000 osd.1 (osd.1) 4747 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:47.094138+0000 osd.1 (osd.1) 4748 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3902> 2025-11-24T21:08:48.066+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:18.126580+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4749 sent 4748 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:48.068075+0000 osd.1 (osd.1) 4749 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4749) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:48.068075+0000 osd.1 (osd.1) 4749 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3888> 2025-11-24T21:08:49.112+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:19.126939+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4750 sent 4749 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:49.114549+0000 osd.1 (osd.1) 4750 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4750) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:49.114549+0000 osd.1 (osd.1) 4750 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:20.127203+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3874> 2025-11-24T21:08:50.144+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:21.127377+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4751 sent 4750 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:50.146346+0000 osd.1 (osd.1) 4751 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3864> 2025-11-24T21:08:51.156+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4751) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:50.146346+0000 osd.1 (osd.1) 4751 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:22.127655+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4752 sent 4751 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:51.157350+0000 osd.1 (osd.1) 4752 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3853> 2025-11-24T21:08:52.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4752) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:51.157350+0000 osd.1 (osd.1) 4752 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3848> 2025-11-24T21:08:53.084+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:23.127873+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4754 sent 4752 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:52.133670+0000 osd.1 (osd.1) 4753 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:53.085267+0000 osd.1 (osd.1) 4754 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4754) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:52.133670+0000 osd.1 (osd.1) 4753 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:53.085267+0000 osd.1 (osd.1) 4754 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3831> 2025-11-24T21:08:54.075+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:24.128076+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4755 sent 4754 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:54.076302+0000 osd.1 (osd.1) 4755 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4755) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:54.076302+0000 osd.1 (osd.1) 4755 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3820> 2025-11-24T21:08:55.048+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:25.128288+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4756 sent 4755 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:55.048807+0000 osd.1 (osd.1) 4756 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4756) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:55.048807+0000 osd.1 (osd.1) 4756 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3809> 2025-11-24T21:08:56.060+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:26.128516+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4757 sent 4756 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:56.061299+0000 osd.1 (osd.1) 4757 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4757) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:56.061299+0000 osd.1 (osd.1) 4757 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3797> 2025-11-24T21:08:57.039+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:27.128740+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4758 sent 4757 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:57.040114+0000 osd.1 (osd.1) 4758 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4758) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:57.040114+0000 osd.1 (osd.1) 4758 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3785> 2025-11-24T21:08:58.071+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:28.128934+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4759 sent 4758 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:58.072031+0000 osd.1 (osd.1) 4759 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4759) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:58.072031+0000 osd.1 (osd.1) 4759 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3771> 2025-11-24T21:08:59.077+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:29.129196+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4760 sent 4759 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:08:59.078012+0000 osd.1 (osd.1) 4760 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4760) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:08:59.078012+0000 osd.1 (osd.1) 4760 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3759> 2025-11-24T21:09:00.102+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:30.129402+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4761 sent 4760 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:00.103201+0000 osd.1 (osd.1) 4761 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4761) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:00.103201+0000 osd.1 (osd.1) 4761 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:31.129662+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3745> 2025-11-24T21:09:01.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3741> 2025-11-24T21:09:02.104+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:32.129945+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4763 sent 4761 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:01.134017+0000 osd.1 (osd.1) 4762 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:02.105169+0000 osd.1 (osd.1) 4763 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4763) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:01.134017+0000 osd.1 (osd.1) 4762 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:02.105169+0000 osd.1 (osd.1) 4763 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3728> 2025-11-24T21:09:03.113+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:33.130205+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4764 sent 4763 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:03.113892+0000 osd.1 (osd.1) 4764 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4764) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:03.113892+0000 osd.1 (osd.1) 4764 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3713> 2025-11-24T21:09:04.128+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:34.130434+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4765 sent 4764 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:04.129086+0000 osd.1 (osd.1) 4765 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4765) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:04.129086+0000 osd.1 (osd.1) 4765 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:35.130679+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3699> 2025-11-24T21:09:05.176+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:36.130847+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4766 sent 4765 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:05.177235+0000 osd.1 (osd.1) 4766 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3689> 2025-11-24T21:09:06.224+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4766) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:05.177235+0000 osd.1 (osd.1) 4766 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:37.131075+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4767 sent 4766 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:06.224841+0000 osd.1 (osd.1) 4767 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3678> 2025-11-24T21:09:07.238+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4767) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:06.224841+0000 osd.1 (osd.1) 4767 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:38.131330+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4768 sent 4767 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:07.239482+0000 osd.1 (osd.1) 4768 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3666> 2025-11-24T21:09:08.266+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4768) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:07.239482+0000 osd.1 (osd.1) 4768 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:39.131891+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4769 sent 4768 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:08.266702+0000 osd.1 (osd.1) 4769 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3652> 2025-11-24T21:09:09.228+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4769) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:08.266702+0000 osd.1 (osd.1) 4769 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:40.140805+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4770 sent 4769 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:09.229440+0000 osd.1 (osd.1) 4770 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3640> 2025-11-24T21:09:10.201+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4770) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:09.229440+0000 osd.1 (osd.1) 4770 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:41.141023+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4771 sent 4770 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:10.201969+0000 osd.1 (osd.1) 4771 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3629> 2025-11-24T21:09:11.164+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4771) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:10.201969+0000 osd.1 (osd.1) 4771 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:42.141222+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4772 sent 4771 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:11.165368+0000 osd.1 (osd.1) 4772 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3618> 2025-11-24T21:09:12.171+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4772) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:11.165368+0000 osd.1 (osd.1) 4772 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3612> 2025-11-24T21:09:13.134+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:43.141442+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4774 sent 4772 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:12.171825+0000 osd.1 (osd.1) 4773 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:13.134885+0000 osd.1 (osd.1) 4774 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4774) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:12.171825+0000 osd.1 (osd.1) 4773 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:13.134885+0000 osd.1 (osd.1) 4774 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,18])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:44.141971+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3592> 2025-11-24T21:09:14.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:45.142663+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4775 sent 4774 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:14.166762+0000 osd.1 (osd.1) 4775 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3583> 2025-11-24T21:09:15.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4775) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:14.166762+0000 osd.1 (osd.1) 4775 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:46.142911+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4776 sent 4775 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:15.146901+0000 osd.1 (osd.1) 4776 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3572> 2025-11-24T21:09:16.186+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4776) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:15.146901+0000 osd.1 (osd.1) 4776 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:47.143268+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4777 sent 4776 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:16.188008+0000 osd.1 (osd.1) 4777 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3561> 2025-11-24T21:09:17.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4777) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:16.188008+0000 osd.1 (osd.1) 4777 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:48.143717+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4778 sent 4777 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:17.167554+0000 osd.1 (osd.1) 4778 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3550> 2025-11-24T21:09:18.169+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473559 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4778) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:17.167554+0000 osd.1 (osd.1) 4778 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:49.144250+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4779 sent 4778 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:18.170948+0000 osd.1 (osd.1) 4779 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3535> 2025-11-24T21:09:19.208+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:50.144938+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4780 sent 4779 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:19.209335+0000 osd.1 (osd.1) 4780 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4779) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:18.170948+0000 osd.1 (osd.1) 4779 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3524> 2025-11-24T21:09:20.159+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:51.145446+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4781 sent 4780 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:20.161423+0000 osd.1 (osd.1) 4781 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4780) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:19.209335+0000 osd.1 (osd.1) 4780 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3513> 2025-11-24T21:09:21.205+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:52.145960+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4782 sent 4781 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:21.206907+0000 osd.1 (osd.1) 4782 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3503> 2025-11-24T21:09:22.252+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4781) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:20.161423+0000 osd.1 (osd.1) 4781 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 54599680 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:53.146665+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4783 sent 4782 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:22.253372+0000 osd.1 (osd.1) 4783 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3492> 2025-11-24T21:09:23.203+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 548.489562988s of 548.556335449s, submitted: 30
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb4000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4782) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:21.206907+0000 osd.1 (osd.1) 4782 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4783) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:22.253372+0000 osd.1 (osd.1) 4783 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:54.147099+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4784 sent 4783 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:23.204715+0000 osd.1 (osd.1) 4784 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3474> 2025-11-24T21:09:24.157+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4784) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:23.204715+0000 osd.1 (osd.1) 4784 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3469> 2025-11-24T21:09:25.134+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:55.147331+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4786 sent 4784 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:24.159407+0000 osd.1 (osd.1) 4785 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:25.136160+0000 osd.1 (osd.1) 4786 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:56.147560+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3456> 2025-11-24T21:09:26.149+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4786) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:24.159407+0000 osd.1 (osd.1) 4785 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:25.136160+0000 osd.1 (osd.1) 4786 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:57.147819+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4787 sent 4786 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:26.151398+0000 osd.1 (osd.1) 4787 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3444> 2025-11-24T21:09:27.160+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4787) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:26.151398+0000 osd.1 (osd.1) 4787 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:58.148098+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4788 sent 4787 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:27.162066+0000 osd.1 (osd.1) 4788 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3433> 2025-11-24T21:09:28.208+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4788) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:27.162066+0000 osd.1 (osd.1) 4788 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:59.148315+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4789 sent 4788 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:28.210204+0000 osd.1 (osd.1) 4789 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3418> 2025-11-24T21:09:29.219+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4789) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:28.210204+0000 osd.1 (osd.1) 4789 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:00.148526+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4790 sent 4789 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:29.220026+0000 osd.1 (osd.1) 4790 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3407> 2025-11-24T21:09:30.246+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4790) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:29.220026+0000 osd.1 (osd.1) 4790 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:01.148891+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4791 sent 4790 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:30.247168+0000 osd.1 (osd.1) 4791 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3396> 2025-11-24T21:09:31.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4791) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:30.247168+0000 osd.1 (osd.1) 4791 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:02.149114+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4792 sent 4791 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:31.222952+0000 osd.1 (osd.1) 4792 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3384> 2025-11-24T21:09:32.263+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:03.149334+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4793 sent 4792 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:32.264256+0000 osd.1 (osd.1) 4793 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4792) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:31.222952+0000 osd.1 (osd.1) 4792 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3373> 2025-11-24T21:09:33.287+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:04.149638+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4794 sent 4793 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:33.288347+0000 osd.1 (osd.1) 4794 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4793) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:32.264256+0000 osd.1 (osd.1) 4793 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4794) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:33.288347+0000 osd.1 (osd.1) 4794 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3357> 2025-11-24T21:09:34.309+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:05.150019+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4795 sent 4794 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:34.309695+0000 osd.1 (osd.1) 4795 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3347> 2025-11-24T21:09:35.325+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4795) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:34.309695+0000 osd.1 (osd.1) 4795 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:06.150339+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4796 sent 4795 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:35.326523+0000 osd.1 (osd.1) 4796 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3336> 2025-11-24T21:09:36.369+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4796) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:35.326523+0000 osd.1 (osd.1) 4796 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:07.150548+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4797 sent 4796 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:36.369862+0000 osd.1 (osd.1) 4797 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3324> 2025-11-24T21:09:37.344+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4797) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:36.369862+0000 osd.1 (osd.1) 4797 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:08.150804+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4798 sent 4797 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:37.344722+0000 osd.1 (osd.1) 4798 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3313> 2025-11-24T21:09:38.361+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:09.151186+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4799 sent 4798 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:38.362046+0000 osd.1 (osd.1) 4799 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4798) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:37.344722+0000 osd.1 (osd.1) 4798 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3299> 2025-11-24T21:09:39.323+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:10.151468+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4800 sent 4799 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:39.323810+0000 osd.1 (osd.1) 4800 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3290> 2025-11-24T21:09:40.297+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4799) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:38.362046+0000 osd.1 (osd.1) 4799 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4800) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:39.323810+0000 osd.1 (osd.1) 4800 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:11.151765+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4801 sent 4800 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:40.298193+0000 osd.1 (osd.1) 4801 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3277> 2025-11-24T21:09:41.259+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4801) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:40.298193+0000 osd.1 (osd.1) 4801 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:12.152075+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4802 sent 4801 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:41.260659+0000 osd.1 (osd.1) 4802 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3266> 2025-11-24T21:09:42.258+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4802) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:41.260659+0000 osd.1 (osd.1) 4802 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:13.152335+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4803 sent 4802 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:42.259270+0000 osd.1 (osd.1) 4803 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3254> 2025-11-24T21:09:43.240+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4803) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:42.259270+0000 osd.1 (osd.1) 4803 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:14.152562+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4804 sent 4803 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:43.241385+0000 osd.1 (osd.1) 4804 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3240> 2025-11-24T21:09:44.228+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4804) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:43.241385+0000 osd.1 (osd.1) 4804 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:15.152956+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4805 sent 4804 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:44.229371+0000 osd.1 (osd.1) 4805 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3229> 2025-11-24T21:09:45.244+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4805) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:44.229371+0000 osd.1 (osd.1) 4805 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:16.153222+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4806 sent 4805 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:45.244869+0000 osd.1 (osd.1) 4806 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3218> 2025-11-24T21:09:46.200+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4806) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:45.244869+0000 osd.1 (osd.1) 4806 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:17.153445+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4807 sent 4806 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:46.201259+0000 osd.1 (osd.1) 4807 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3207> 2025-11-24T21:09:47.242+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:18.153715+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4808 sent 4807 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:47.242819+0000 osd.1 (osd.1) 4808 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4807) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:46.201259+0000 osd.1 (osd.1) 4807 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3196> 2025-11-24T21:09:48.209+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:19.153993+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4809 sent 4808 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:48.209873+0000 osd.1 (osd.1) 4809 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3184> 2025-11-24T21:09:49.193+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4808) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:47.242819+0000 osd.1 (osd.1) 4808 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4809) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:48.209873+0000 osd.1 (osd.1) 4809 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:20.154220+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4810 sent 4809 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:49.194348+0000 osd.1 (osd.1) 4810 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3170> 2025-11-24T21:09:50.241+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4810) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:49.194348+0000 osd.1 (osd.1) 4810 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:21.154463+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4811 sent 4810 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:50.242080+0000 osd.1 (osd.1) 4811 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3159> 2025-11-24T21:09:51.237+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4811) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:50.242080+0000 osd.1 (osd.1) 4811 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:22.154765+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4812 sent 4811 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:51.238010+0000 osd.1 (osd.1) 4812 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3149> 2025-11-24T21:09:52.200+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4812) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:51.238010+0000 osd.1 (osd.1) 4812 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:23.155007+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4813 sent 4812 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:52.201639+0000 osd.1 (osd.1) 4813 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3138> 2025-11-24T21:09:53.195+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4813) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:52.201639+0000 osd.1 (osd.1) 4813 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:24.155221+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4814 sent 4813 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:53.196676+0000 osd.1 (osd.1) 4814 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3124> 2025-11-24T21:09:54.161+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4814) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:53.196676+0000 osd.1 (osd.1) 4814 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:25.155468+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4815 sent 4814 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:54.163074+0000 osd.1 (osd.1) 4815 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3112> 2025-11-24T21:09:55.204+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4815) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:54.163074+0000 osd.1 (osd.1) 4815 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:26.155772+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4816 sent 4815 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:55.205710+0000 osd.1 (osd.1) 4816 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3101> 2025-11-24T21:09:56.192+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4816) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:55.205710+0000 osd.1 (osd.1) 4816 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:27.156023+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4817 sent 4816 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:56.193106+0000 osd.1 (osd.1) 4817 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3090> 2025-11-24T21:09:57.212+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4817) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:56.193106+0000 osd.1 (osd.1) 4817 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:28.156269+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4818 sent 4817 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:57.213935+0000 osd.1 (osd.1) 4818 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3079> 2025-11-24T21:09:58.189+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4818) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:57.213935+0000 osd.1 (osd.1) 4818 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:29.156531+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4819 sent 4818 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:58.191252+0000 osd.1 (osd.1) 4819 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3064> 2025-11-24T21:09:59.163+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4819) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:58.191252+0000 osd.1 (osd.1) 4819 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3059> 2025-11-24T21:10:00.123+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:30.156856+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4821 sent 4819 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:09:59.165129+0000 osd.1 (osd.1) 4820 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:00.124978+0000 osd.1 (osd.1) 4821 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4821) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:09:59.165129+0000 osd.1 (osd.1) 4820 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:00.124978+0000 osd.1 (osd.1) 4821 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3046> 2025-11-24T21:10:01.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:31.157120+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4822 sent 4821 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:01.146832+0000 osd.1 (osd.1) 4822 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4822) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:01.146832+0000 osd.1 (osd.1) 4822 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3034> 2025-11-24T21:10:02.124+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:32.157346+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4823 sent 4822 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:02.126499+0000 osd.1 (osd.1) 4823 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4823) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:02.126499+0000 osd.1 (osd.1) 4823 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3023> 2025-11-24T21:10:03.090+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:33.157534+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4824 sent 4823 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:03.091947+0000 osd.1 (osd.1) 4824 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4824) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:03.091947+0000 osd.1 (osd.1) 4824 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -3007> 2025-11-24T21:10:04.072+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:34.157818+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4825 sent 4824 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:04.073830+0000 osd.1 (osd.1) 4825 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4825) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:04.073830+0000 osd.1 (osd.1) 4825 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2996> 2025-11-24T21:10:05.096+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:35.158144+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4826 sent 4825 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:05.097943+0000 osd.1 (osd.1) 4826 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4826) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:05.097943+0000 osd.1 (osd.1) 4826 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2985> 2025-11-24T21:10:06.128+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:36.158427+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4827 sent 4826 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:06.129321+0000 osd.1 (osd.1) 4827 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4827) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:06.129321+0000 osd.1 (osd.1) 4827 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:37.158757+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2971> 2025-11-24T21:10:07.178+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:38.158916+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4828 sent 4827 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:07.178884+0000 osd.1 (osd.1) 4828 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2962> 2025-11-24T21:10:08.178+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4828) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:07.178884+0000 osd.1 (osd.1) 4828 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2953> 2025-11-24T21:10:09.141+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:39.159158+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4830 sent 4828 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:08.179132+0000 osd.1 (osd.1) 4829 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:09.141779+0000 osd.1 (osd.1) 4830 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4830) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:08.179132+0000 osd.1 (osd.1) 4829 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:09.141779+0000 osd.1 (osd.1) 4830 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2940> 2025-11-24T21:10:10.115+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:40.159388+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4831 sent 4830 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:10.116238+0000 osd.1 (osd.1) 4831 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4831) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:10.116238+0000 osd.1 (osd.1) 4831 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2929> 2025-11-24T21:10:11.105+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 24 21:14:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065149123' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:41.159695+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4832 sent 4831 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:11.105708+0000 osd.1 (osd.1) 4832 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4832) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:11.105708+0000 osd.1 (osd.1) 4832 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2918> 2025-11-24T21:10:12.147+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:42.159961+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4833 sent 4832 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:12.148141+0000 osd.1 (osd.1) 4833 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4833) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:12.148141+0000 osd.1 (osd.1) 4833 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:43.160219+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2903> 2025-11-24T21:10:13.190+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2897> 2025-11-24T21:10:14.145+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:44.160399+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4835 sent 4833 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:13.190854+0000 osd.1 (osd.1) 4834 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:14.145722+0000 osd.1 (osd.1) 4835 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4835) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:13.190854+0000 osd.1 (osd.1) 4834 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:14.145722+0000 osd.1 (osd.1) 4835 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:45.160825+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2881> 2025-11-24T21:10:15.195+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:46.161100+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4836 sent 4835 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:15.195717+0000 osd.1 (osd.1) 4836 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2871> 2025-11-24T21:10:16.216+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4836) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:15.195717+0000 osd.1 (osd.1) 4836 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:47.161362+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4837 sent 4836 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:16.217274+0000 osd.1 (osd.1) 4837 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2859> 2025-11-24T21:10:17.170+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2856> 2025-11-24T21:10:18.133+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:48.161668+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 4839 sent 4837 num 3 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:17.170923+0000 osd.1 (osd.1) 4838 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:18.134222+0000 osd.1 (osd.1) 4839 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4837) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:16.217274+0000 osd.1 (osd.1) 4837 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
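[annotation] The log_client chatter above encodes a small cumulative-ack protocol: each cluster-log entry gets the next sequence number (last_log), "will send" batches everything past the "sent" watermark into one message to the mon, and handle_log_ack log(last N) retires every queued entry with sequence <= N, printing "logged ..." for each. A toy reconstruction of that bookkeeping, illustrative only and not Ceph's actual implementation:

    from collections import deque

    class LogClient:
        def __init__(self):
            self.queue = deque()   # (seq, message) not yet acked
            self.last_log = 0      # last sequence number assigned
            self.sent = 0          # highest sequence already sent to the mon

        def enqueue(self, msg):
            self.last_log += 1
            self.queue.append((self.last_log, msg))

        def send_pending(self):
            batch = [(s, m) for s, m in self.queue if s > self.sent]
            if batch:
                self.sent = batch[-1][0]
            return batch           # would go to the mon in one message

        def handle_ack(self, last):
            # Cumulative ack: everything up to and including `last` is logged.
            while self.queue and self.queue[0][0] <= last:
                seq, msg = self.queue.popleft()
                print(f"logged {seq}: {msg}")

    lc = LogClient()
    lc.enqueue("21 slow requests ...")
    lc.enqueue("21 slow requests ...")
    print("will send", [s for s, _ in lc.send_pending()])  # [1, 2]
    lc.handle_ack(2)                                       # retires both

This matches the counters visible in the lines above: "log_queue is N" is the queue length, "unsent" is last_log minus sent, and an ack for 4839 retires 4838 and 4839 together.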
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
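[annotation] The cache-tuning block above is steady-state housekeeping. tune_memory works against the 4294967296-byte (4 GiB) memory target shown in the line, and because the mapped heap (~94 MiB) sits far under it, the assigned budget never moves (old mem == new mem == 2845415832). The rocksdb lines re-announce the high-priority pool ratios on each pass; 0.285714 and 0.0555556 are numerically 2/7 and 1/18. _resize_shards then shows how the budget is carved up versus what is actually used, which this sketch tabulates straight from the logged key/value pairs:

    import re

    # Parse the _resize_shards line into alloc/used pairs per cache and
    # compare against the overall budget. Values copied from the log above.
    line = ("_resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 "
            "kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 "
            "meta_used: 1472679 data_alloc: 218103808 data_used: 520192")
    vals = {k: int(v) for k, v in re.findall(r"(\w+): (\d+)", line)}
    for cache in ("kv", "kv_onode", "meta", "data"):
        alloc, used = vals[f"{cache}_alloc"], vals[f"{cache}_used"]
        print(f"{cache:9s} {alloc / 2**20:8.1f} MiB allocated, "
              f"{used / 2**20:10.3f} MiB used")
    print(f"budget {vals['cache_size'] / 2**30:.2f} GiB of the 4 GiB target")

The near-empty usage columns (kilobytes used against gigabytes allocated) fit the picture of an OSD that is stuck rather than busy.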
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2841> 2025-11-24T21:10:19.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:49.161866+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 4840 sent 4839 num 3 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:19.133315+0000 osd.1 (osd.1) 4840 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4839) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:17.170923+0000 osd.1 (osd.1) 4838 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:18.134222+0000 osd.1 (osd.1) 4839 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4840) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:19.133315+0000 osd.1 (osd.1) 4840 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:50.162029+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2824> 2025-11-24T21:10:20.167+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:51.162154+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4841 sent 4840 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:20.169170+0000 osd.1 (osd.1) 4841 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2815> 2025-11-24T21:10:21.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4841) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:20.169170+0000 osd.1 (osd.1) 4841 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2810> 2025-11-24T21:10:22.144+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:52.162317+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4843 sent 4841 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:21.166300+0000 osd.1 (osd.1) 4842 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:22.144945+0000 osd.1 (osd.1) 4843 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 54550528 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4843) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:21.166300+0000 osd.1 (osd.1) 4842 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:22.144945+0000 osd.1 (osd.1) 4843 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2796> 2025-11-24T21:10:23.095+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:53.162484+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4844 sent 4843 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:23.096667+0000 osd.1 (osd.1) 4844 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4844) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:23.096667+0000 osd.1 (osd.1) 4844 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2782> 2025-11-24T21:10:24.079+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:54.163171+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4845 sent 4844 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:24.080772+0000 osd.1 (osd.1) 4845 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4845) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:24.080772+0000 osd.1 (osd.1) 4845 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2771> 2025-11-24T21:10:25.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:55.163640+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4846 sent 4845 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:25.051466+0000 osd.1 (osd.1) 4846 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2762> 2025-11-24T21:10:26.080+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4846) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:25.051466+0000 osd.1 (osd.1) 4846 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:56.163849+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4847 sent 4846 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:26.081225+0000 osd.1 (osd.1) 4847 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2750> 2025-11-24T21:10:27.032+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4847) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:26.081225+0000 osd.1 (osd.1) 4847 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:57.164033+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4848 sent 4847 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:27.033487+0000 osd.1 (osd.1) 4848 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2739> 2025-11-24T21:10:28.058+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:58.164248+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4849 sent 4848 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:28.059386+0000 osd.1 (osd.1) 4849 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4848) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:27.033487+0000 osd.1 (osd.1) 4848 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2724> 2025-11-24T21:10:29.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:59.164466+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4850 sent 4849 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:29.051768+0000 osd.1 (osd.1) 4850 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4849) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:28.059386+0000 osd.1 (osd.1) 4849 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4850) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:29.051768+0000 osd.1 (osd.1) 4850 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2711> 2025-11-24T21:10:30.067+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:00.169955+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4851 sent 4850 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:30.068089+0000 osd.1 (osd.1) 4851 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4851) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:30.068089+0000 osd.1 (osd.1) 4851 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2700> 2025-11-24T21:10:31.090+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:01.170200+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4852 sent 4851 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:31.091554+0000 osd.1 (osd.1) 4852 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4852) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:31.091554+0000 osd.1 (osd.1) 4852 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2688> 2025-11-24T21:10:32.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:02.170390+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4853 sent 4852 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:32.137342+0000 osd.1 (osd.1) 4853 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4853) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:32.137342+0000 osd.1 (osd.1) 4853 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2677> 2025-11-24T21:10:33.135+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:03.170628+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4854 sent 4853 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:33.136671+0000 osd.1 (osd.1) 4854 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4854) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:33.136671+0000 osd.1 (osd.1) 4854 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2662> 2025-11-24T21:10:34.137+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:04.170868+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4855 sent 4854 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:34.138342+0000 osd.1 (osd.1) 4855 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4855) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:34.138342+0000 osd.1 (osd.1) 4855 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2651> 2025-11-24T21:10:35.161+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:05.171052+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4856 sent 4855 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:35.163239+0000 osd.1 (osd.1) 4856 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4856) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:35.163239+0000 osd.1 (osd.1) 4856 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:06.171303+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2637> 2025-11-24T21:10:36.193+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:07.171412+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4857 sent 4856 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:36.194556+0000 osd.1 (osd.1) 4857 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2628> 2025-11-24T21:10:37.177+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4857) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:36.194556+0000 osd.1 (osd.1) 4857 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2623> 2025-11-24T21:10:38.137+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:08.171715+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4859 sent 4857 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:37.178845+0000 osd.1 (osd.1) 4858 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:38.139021+0000 osd.1 (osd.1) 4859 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2609> 2025-11-24T21:10:39.116+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:09.171968+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 4860 sent 4859 num 3 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:39.118121+0000 osd.1 (osd.1) 4860 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4859) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:37.178845+0000 osd.1 (osd.1) 4858 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:38.139021+0000 osd.1 (osd.1) 4859 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2597> 2025-11-24T21:10:40.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:10.172234+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4861 sent 4860 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:40.134337+0000 osd.1 (osd.1) 4861 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4860) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:39.118121+0000 osd.1 (osd.1) 4860 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4861) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:40.134337+0000 osd.1 (osd.1) 4861 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2583> 2025-11-24T21:10:41.091+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:11.172516+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4862 sent 4861 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:41.092412+0000 osd.1 (osd.1) 4862 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4862) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:41.092412+0000 osd.1 (osd.1) 4862 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2572> 2025-11-24T21:10:42.073+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:12.172842+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4863 sent 4862 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:42.074473+0000 osd.1 (osd.1) 4863 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4863) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:42.074473+0000 osd.1 (osd.1) 4863 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2560> 2025-11-24T21:10:43.062+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:13.173143+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4864 sent 4863 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:43.064358+0000 osd.1 (osd.1) 4864 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4864) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:43.064358+0000 osd.1 (osd.1) 4864 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2546> 2025-11-24T21:10:44.087+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:14.173385+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4865 sent 4864 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:44.088288+0000 osd.1 (osd.1) 4865 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4865) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:44.088288+0000 osd.1 (osd.1) 4865 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2534> 2025-11-24T21:10:45.065+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:15.173683+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4866 sent 4865 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:45.066315+0000 osd.1 (osd.1) 4866 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2525> 2025-11-24T21:10:46.028+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4866) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:45.066315+0000 osd.1 (osd.1) 4866 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:16.174006+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4867 sent 4866 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:46.028423+0000 osd.1 (osd.1) 4867 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2514> 2025-11-24T21:10:46.983+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:17.174288+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4868 sent 4867 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:46.984365+0000 osd.1 (osd.1) 4868 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4867) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:46.028423+0000 osd.1 (osd.1) 4867 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2503> 2025-11-24T21:10:47.938+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:18.174514+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4869 sent 4868 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:47.938907+0000 osd.1 (osd.1) 4869 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4868) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:46.984365+0000 osd.1 (osd.1) 4868 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4869) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:47.938907+0000 osd.1 (osd.1) 4869 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2490> 2025-11-24T21:10:48.986+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:19.174811+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4870 sent 4869 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:48.987160+0000 osd.1 (osd.1) 4870 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4870) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:48.987160+0000 osd.1 (osd.1) 4870 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2475> 2025-11-24T21:10:50.011+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:20.175072+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4871 sent 4870 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:50.012482+0000 osd.1 (osd.1) 4871 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4871) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:50.012482+0000 osd.1 (osd.1) 4871 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2463> 2025-11-24T21:10:51.007+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:21.175302+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4872 sent 4871 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:51.007898+0000 osd.1 (osd.1) 4872 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4872) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:51.007898+0000 osd.1 (osd.1) 4872 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2452> 2025-11-24T21:10:52.050+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:22.175555+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4873 sent 4872 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:52.051739+0000 osd.1 (osd.1) 4873 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4873) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:52.051739+0000 osd.1 (osd.1) 4873 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2440> 2025-11-24T21:10:53.027+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:23.175846+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4874 sent 4873 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:53.028689+0000 osd.1 (osd.1) 4874 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4874) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:53.028689+0000 osd.1 (osd.1) 4874 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2429> 2025-11-24T21:10:54.063+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:24.176158+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4875 sent 4874 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:54.064366+0000 osd.1 (osd.1) 4875 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4875) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:54.064366+0000 osd.1 (osd.1) 4875 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2415> 2025-11-24T21:10:55.027+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:25.176458+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4876 sent 4875 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:55.028766+0000 osd.1 (osd.1) 4876 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4876) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:55.028766+0000 osd.1 (osd.1) 4876 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2403> 2025-11-24T21:10:56.019+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:26.176716+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4877 sent 4876 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:56.019728+0000 osd.1 (osd.1) 4877 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4877) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:56.019728+0000 osd.1 (osd.1) 4877 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2392> 2025-11-24T21:10:56.998+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:27.176932+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4878 sent 4877 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:56.999256+0000 osd.1 (osd.1) 4878 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4878) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:56.999256+0000 osd.1 (osd.1) 4878 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2381> 2025-11-24T21:10:58.012+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:28.177140+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4879 sent 4878 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:58.013550+0000 osd.1 (osd.1) 4879 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4879) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:58.013550+0000 osd.1 (osd.1) 4879 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2369> 2025-11-24T21:10:58.988+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:29.177334+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4880 sent 4879 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:58.989648+0000 osd.1 (osd.1) 4880 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4880) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:58.989648+0000 osd.1 (osd.1) 4880 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2355> 2025-11-24T21:10:59.999+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:30.177656+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4881 sent 4880 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:10:59.999741+0000 osd.1 (osd.1) 4881 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4881) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:10:59.999741+0000 osd.1 (osd.1) 4881 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2344> 2025-11-24T21:11:00.998+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:31.177898+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4882 sent 4881 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:00.998891+0000 osd.1 (osd.1) 4882 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4882) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:00.998891+0000 osd.1 (osd.1) 4882 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2333> 2025-11-24T21:11:02.047+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:32.178139+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4883 sent 4882 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:02.048313+0000 osd.1 (osd.1) 4883 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4883) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:02.048313+0000 osd.1 (osd.1) 4883 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2321> 2025-11-24T21:11:03.058+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:33.178413+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4884 sent 4883 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:03.059238+0000 osd.1 (osd.1) 4884 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4884) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:03.059238+0000 osd.1 (osd.1) 4884 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2310> 2025-11-24T21:11:04.059+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:34.178700+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4885 sent 4884 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:04.059915+0000 osd.1 (osd.1) 4885 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4885) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:04.059915+0000 osd.1 (osd.1) 4885 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2296> 2025-11-24T21:11:05.030+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:35.178922+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4886 sent 4885 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:05.030978+0000 osd.1 (osd.1) 4886 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4886) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:05.030978+0000 osd.1 (osd.1) 4886 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2285> 2025-11-24T21:11:06.028+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:36.179128+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4887 sent 4886 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:06.029320+0000 osd.1 (osd.1) 4887 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4887) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:06.029320+0000 osd.1 (osd.1) 4887 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2274> 2025-11-24T21:11:07.036+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:37.179374+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4888 sent 4887 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:07.037700+0000 osd.1 (osd.1) 4888 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4888) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:07.037700+0000 osd.1 (osd.1) 4888 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2263> 2025-11-24T21:11:08.063+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:38.179682+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4889 sent 4888 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:08.064720+0000 osd.1 (osd.1) 4889 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4889) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:08.064720+0000 osd.1 (osd.1) 4889 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2251> 2025-11-24T21:11:09.074+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:39.179912+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4890 sent 4889 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:09.075304+0000 osd.1 (osd.1) 4890 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4890) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:09.075304+0000 osd.1 (osd.1) 4890 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2237> 2025-11-24T21:11:10.079+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:40.180125+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4891 sent 4890 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:10.081301+0000 osd.1 (osd.1) 4891 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4891) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:10.081301+0000 osd.1 (osd.1) 4891 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2226> 2025-11-24T21:11:11.076+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:41.180323+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4892 sent 4891 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:11.076961+0000 osd.1 (osd.1) 4892 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4892) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:11.076961+0000 osd.1 (osd.1) 4892 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2215> 2025-11-24T21:11:12.063+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:42.180741+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4893 sent 4892 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:12.065206+0000 osd.1 (osd.1) 4893 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4893) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:12.065206+0000 osd.1 (osd.1) 4893 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2203> 2025-11-24T21:11:13.080+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:43.181001+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4894 sent 4893 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:13.081159+0000 osd.1 (osd.1) 4894 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4894) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:13.081159+0000 osd.1 (osd.1) 4894 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2192> 2025-11-24T21:11:14.093+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:44.181231+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4895 sent 4894 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:14.095174+0000 osd.1 (osd.1) 4895 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4895) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:14.095174+0000 osd.1 (osd.1) 4895 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2177> 2025-11-24T21:11:15.129+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:45.181472+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4896 sent 4895 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:15.130920+0000 osd.1 (osd.1) 4896 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2168> 2025-11-24T21:11:16.146+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:46.181774+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4897 sent 4896 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:16.148221+0000 osd.1 (osd.1) 4897 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4896) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:15.130920+0000 osd.1 (osd.1) 4896 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2157> 2025-11-24T21:11:17.171+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:47.181998+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4898 sent 4897 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:17.173107+0000 osd.1 (osd.1) 4898 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4897) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:16.148221+0000 osd.1 (osd.1) 4897 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4898) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:17.173107+0000 osd.1 (osd.1) 4898 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2144> 2025-11-24T21:11:18.132+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:48.182230+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4899 sent 4898 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:18.134234+0000 osd.1 (osd.1) 4899 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4899) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:18.134234+0000 osd.1 (osd.1) 4899 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2132> 2025-11-24T21:11:19.165+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:49.182485+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4900 sent 4899 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:19.166968+0000 osd.1 (osd.1) 4900 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4900) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:19.166968+0000 osd.1 (osd.1) 4900 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2118> 2025-11-24T21:11:20.167+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:50.182707+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4901 sent 4900 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:20.168503+0000 osd.1 (osd.1) 4901 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4901) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:20.168503+0000 osd.1 (osd.1) 4901 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2107> 2025-11-24T21:11:21.142+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:51.182959+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4902 sent 4901 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:21.143723+0000 osd.1 (osd.1) 4902 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4902) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:21.143723+0000 osd.1 (osd.1) 4902 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2096> 2025-11-24T21:11:22.100+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:52.183174+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4903 sent 4902 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:22.100883+0000 osd.1 (osd.1) 4903 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4903) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:22.100883+0000 osd.1 (osd.1) 4903 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2085> 2025-11-24T21:11:23.149+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:53.183407+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4904 sent 4903 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:23.149562+0000 osd.1 (osd.1) 4904 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4904) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:23.149562+0000 osd.1 (osd.1) 4904 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2072> 2025-11-24T21:11:24.166+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:54.183680+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4905 sent 4904 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:24.167290+0000 osd.1 (osd.1) 4905 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4905) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:24.167290+0000 osd.1 (osd.1) 4905 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2057> 2025-11-24T21:11:25.173+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:55.183921+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4906 sent 4905 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:25.174319+0000 osd.1 (osd.1) 4906 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4906) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:25.174319+0000 osd.1 (osd.1) 4906 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2046> 2025-11-24T21:11:26.149+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:56.184185+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4907 sent 4906 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:26.149467+0000 osd.1 (osd.1) 4907 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4907) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:26.149467+0000 osd.1 (osd.1) 4907 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2035> 2025-11-24T21:11:27.168+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:57.184463+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4908 sent 4907 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:27.169451+0000 osd.1 (osd.1) 4908 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4908) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:27.169451+0000 osd.1 (osd.1) 4908 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f8fb5000/0x0/0x4ffc00000, data 0x25a668d/0x26b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2022> 2025-11-24T21:11:28.160+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:58.184684+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4909 sent 4908 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:28.161038+0000 osd.1 (osd.1) 4909 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4909) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:28.161038+0000 osd.1 (osd.1) 4909 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -2011> 2025-11-24T21:11:29.181+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:59.184961+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4910 sent 4909 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:29.181960+0000 osd.1 (osd.1) 4910 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472679 data_alloc: 218103808 data_used: 520192
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4910) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:29.181960+0000 osd.1 (osd.1) 4910 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:00.185171+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1994> 2025-11-24T21:11:30.222+0000 7f1a67169640 -1 osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3f8bf400
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 127.351654053s of 127.384529114s, submitted: 10
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 54542336 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _renew_subs
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:01.185328+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4911 sent 4910 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:30.223303+0000 osd.1 (osd.1) 4911 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 193 ms_handle_reset con 0x55ba3f8bf400 session 0x55ba3cc02960
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1979> 2025-11-24T21:11:31.220+0000 7f1a67169640 -1 osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f9423000/0x0/0x4ffc00000, data 0x2138281/0x224a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 54517760 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4911) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:30.223303+0000 osd.1 (osd.1) 4911 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:02.185526+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4912 sent 4911 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:31.220682+0000 osd.1 (osd.1) 4912 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1967> 2025-11-24T21:11:32.263+0000 7f1a67169640 -1 osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 54517760 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4912) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:31.220682+0000 osd.1 (osd.1) 4912 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:03.185813+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4913 sent 4912 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:32.263710+0000 osd.1 (osd.1) 4913 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3c5bac00
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1955> 2025-11-24T21:11:33.272+0000 7f1a67169640 -1 osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 193 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 54509568 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4913) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:32.263710+0000 osd.1 (osd.1) 4913 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _renew_subs
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 ms_handle_reset con 0x55ba3c5bac00 session 0x55ba3ba72780
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:04.186011+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4914 sent 4913 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:33.273168+0000 osd.1 (osd.1) 4914 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363794 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1937> 2025-11-24T21:11:34.287+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 54460416 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4914) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:33.273168+0000 osd.1 (osd.1) 4914 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:05.186265+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4915 sent 4914 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:34.288242+0000 osd.1 (osd.1) 4915 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1926> 2025-11-24T21:11:35.257+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 heartbeat osd_stat(store_statfs(0x4fa091000/0x0/0x4ffc00000, data 0x14c9e85/0x15dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 54460416 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4915) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:34.288242+0000 osd.1 (osd.1) 4915 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:06.186479+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4916 sent 4915 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:35.258256+0000 osd.1 (osd.1) 4916 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1914> 2025-11-24T21:11:36.219+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 54460416 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4916) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:35.258256+0000 osd.1 (osd.1) 4916 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1909> 2025-11-24T21:11:37.173+0000 7f1a67169640 -1 osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:07.186673+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4918 sent 4916 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:36.219949+0000 osd.1 (osd.1) 4917 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:37.174365+0000 osd.1 (osd.1) 4918 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 54460416 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4918) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:36.219949+0000 osd.1 (osd.1) 4917 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:37.174365+0000 osd.1 (osd.1) 4918 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3d70a800
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _renew_subs
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1892> 2025-11-24T21:11:38.132+0000 7f1a67169640 -1 osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:08.186862+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4919 sent 4918 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:38.132763+0000 osd.1 (osd.1) 4919 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98836480 unmapped: 54427648 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4919) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:38.132763+0000 osd.1 (osd.1) 4919 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x14cb95a/0x15df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1880> 2025-11-24T21:11:39.159+0000 7f1a67169640 -1 osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 195 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:09.187140+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4920 sent 4919 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:39.160348+0000 osd.1 (osd.1) 4920 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476464 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 98836480 unmapped: 54427648 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4920) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:39.160348+0000 osd.1 (osd.1) 4920 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 ms_handle_reset con 0x55ba3d70a800 session 0x55ba3ba70780
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:10.187417+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1861> 2025-11-24T21:11:40.197+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 53379072 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:11.187658+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4921 sent 4920 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:40.198053+0000 osd.1 (osd.1) 4921 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1852> 2025-11-24T21:11:41.201+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 53379072 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4921) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:40.198053+0000 osd.1 (osd.1) 4921 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:12.187875+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4922 sent 4921 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:41.201811+0000 osd.1 (osd.1) 4922 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1841> 2025-11-24T21:11:42.207+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 53379072 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4922) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:41.201811+0000 osd.1 (osd.1) 4922 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1836> 2025-11-24T21:11:43.168+0000 7f1a67169640 -1 osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:13.188140+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4924 sent 4922 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:42.207914+0000 osd.1 (osd.1) 4923 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:43.168960+0000 osd.1 (osd.1) 4924 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3d70ac00
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 ms_handle_reset con 0x55ba3d70ac00 session 0x55ba3cc034a0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.555040359s of 12.978030205s, submitted: 74
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4924) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:42.207914+0000 osd.1 (osd.1) 4923 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:43.168960+0000 osd.1 (osd.1) 4924 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:14.188436+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1816> 2025-11-24T21:11:44.197+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:15.188706+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4925 sent 4924 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:44.198065+0000 osd.1 (osd.1) 4925 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1803> 2025-11-24T21:11:45.195+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4925) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:44.198065+0000 osd.1 (osd.1) 4925 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1798> 2025-11-24T21:11:46.148+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:16.188984+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4927 sent 4925 num 2 unsent 2 sending 2
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:45.196839+0000 osd.1 (osd.1) 4926 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:46.150156+0000 osd.1 (osd.1) 4927 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4927) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:45.196839+0000 osd.1 (osd.1) 4926 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:46.150156+0000 osd.1 (osd.1) 4927 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1785> 2025-11-24T21:11:47.125+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:17.189271+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4928 sent 4927 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:47.126219+0000 osd.1 (osd.1) 4928 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4928) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:47.126219+0000 osd.1 (osd.1) 4928 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1774> 2025-11-24T21:11:48.168+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:18.189493+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4929 sent 4928 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:48.170670+0000 osd.1 (osd.1) 4929 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4929) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:48.170670+0000 osd.1 (osd.1) 4929 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:19.189704+0000)
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1760> 2025-11-24T21:11:49.204+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:20.189928+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4930 sent 4929 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:49.206051+0000 osd.1 (osd.1) 4930 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1747> 2025-11-24T21:11:50.251+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4930) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:49.206051+0000 osd.1 (osd.1) 4930 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:21.190216+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4931 sent 4930 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:50.252679+0000 osd.1 (osd.1) 4931 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1736> 2025-11-24T21:11:51.292+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4931) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:50.252679+0000 osd.1 (osd.1) 4931 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:22.190440+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4932 sent 4931 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:51.293746+0000 osd.1 (osd.1) 4932 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1725> 2025-11-24T21:11:52.322+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4932) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:51.293746+0000 osd.1 (osd.1) 4932 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:23.190643+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4933 sent 4932 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:52.323881+0000 osd.1 (osd.1) 4933 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1713> 2025-11-24T21:11:53.337+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4933) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:52.323881+0000 osd.1 (osd.1) 4933 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:24.190864+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4934 sent 4933 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:53.339049+0000 osd.1 (osd.1) 4934 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1698> 2025-11-24T21:11:54.342+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4934) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:53.339049+0000 osd.1 (osd.1) 4934 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:25.191095+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4935 sent 4934 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:54.343981+0000 osd.1 (osd.1) 4935 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1687> 2025-11-24T21:11:55.334+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4935) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:54.343981+0000 osd.1 (osd.1) 4935 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:26.191297+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4936 sent 4935 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:55.336200+0000 osd.1 (osd.1) 4936 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1676> 2025-11-24T21:11:56.350+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4936) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:55.336200+0000 osd.1 (osd.1) 4936 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:27.191555+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4937 sent 4936 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:56.352215+0000 osd.1 (osd.1) 4937 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1665> 2025-11-24T21:11:57.318+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4937) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:56.352215+0000 osd.1 (osd.1) 4937 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:28.191852+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4938 sent 4937 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:57.320635+0000 osd.1 (osd.1) 4938 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1654> 2025-11-24T21:11:58.365+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4938) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:57.320635+0000 osd.1 (osd.1) 4938 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:29.192050+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4939 sent 4938 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:58.366926+0000 osd.1 (osd.1) 4939 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1642> 2025-11-24T21:11:59.335+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4939) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:58.366926+0000 osd.1 (osd.1) 4939 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:30.192285+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4940 sent 4939 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:11:59.336223+0000 osd.1 (osd.1) 4940 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1628> 2025-11-24T21:12:00.369+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4940) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:11:59.336223+0000 osd.1 (osd.1) 4940 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:31.192513+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4941 sent 4940 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:00.370011+0000 osd.1 (osd.1) 4941 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1617> 2025-11-24T21:12:01.412+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4941) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:00.370011+0000 osd.1 (osd.1) 4941 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:32.192760+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4942 sent 4941 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:01.413046+0000 osd.1 (osd.1) 4942 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1606> 2025-11-24T21:12:02.451+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4942) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:01.413046+0000 osd.1 (osd.1) 4942 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:33.231133+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4943 sent 4942 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:02.452362+0000 osd.1 (osd.1) 4943 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1595> 2025-11-24T21:12:03.405+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4943) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:02.452362+0000 osd.1 (osd.1) 4943 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:34.231414+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4944 sent 4943 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:03.406264+0000 osd.1 (osd.1) 4944 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1581> 2025-11-24T21:12:04.437+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4944) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:03.406264+0000 osd.1 (osd.1) 4944 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:35.231693+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4945 sent 4944 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:04.437794+0000 osd.1 (osd.1) 4945 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1568> 2025-11-24T21:12:05.456+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4945) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:04.437794+0000 osd.1 (osd.1) 4945 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:36.232007+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4946 sent 4945 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:05.457323+0000 osd.1 (osd.1) 4946 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1556> 2025-11-24T21:12:06.422+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4946) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:05.457323+0000 osd.1 (osd.1) 4946 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:37.232318+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4947 sent 4946 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:06.423252+0000 osd.1 (osd.1) 4947 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1544> 2025-11-24T21:12:07.424+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4947) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:06.423252+0000 osd.1 (osd.1) 4947 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:38.232632+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4948 sent 4947 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:07.425674+0000 osd.1 (osd.1) 4948 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1533> 2025-11-24T21:12:08.436+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4948) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:07.425674+0000 osd.1 (osd.1) 4948 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:39.232837+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4949 sent 4948 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:08.436813+0000 osd.1 (osd.1) 4949 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1519> 2025-11-24T21:12:09.442+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4949) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:08.436813+0000 osd.1 (osd.1) 4949 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:40.233092+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4950 sent 4949 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:09.443073+0000 osd.1 (osd.1) 4950 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1507> 2025-11-24T21:12:10.424+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:41.233367+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4951 sent 4950 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:10.425038+0000 osd.1 (osd.1) 4951 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4950) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:09.443073+0000 osd.1 (osd.1) 4950 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1496> 2025-11-24T21:12:11.377+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:42.233638+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4952 sent 4951 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:11.378074+0000 osd.1 (osd.1) 4952 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4951) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:10.425038+0000 osd.1 (osd.1) 4951 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4952) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:11.378074+0000 osd.1 (osd.1) 4952 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1483> 2025-11-24T21:12:12.425+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:43.233950+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4953 sent 4952 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:12.426206+0000 osd.1 (osd.1) 4953 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4953) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:12.426206+0000 osd.1 (osd.1) 4953 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1472> 2025-11-24T21:12:13.437+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:44.234245+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4954 sent 4953 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:13.438005+0000 osd.1 (osd.1) 4954 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4954) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:13.438005+0000 osd.1 (osd.1) 4954 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1458> 2025-11-24T21:12:14.437+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:45.235080+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4955 sent 4954 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:14.438366+0000 osd.1 (osd.1) 4955 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4955) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:14.438366+0000 osd.1 (osd.1) 4955 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1446> 2025-11-24T21:12:15.465+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:46.235239+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4956 sent 4955 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:15.465748+0000 osd.1 (osd.1) 4956 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4956) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:15.465748+0000 osd.1 (osd.1) 4956 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1435> 2025-11-24T21:12:16.476+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:47.235504+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4957 sent 4956 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:16.476850+0000 osd.1 (osd.1) 4957 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4957) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:16.476850+0000 osd.1 (osd.1) 4957 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1424> 2025-11-24T21:12:17.518+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:48.235722+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4958 sent 4957 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:17.519149+0000 osd.1 (osd.1) 4958 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4958) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:17.519149+0000 osd.1 (osd.1) 4958 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1413> 2025-11-24T21:12:18.567+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:49.236193+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4959 sent 4958 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:18.568433+0000 osd.1 (osd.1) 4959 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4959) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:18.568433+0000 osd.1 (osd.1) 4959 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1399> 2025-11-24T21:12:19.593+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:50.236468+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4960 sent 4959 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:19.593811+0000 osd.1 (osd.1) 4960 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4960) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:19.593811+0000 osd.1 (osd.1) 4960 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1387> 2025-11-24T21:12:20.580+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:51.236712+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4961 sent 4960 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:20.580771+0000 osd.1 (osd.1) 4961 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4961) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:20.580771+0000 osd.1 (osd.1) 4961 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1376> 2025-11-24T21:12:21.620+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:52.236965+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4962 sent 4961 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:21.622003+0000 osd.1 (osd.1) 4962 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1367> 2025-11-24T21:12:22.601+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4962) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:21.622003+0000 osd.1 (osd.1) 4962 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:53.237295+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4963 sent 4962 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:22.602082+0000 osd.1 (osd.1) 4963 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1356> 2025-11-24T21:12:23.625+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4963) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:22.602082+0000 osd.1 (osd.1) 4963 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:54.237609+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4964 sent 4963 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:23.627349+0000 osd.1 (osd.1) 4964 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1341> 2025-11-24T21:12:24.659+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4964) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:23.627349+0000 osd.1 (osd.1) 4964 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:55.237961+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4965 sent 4964 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:24.660309+0000 osd.1 (osd.1) 4965 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4965) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:24.660309+0000 osd.1 (osd.1) 4965 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1328> 2025-11-24T21:12:25.690+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:56.238250+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4966 sent 4965 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:25.692016+0000 osd.1 (osd.1) 4966 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4966) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:25.692016+0000 osd.1 (osd.1) 4966 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1317> 2025-11-24T21:12:26.715+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:57.238535+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4967 sent 4966 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:26.716688+0000 osd.1 (osd.1) 4967 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4967) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:26.716688+0000 osd.1 (osd.1) 4967 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1305> 2025-11-24T21:12:27.724+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:58.238792+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4968 sent 4967 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:27.725653+0000 osd.1 (osd.1) 4968 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1296> 2025-11-24T21:12:28.698+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4968) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:27.725653+0000 osd.1 (osd.1) 4968 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:59.239047+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4969 sent 4968 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:28.699379+0000 osd.1 (osd.1) 4969 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1282> 2025-11-24T21:12:29.690+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4969) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:28.699379+0000 osd.1 (osd.1) 4969 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:00.239237+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4970 sent 4969 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:29.691773+0000 osd.1 (osd.1) 4970 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1271> 2025-11-24T21:12:30.728+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4970) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:29.691773+0000 osd.1 (osd.1) 4970 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:01.239465+0000)
Nov 24 21:14:15 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 24 21:14:15 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273099073' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4971 sent 4970 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:30.729567+0000 osd.1 (osd.1) 4971 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9088000/0x0/0x4ffc00000, data 0x24cf002/0x25e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1258> 2025-11-24T21:12:31.680+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4971) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:30.729567+0000 osd.1 (osd.1) 4971 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:02.239710+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4972 sent 4971 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:31.681840+0000 osd.1 (osd.1) 4972 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1247> 2025-11-24T21:12:32.702+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4972) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:31.681840+0000 osd.1 (osd.1) 4972 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:03.239952+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4973 sent 4972 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:32.703992+0000 osd.1 (osd.1) 4973 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1236> 2025-11-24T21:12:33.685+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4973) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:32.703992+0000 osd.1 (osd.1) 4973 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:04.240198+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4974 sent 4973 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:33.686306+0000 osd.1 (osd.1) 4974 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482412 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1222> 2025-11-24T21:12:34.713+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4974) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:33.686306+0000 osd.1 (osd.1) 4974 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3d70b000
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:05.240469+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4975 sent 4974 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:34.714561+0000 osd.1 (osd.1) 4975 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1210> 2025-11-24T21:12:35.687+0000 7f1a67169640 -1 osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4975) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:34.714561+0000 osd.1 (osd.1) 4975 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.143589020s of 52.155838013s, submitted: 13
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 ms_handle_reset con 0x55ba3d70b000 session 0x55ba3cf6f4a0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:06.240662+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4976 sent 4975 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:35.688301+0000 osd.1 (osd.1) 4976 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1196> 2025-11-24T21:12:36.642+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 heartbeat osd_stat(store_statfs(0x4fa085000/0x0/0x4ffc00000, data 0x14d0c29/0x15e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4976) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:35.688301+0000 osd.1 (osd.1) 4976 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:07.240875+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4977 sent 4976 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:36.643866+0000 osd.1 (osd.1) 4977 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1184> 2025-11-24T21:12:37.641+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4977) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:36.643866+0000 osd.1 (osd.1) 4977 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:08.241154+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4978 sent 4977 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:37.641798+0000 osd.1 (osd.1) 4978 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1173> 2025-11-24T21:12:38.633+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4978) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:37.641798+0000 osd.1 (osd.1) 4978 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 heartbeat osd_stat(store_statfs(0x4fa085000/0x0/0x4ffc00000, data 0x14d0c29/0x15e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:09.241350+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4979 sent 4978 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:38.633704+0000 osd.1 (osd.1) 4979 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378234 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1158> 2025-11-24T21:12:39.603+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4979) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:38.633704+0000 osd.1 (osd.1) 4979 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:10.241543+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4980 sent 4979 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:39.603730+0000 osd.1 (osd.1) 4980 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1147> 2025-11-24T21:12:40.610+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4980) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:39.603730+0000 osd.1 (osd.1) 4980 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:11.241716+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4981 sent 4980 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:40.610792+0000 osd.1 (osd.1) 4981 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 heartbeat osd_stat(store_statfs(0x4fa085000/0x0/0x4ffc00000, data 0x14d0c29/0x15e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1135> 2025-11-24T21:12:41.591+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4981) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:40.610792+0000 osd.1 (osd.1) 4981 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:12.241918+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4982 sent 4981 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:41.592125+0000 osd.1 (osd.1) 4982 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1124> 2025-11-24T21:12:42.620+0000 7f1a67169640 -1 osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 53370880 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4982) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:41.592125+0000 osd.1 (osd.1) 4982 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:13.242113+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4983 sent 4982 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:42.620758+0000 osd.1 (osd.1) 4983 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3f8bf400
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 ms_handle_reset con 0x55ba3f8bf400 session 0x55ba3dc0da40
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1110> 2025-11-24T21:12:43.574+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4983) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:42.620758+0000 osd.1 (osd.1) 4983 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:14.242944+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4984 sent 4983 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:43.575445+0000 osd.1 (osd.1) 4984 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1095> 2025-11-24T21:12:44.604+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4984) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:43.575445+0000 osd.1 (osd.1) 4984 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:15.243141+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4985 sent 4984 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:44.605251+0000 osd.1 (osd.1) 4985 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1084> 2025-11-24T21:12:45.645+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4985) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:44.605251+0000 osd.1 (osd.1) 4985 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:16.243334+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4986 sent 4985 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:45.646816+0000 osd.1 (osd.1) 4986 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1073> 2025-11-24T21:12:46.642+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:17.243628+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4987 sent 4986 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:46.643659+0000 osd.1 (osd.1) 4987 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4986) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:45.646816+0000 osd.1 (osd.1) 4986 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1062> 2025-11-24T21:12:47.618+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:18.243948+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 4988 sent 4987 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:47.618796+0000 osd.1 (osd.1) 4988 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4987) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:46.643659+0000 osd.1 (osd.1) 4987 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4988) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:47.618796+0000 osd.1 (osd.1) 4988 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1049> 2025-11-24T21:12:48.616+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:19.244258+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4989 sent 4988 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:48.617368+0000 osd.1 (osd.1) 4989 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4989) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:48.617368+0000 osd.1 (osd.1) 4989 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1034> 2025-11-24T21:12:49.619+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:20.244627+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4990 sent 4989 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:49.619904+0000 osd.1 (osd.1) 4990 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4990) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:49.619904+0000 osd.1 (osd.1) 4990 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1023> 2025-11-24T21:12:50.644+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:21.245029+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4991 sent 4990 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:50.645326+0000 osd.1 (osd.1) 4991 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4991) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:50.645326+0000 osd.1 (osd.1) 4991 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1012> 2025-11-24T21:12:51.620+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:22.245410+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4992 sent 4991 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:51.621152+0000 osd.1 (osd.1) 4992 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4992) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:51.621152+0000 osd.1 (osd.1) 4992 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:  -1000> 2025-11-24T21:12:52.627+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:23.245815+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4993 sent 4992 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:52.628146+0000 osd.1 (osd.1) 4993 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4993) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:52.628146+0000 osd.1 (osd.1) 4993 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -989> 2025-11-24T21:12:53.648+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:24.246268+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4994 sent 4993 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:53.648859+0000 osd.1 (osd.1) 4994 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4994) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:53.648859+0000 osd.1 (osd.1) 4994 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -975> 2025-11-24T21:12:54.679+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:25.246678+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4995 sent 4994 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:54.680315+0000 osd.1 (osd.1) 4995 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4995) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:54.680315+0000 osd.1 (osd.1) 4995 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -963> 2025-11-24T21:12:55.705+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:26.247116+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4996 sent 4995 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:55.706254+0000 osd.1 (osd.1) 4996 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4996) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:55.706254+0000 osd.1 (osd.1) 4996 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -952> 2025-11-24T21:12:56.738+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:27.247492+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4997 sent 4996 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:56.739840+0000 osd.1 (osd.1) 4997 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4997) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:56.739840+0000 osd.1 (osd.1) 4997 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -940> 2025-11-24T21:12:57.772+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:28.247862+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4998 sent 4997 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:57.773365+0000 osd.1 (osd.1) 4998 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4998) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:57.773365+0000 osd.1 (osd.1) 4998 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -929> 2025-11-24T21:12:58.743+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:29.248179+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 4999 sent 4998 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:58.744562+0000 osd.1 (osd.1) 4999 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 4999) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:58.744562+0000 osd.1 (osd.1) 4999 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -914> 2025-11-24T21:12:59.703+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:30.248707+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5000 sent 4999 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:12:59.704686+0000 osd.1 (osd.1) 5000 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5000) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:12:59.704686+0000 osd.1 (osd.1) 5000 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -902> 2025-11-24T21:13:00.682+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:31.248911+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5001 sent 5000 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:00.683954+0000 osd.1 (osd.1) 5001 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5001) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:00.683954+0000 osd.1 (osd.1) 5001 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -891> 2025-11-24T21:13:01.674+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:32.249129+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5002 sent 5001 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:01.676478+0000 osd.1 (osd.1) 5002 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5002) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:01.676478+0000 osd.1 (osd.1) 5002 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -880> 2025-11-24T21:13:02.650+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:33.249357+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5003 sent 5002 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:02.651819+0000 osd.1 (osd.1) 5003 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5003) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:02.651819+0000 osd.1 (osd.1) 5003 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -868> 2025-11-24T21:13:03.677+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:34.249582+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5004 sent 5003 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:03.679246+0000 osd.1 (osd.1) 5004 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5004) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:03.679246+0000 osd.1 (osd.1) 5004 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -854> 2025-11-24T21:13:04.671+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:35.249874+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5005 sent 5004 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:04.672263+0000 osd.1 (osd.1) 5005 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5005) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:04.672263+0000 osd.1 (osd.1) 5005 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -841> 2025-11-24T21:13:05.657+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:36.250128+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5006 sent 5005 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:05.658678+0000 osd.1 (osd.1) 5006 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5006) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:05.658678+0000 osd.1 (osd.1) 5006 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -830> 2025-11-24T21:13:06.701+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:37.250426+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5007 sent 5006 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:06.703225+0000 osd.1 (osd.1) 5007 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5007) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:06.703225+0000 osd.1 (osd.1) 5007 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -818> 2025-11-24T21:13:07.669+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:38.250674+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5008 sent 5007 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:07.671471+0000 osd.1 (osd.1) 5008 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5008) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:07.671471+0000 osd.1 (osd.1) 5008 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -806> 2025-11-24T21:13:08.641+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:39.250900+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5009 sent 5008 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:08.642264+0000 osd.1 (osd.1) 5009 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5009) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:08.642264+0000 osd.1 (osd.1) 5009 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -792> 2025-11-24T21:13:09.629+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:40.251197+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5010 sent 5009 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:09.630261+0000 osd.1 (osd.1) 5010 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -783> 2025-11-24T21:13:10.674+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5010) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:09.630261+0000 osd.1 (osd.1) 5010 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:41.251455+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5011 sent 5010 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:10.675365+0000 osd.1 (osd.1) 5011 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5011) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:10.675365+0000 osd.1 (osd.1) 5011 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -770> 2025-11-24T21:13:11.702+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 53354496 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:42.251746+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5012 sent 5011 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:11.704167+0000 osd.1 (osd.1) 5012 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 ms_handle_reset con 0x55ba3d70b800 session 0x55ba3bb085a0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: handle_auth_request added challenge on 0x55ba3acf3800
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -758> 2025-11-24T21:13:12.661+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5012) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:11.704167+0000 osd.1 (osd.1) 5012 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:43.251994+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5013 sent 5012 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:12.662559+0000 osd.1 (osd.1) 5013 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -747> 2025-11-24T21:13:13.675+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5013) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:12.662559+0000 osd.1 (osd.1) 5013 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:44.252213+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5014 sent 5013 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:13.677434+0000 osd.1 (osd.1) 5014 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
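Interleaved with the warnings, the cache autotuner keeps reporting a steady state: the memory target is 4294967296 bytes (4 GiB, the usual osd_memory_target default), while the heap has only ~95 MiB mapped, so the tuner leaves the aggregate cache size unchanged (old mem == new mem == 2845415832) and _resize_shards keeps the same split across the kv / kv_onode / meta / data shards. A quick decode of those numbers (the labels and field meanings are my reading of the line, not taken from the Ceph source):

    # Labels are mine; hedged reading of the tune_memory/_resize_shards fields.
    target = 4294967296          # osd_memory_target: 4 GiB
    mapped = 99926016            # heap bytes currently mapped (~95 MiB)
    cache  = 2845415832          # aggregate cache size the tuner settled on
    shards = {"kv": 1207959552, "kv_onode": 234881024,
              "meta": 1140850688, "data": 218103808}

    gib = lambda b: b / 2**30
    print(f"target {gib(target):.2f} GiB, mapped {gib(mapped):.3f} GiB, "
          f"cache {gib(cache):.2f} GiB")
    for name, alloc in shards.items():
        print(f"  {name:8s} {gib(alloc):5.2f} GiB ({alloc / cache:6.1%} of cache)")

The *_used values (a few hundred bytes to ~1.3 MiB against multi-hundred-MiB allocations) confirm the OSD is nearly idle apart from the stuck ops; memory pressure is not the problem here.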
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -733> 2025-11-24T21:13:14.676+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5014) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:13.677434+0000 osd.1 (osd.1) 5014 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:45.252496+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5015 sent 5014 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:14.676951+0000 osd.1 (osd.1) 5015 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -722> 2025-11-24T21:13:15.693+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5015) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:14.676951+0000 osd.1 (osd.1) 5015 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:46.252780+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5016 sent 5015 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:15.693929+0000 osd.1 (osd.1) 5016 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -711> 2025-11-24T21:13:16.709+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5016) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:15.693929+0000 osd.1 (osd.1) 5016 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
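The periodic heartbeat line packs the OSD's view of its store into store_statfs(...). Reading the leading triple as available / internally reserved / total (my guess at the field order; the hex values are copied verbatim) gives a ~20 GiB device holding only ~21 MiB of object data. The trailing op hist vector appears to be an operation histogram, and it is suggestive that its only nonzero buckets sum to 2 + 19 = 21, matching the slow-op count. A hedged decode:

    # Field order (available / internally reserved / total) is my guess;
    # the hex values are copied verbatim from the heartbeat line.
    statfs = {
        "available":      0x4fa082000,
        "reserved":       0x0,
        "total":          0x4ffc00000,
        "data_stored":    0x14d26e2,
        "data_allocated": 0x15eb000,
    }
    for field, val in statfs.items():
        print(f"{field:14s} {val:14d} B  = {val / 2**30:7.3f} GiB")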
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:47.253047+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5017 sent 5016 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:16.709873+0000 osd.1 (osd.1) 5017 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -699> 2025-11-24T21:13:17.722+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5017) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:16.709873+0000 osd.1 (osd.1) 5017 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:48.253287+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5018 sent 5017 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:17.722816+0000 osd.1 (osd.1) 5018 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -688> 2025-11-24T21:13:18.673+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5018) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:17.722816+0000 osd.1 (osd.1) 5018 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:49.253516+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5019 sent 5018 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:18.674079+0000 osd.1 (osd.1) 5019 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
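The log_client lines expose the cluster-log handshake itself: last_log is the highest sequence number queued locally, sent is the highest one transmitted to the monitor, and each handle_log_ack trims the queue. Throughout this section the OSD stays exactly one entry ahead (backlog of 1), which is the expected steady state for a once-per-second warning. A small consistency check over those fields (the counter names follow the log text verbatim; the backlog arithmetic is my reading of them):

    import re

    # Hedged check: backlog = last_log - sent should equal the unsent count.
    Q_RE = re.compile(
        r"log_queue is (\d+) last_log (\d+) sent (\d+) num (\d+) unsent (\d+)")

    sample = ("log_client  log_queue is 1 last_log 5019 sent 5018 "
              "num 1 unsent 1 sending 1")
    queued, last_log, sent, num, unsent = map(int, Q_RE.search(sample).groups())
    backlog = last_log - sent
    assert backlog == unsent == 1          # steady state: one WRN in flight
    print(f"{backlog} cluster-log entry queued but not yet acked")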
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -674> 2025-11-24T21:13:19.674+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5019) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:18.674079+0000 osd.1 (osd.1) 5019 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:50.253792+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5020 sent 5019 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:19.674881+0000 osd.1 (osd.1) 5020 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -663> 2025-11-24T21:13:20.688+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5020) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:19.674881+0000 osd.1 (osd.1) 5020 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:51.254029+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5021 sent 5020 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:20.688908+0000 osd.1 (osd.1) 5021 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -652> 2025-11-24T21:13:21.685+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5021) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:20.688908+0000 osd.1 (osd.1) 5021 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:52.254311+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5022 sent 5021 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:21.686140+0000 osd.1 (osd.1) 5022 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -640> 2025-11-24T21:13:22.706+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5022) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:21.686140+0000 osd.1 (osd.1) 5022 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:53.254567+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5023 sent 5022 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:22.707122+0000 osd.1 (osd.1) 5023 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -628> 2025-11-24T21:13:23.730+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5023) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:22.707122+0000 osd.1 (osd.1) 5023 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:54.254862+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5024 sent 5023 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:23.731029+0000 osd.1 (osd.1) 5024 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -614> 2025-11-24T21:13:24.745+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5024) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:23.731029+0000 osd.1 (osd.1) 5024 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:55.255119+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5025 sent 5024 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:24.745857+0000 osd.1 (osd.1) 5025 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -603> 2025-11-24T21:13:25.728+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5025) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:24.745857+0000 osd.1 (osd.1) 5025 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:56.255329+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5026 sent 5025 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:25.729025+0000 osd.1 (osd.1) 5026 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -592> 2025-11-24T21:13:26.764+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:57.255533+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 5027 sent 5026 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:26.765217+0000 osd.1 (osd.1) 5027 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5026) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:25.729025+0000 osd.1 (osd.1) 5026 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -580> 2025-11-24T21:13:27.720+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:58.255864+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 5028 sent 5027 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:27.721311+0000 osd.1 (osd.1) 5028 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5027) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:26.765217+0000 osd.1 (osd.1) 5027 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5028) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:27.721311+0000 osd.1 (osd.1) 5028 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
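One wrinkle in the cadence: at 21:13:26 and 21:13:27 the queue momentarily reports "log_queue is 2 ... num 2 unsent 1", and the acks for 5027 and 5028 then arrive back-to-back above, i.e. the monitor fell one extra entry behind and caught up within a second. Under my reading of the counters (num = entries still unacked, unsent = queued but not yet transmitted), the snapshot decomposes as:

    # Snapshot from the 21:13:26 cycle, under my (hedged) reading of the counters.
    last_log, sent, num = 5027, 5026, 2
    last_acked = last_log - num            # -> 5025
    in_flight  = sent - last_acked         # -> 1 (entry 5026, ack pending)
    unsent     = last_log - sent           # -> 1 (entry 5027, about to go)
    print(last_acked, in_flight, unsent)

That matches "num 2 unsent 1" on the line itself, so the anomaly is a one-entry monitor lag, not a send failure.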
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -567> 2025-11-24T21:13:28.680+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:59.256082+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5029 sent 5028 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:28.680818+0000 osd.1 (osd.1) 5029 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5029) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:28.680818+0000 osd.1 (osd.1) 5029 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -552> 2025-11-24T21:13:29.727+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:00.256289+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5030 sent 5029 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:29.728346+0000 osd.1 (osd.1) 5030 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5030) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:29.728346+0000 osd.1 (osd.1) 5030 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -540> 2025-11-24T21:13:30.692+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:01.256489+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5031 sent 5030 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:30.693631+0000 osd.1 (osd.1) 5031 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5031) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:30.693631+0000 osd.1 (osd.1) 5031 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -529> 2025-11-24T21:13:31.732+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:02.256667+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5032 sent 5031 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:31.732839+0000 osd.1 (osd.1) 5032 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5032) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:31.732839+0000 osd.1 (osd.1) 5032 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -517> 2025-11-24T21:13:32.777+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:03.256871+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5033 sent 5032 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:32.778088+0000 osd.1 (osd.1) 5033 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5033) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:32.778088+0000 osd.1 (osd.1) 5033 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -506> 2025-11-24T21:13:33.821+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:04.257062+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5034 sent 5033 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:33.822142+0000 osd.1 (osd.1) 5034 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5034) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:33.822142+0000 osd.1 (osd.1) 5034 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -491> 2025-11-24T21:13:34.829+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:05.257337+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5035 sent 5034 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:34.829896+0000 osd.1 (osd.1) 5035 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5035) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:34.829896+0000 osd.1 (osd.1) 5035 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -479> 2025-11-24T21:13:35.824+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:06.258827+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5036 sent 5035 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:35.825556+0000 osd.1 (osd.1) 5036 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5036) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:35.825556+0000 osd.1 (osd.1) 5036 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -468> 2025-11-24T21:13:36.860+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:07.259062+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5037 sent 5036 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:36.862346+0000 osd.1 (osd.1) 5037 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5037) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:36.862346+0000 osd.1 (osd.1) 5037 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -457> 2025-11-24T21:13:37.870+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:08.259262+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5038 sent 5037 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:37.871791+0000 osd.1 (osd.1) 5038 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5038) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:37.871791+0000 osd.1 (osd.1) 5038 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -446> 2025-11-24T21:13:38.854+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:09.259467+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5039 sent 5038 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:38.855269+0000 osd.1 (osd.1) 5039 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5039) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:38.855269+0000 osd.1 (osd.1) 5039 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -432> 2025-11-24T21:13:39.890+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:10.259668+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5040 sent 5039 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:39.891691+0000 osd.1 (osd.1) 5040 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5040) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:39.891691+0000 osd.1 (osd.1) 5040 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -420> 2025-11-24T21:13:40.869+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:11.259836+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5041 sent 5040 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:40.870443+0000 osd.1 (osd.1) 5041 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5041) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:40.870443+0000 osd.1 (osd.1) 5041 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -409> 2025-11-24T21:13:41.852+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:12.260104+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5042 sent 5041 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:41.853517+0000 osd.1 (osd.1) 5042 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -400> 2025-11-24T21:13:42.899+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5042) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:41.853517+0000 osd.1 (osd.1) 5042 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:13.260342+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5043 sent 5042 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:42.900514+0000 osd.1 (osd.1) 5043 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -389> 2025-11-24T21:13:43.933+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5043) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:42.900514+0000 osd.1 (osd.1) 5043 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:14.260536+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5044 sent 5043 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:43.935077+0000 osd.1 (osd.1) 5044 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -374> 2025-11-24T21:13:44.945+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5044) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:43.935077+0000 osd.1 (osd.1) 5044 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:15.260948+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5045 sent 5044 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:44.946245+0000 osd.1 (osd.1) 5045 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -363> 2025-11-24T21:13:45.935+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:16.261159+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 5046 sent 5045 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:45.936282+0000 osd.1 (osd.1) 5046 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5045) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:44.946245+0000 osd.1 (osd.1) 5045 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -351> 2025-11-24T21:13:46.931+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:17.261789+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 5047 sent 5046 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:46.932767+0000 osd.1 (osd.1) 5047 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5046) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:45.936282+0000 osd.1 (osd.1) 5046 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5047) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:46.932767+0000 osd.1 (osd.1) 5047 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -338> 2025-11-24T21:13:47.953+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:18.262039+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5048 sent 5047 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:47.955220+0000 osd.1 (osd.1) 5048 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5048) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:47.955220+0000 osd.1 (osd.1) 5048 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -327> 2025-11-24T21:13:48.954+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:19.262268+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5049 sent 5048 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:48.956911+0000 osd.1 (osd.1) 5049 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5049) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:48.956911+0000 osd.1 (osd.1) 5049 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -312> 2025-11-24T21:13:49.950+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:20.262455+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5050 sent 5049 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:49.953614+0000 osd.1 (osd.1) 5050 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5050) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:49.953614+0000 osd.1 (osd.1) 5050 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -301> 2025-11-24T21:13:50.957+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:21.262648+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5051 sent 5050 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:50.959546+0000 osd.1 (osd.1) 5051 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5051) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:50.959546+0000 osd.1 (osd.1) 5051 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -290> 2025-11-24T21:13:51.920+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:22.262864+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5052 sent 5051 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:51.922902+0000 osd.1 (osd.1) 5052 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5052) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:51.922902+0000 osd.1 (osd.1) 5052 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -279> 2025-11-24T21:13:52.912+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:23.263772+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5053 sent 5052 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:52.913095+0000 osd.1 (osd.1) 5053 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5053) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:52.913095+0000 osd.1 (osd.1) 5053 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -268> 2025-11-24T21:13:53.961+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:24.263976+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5054 sent 5053 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:53.962809+0000 osd.1 (osd.1) 5054 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5054) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:53.962809+0000 osd.1 (osd.1) 5054 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -254> 2025-11-24T21:13:54.922+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:25.264177+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5055 sent 5054 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:54.923863+0000 osd.1 (osd.1) 5055 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5055) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:54.923863+0000 osd.1 (osd.1) 5055 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -242> 2025-11-24T21:13:55.929+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:26.264364+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5056 sent 5055 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:55.930144+0000 osd.1 (osd.1) 5056 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5056) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:55.930144+0000 osd.1 (osd.1) 5056 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -231> 2025-11-24T21:13:56.919+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:27.265559+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5057 sent 5056 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:56.920319+0000 osd.1 (osd.1) 5057 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5057) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:56.920319+0000 osd.1 (osd.1) 5057 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -220> 2025-11-24T21:13:57.954+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:28.265866+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5058 sent 5057 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:57.955469+0000 osd.1 (osd.1) 5058 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5058) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:57.955469+0000 osd.1 (osd.1) 5058 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -209> 2025-11-24T21:13:58.977+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:29.266076+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5059 sent 5058 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:58.977983+0000 osd.1 (osd.1) 5059 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5059) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:58.977983+0000 osd.1 (osd.1) 5059 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -194> 2025-11-24T21:13:59.928+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:30.266299+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5060 sent 5059 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:13:59.928960+0000 osd.1 (osd.1) 5060 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5060) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:13:59.928960+0000 osd.1 (osd.1) 5060 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -182> 2025-11-24T21:14:00.938+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:31.266575+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5061 sent 5060 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:00.939236+0000 osd.1 (osd.1) 5061 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -173> 2025-11-24T21:14:01.898+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5061) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:00.939236+0000 osd.1 (osd.1) 5061 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:32.266909+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5062 sent 5061 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:01.899795+0000 osd.1 (osd.1) 5062 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -161> 2025-11-24T21:14:02.899+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5062) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:01.899795+0000 osd.1 (osd.1) 5062 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:33.267147+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5063 sent 5062 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:02.900485+0000 osd.1 (osd.1) 5063 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -150> 2025-11-24T21:14:03.892+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5063) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:02.900485+0000 osd.1 (osd.1) 5063 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:34.267414+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5064 sent 5063 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:03.893806+0000 osd.1 (osd.1) 5064 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -136> 2025-11-24T21:14:04.900+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5064) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:03.893806+0000 osd.1 (osd.1) 5064 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:35.267649+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5065 sent 5064 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:04.902267+0000 osd.1 (osd.1) 5065 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -124> 2025-11-24T21:14:05.921+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5065) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:04.902267+0000 osd.1 (osd.1) 5065 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:36.267844+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5066 sent 5065 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:05.922033+0000 osd.1 (osd.1) 5066 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -113> 2025-11-24T21:14:06.968+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5066) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:05.922033+0000 osd.1 (osd.1) 5066 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:37.268015+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5067 sent 5066 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:06.969508+0000 osd.1 (osd.1) 5067 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:   -102> 2025-11-24T21:14:07.997+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5067) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:06.969508+0000 osd.1 (osd.1) 5067 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:38.268176+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5068 sent 5067 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:07.998336+0000 osd.1 (osd.1) 5068 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -91> 2025-11-24T21:14:08.961+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5068) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:07.998336+0000 osd.1 (osd.1) 5068 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:39.268325+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5069 sent 5068 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:08.962450+0000 osd.1 (osd.1) 5069 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:15 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:15 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -76> 2025-11-24T21:14:09.981+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5069) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:08.962450+0000 osd.1 (osd.1) 5069 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:40.268489+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5070 sent 5069 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:09.982411+0000 osd.1 (osd.1) 5070 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -65> 2025-11-24T21:14:11.030+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5070) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:09.982411+0000 osd.1 (osd.1) 5070 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:41.268640+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 1 last_log 5071 sent 5070 num 1 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:11.030698+0000 osd.1 (osd.1) 5071 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 53338112 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -54> 2025-11-24T21:14:12.004+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:42.268885+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 5072 sent 5071 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:12.005010+0000 osd.1 (osd.1) 5072 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 100057088 unmapped: 53207040 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5071) v1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:11.030698+0000 osd.1 (osd.1) 5071 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'config diff' '{prefix=config diff}'
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 21:14:15 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -41> 2025-11-24T21:14:12.997+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'config show' '{prefix=config show}'
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:43.269064+0000)
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  log_queue is 2 last_log 5073 sent 5072 num 2 unsent 1 sending 1
Nov 24 21:14:15 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:12.998036+0000 osd.1 (osd.1) 5073 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:15 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:15 compute-0 ceph-osd[89640]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 21:14:16 compute-0 ceph-osd[89640]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 21:14:16 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 53288960 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:16 compute-0 ceph-osd[89640]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0x14d26e2/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,19])
Nov 24 21:14:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -25> 2025-11-24T21:14:14.032+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:44.269227+0000)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client  log_queue is 3 last_log 5074 sent 5073 num 3 unsent 1 sending 1
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:14.033059+0000 osd.1 (osd.1) 5074 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:16 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 53477376 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:16 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:16 compute-0 ceph-osd[89640]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:16 compute-0 ceph-osd[89640]: bluestore.MempoolThread(0x55ba3944bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381208 data_alloc: 218103808 data_used: 528384
Nov 24 21:14:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]:    -13> 2025-11-24T21:14:15.055+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: tick
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: _check_auth_tickets
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:45.269405+0000)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client  log_queue is 4 last_log 5075 sent 5074 num 4 unsent 1 sending 1
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client  will send 2025-11-24T21:14:15.056984+0000 osd.1 (osd.1) 5075 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5072) v1
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:12.005010+0000 osd.1 (osd.1) 5072 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client handle_log_ack log(last 5073) v1
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_client  logged 2025-11-24T21:14:12.998036+0000 osd.1 (osd.1) 5073 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 53559296 heap: 153264128 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:16 compute-0 ceph-osd[89640]: do_command 'log dump' '{prefix=log dump}'
Nov 24 21:14:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:16.029+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:16 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:16 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:16 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:16 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:16.076+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:16 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 rsyslogd[1003]: imjournal from <np0005534003:ceph-osd>: begin to drop messages due to rate-limiting
Nov 24 21:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 24 21:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839332390' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 24 21:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3771320544' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 21:14:16 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 21:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 21:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142076393' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 21:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142076393' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: pgmap v2793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:16 compute-0 ceph-mon[75677]: from='client.15389 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:16 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 ceph-mon[75677]: pgmap v2794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3272774697' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3669587596' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/4065149123' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2273099073' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:16 compute-0 podman[327884]: 2025-11-24 21:14:16.868773624 +0000 UTC m=+0.101303899 container health_status 8b2e0eff366c1e82bb9c9d8e59e6b82a04a55a2f9eb1370b97c8c746822ce4e2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 21:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 24 21:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123645676' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 21:14:16 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 24 21:14:16 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605184251' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:17.064+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:17 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:17 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:17 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:17 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:17 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:17.126+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 24 21:14:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949622804' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 24 21:14:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1102070859' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/839332390' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3771320544' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2142076393' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.10:0/2142076393' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1123645676' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2605184251' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:17 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1949622804' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1102070859' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 24 21:14:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207780311' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 21:14:17 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 24 21:14:17 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565506704' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:18.040+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:18 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:18 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:18 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:18.153+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:18 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:18 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 24 21:14:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908193300' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4977 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:14:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 24 21:14:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715666384' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 21:14:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806770680' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15427 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: pgmap v2795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/207780311' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2565506704' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:18 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1908193300' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4977 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3715666384' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3806770680' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 21:14:18 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 24 21:14:18 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750457755' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 21:14:19 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15431 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:19.069+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:19 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:19 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:19 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:19.120+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:19 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:19 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:19 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:19 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15433 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:19 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:19 compute-0 ceph-mon[75677]: from='client.15427 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:19 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1750457755' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 21:14:19 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:19 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:19 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15438 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:19 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:20.105+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:20 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:20 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:20.113+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15441 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9999> 2025-11-24T21:00:21.643+0000 7f2ca3ee7640 -1 osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 191 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4217) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:20.596974+0000 osd.0 (osd.0) 4217 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416004 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 191 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x2194201/0x22aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,0,3,1,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:52.008257+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4218 sent 4217 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:21.644462+0000 osd.0 (osd.0) 4218 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _renew_subs
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x2194201/0x22aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,0,3,1,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9980> 2025-11-24T21:00:22.614+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4218) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:21.644462+0000 osd.0 (osd.0) 4218 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:53.008485+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4219 sent 4218 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:22.615005+0000 osd.0 (osd.0) 4219 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9969> 2025-11-24T21:00:23.599+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4219) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:22.615005+0000 osd.0 (osd.0) 4219 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:54.008673+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4220 sent 4219 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:23.600531+0000 osd.0 (osd.0) 4220 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9958> 2025-11-24T21:00:24.591+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4220) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:23.600531+0000 osd.0 (osd.0) 4220 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:55.008871+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4221 sent 4220 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:24.592277+0000 osd.0 (osd.0) 4221 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,3,1,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9946> 2025-11-24T21:00:25.599+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4221) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:24.592277+0000 osd.0 (osd.0) 4221 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:56.009100+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4222 sent 4221 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:25.600357+0000 osd.0 (osd.0) 4222 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9935> 2025-11-24T21:00:26.636+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4222) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:25.600357+0000 osd.0 (osd.0) 4222 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:57.009321+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4223 sent 4222 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:26.636797+0000 osd.0 (osd.0) 4223 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9921> 2025-11-24T21:00:27.603+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4223) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:26.636797+0000 osd.0 (osd.0) 4223 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:58.009645+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4224 sent 4223 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:27.605037+0000 osd.0 (osd.0) 4224 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,3,1,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9909> 2025-11-24T21:00:28.603+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4224) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:27.605037+0000 osd.0 (osd.0) 4224 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T20:59:59.009923+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4225 sent 4224 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:28.604430+0000 osd.0 (osd.0) 4225 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,3,1,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9897> 2025-11-24T21:00:29.600+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4225) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:28.604430+0000 osd.0 (osd.0) 4225 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:00.010186+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4226 sent 4225 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:29.601237+0000 osd.0 (osd.0) 4226 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9886> 2025-11-24T21:00:30.650+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4226) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:29.601237+0000 osd.0 (osd.0) 4226 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:01.010425+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4227 sent 4226 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:30.651202+0000 osd.0 (osd.0) 4227 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9875> 2025-11-24T21:00:31.636+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4227) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:30.651202+0000 osd.0 (osd.0) 4227 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:02.010678+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4228 sent 4227 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:31.637967+0000 osd.0 (osd.0) 4228 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9861> 2025-11-24T21:00:32.624+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4228) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:31.637967+0000 osd.0 (osd.0) 4228 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:03.010935+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4229 sent 4228 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:32.626055+0000 osd.0 (osd.0) 4229 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9850> 2025-11-24T21:00:33.628+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4229) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:32.626055+0000 osd.0 (osd.0) 4229 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:04.011227+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4230 sent 4229 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:33.629315+0000 osd.0 (osd.0) 4230 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9839> 2025-11-24T21:00:34.670+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4230) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:33.629315+0000 osd.0 (osd.0) 4230 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:05.011470+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4231 sent 4230 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:34.671430+0000 osd.0 (osd.0) 4231 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,3,1,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9827> 2025-11-24T21:00:35.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4231) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:34.671430+0000 osd.0 (osd.0) 4231 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:06.011786+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4232 sent 4231 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:35.715096+0000 osd.0 (osd.0) 4232 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9816> 2025-11-24T21:00:36.722+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4232) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:35.715096+0000 osd.0 (osd.0) 4232 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:07.012061+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4233 sent 4232 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:36.724114+0000 osd.0 (osd.0) 4233 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9801> 2025-11-24T21:00:37.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4233) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:36.724114+0000 osd.0 (osd.0) 4233 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:08.012286+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4234 sent 4233 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:37.714676+0000 osd.0 (osd.0) 4234 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9790> 2025-11-24T21:00:38.683+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4234) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:37.714676+0000 osd.0 (osd.0) 4234 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:09.012507+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4235 sent 4234 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:38.685222+0000 osd.0 (osd.0) 4235 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9778> 2025-11-24T21:00:39.724+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4235) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:38.685222+0000 osd.0 (osd.0) 4235 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:10.012823+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4236 sent 4235 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:39.725358+0000 osd.0 (osd.0) 4236 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9767> 2025-11-24T21:00:40.683+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4236) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:39.725358+0000 osd.0 (osd.0) 4236 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:11.013050+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4237 sent 4236 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:40.685042+0000 osd.0 (osd.0) 4237 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9756> 2025-11-24T21:00:41.659+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4237) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:40.685042+0000 osd.0 (osd.0) 4237 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:12.013332+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4238 sent 4237 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:41.661292+0000 osd.0 (osd.0) 4238 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9741> 2025-11-24T21:00:42.678+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:13.013575+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4239 sent 4238 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:42.678256+0000 osd.0 (osd.0) 4239 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4238) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:41.661292+0000 osd.0 (osd.0) 4238 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9730> 2025-11-24T21:00:43.714+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:14.013843+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4240 sent 4239 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:43.714811+0000 osd.0 (osd.0) 4240 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4239) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:42.678256+0000 osd.0 (osd.0) 4239 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4240) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:43.714811+0000 osd.0 (osd.0) 4240 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9717> 2025-11-24T21:00:44.735+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:15.014050+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4241 sent 4240 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:44.736398+0000 osd.0 (osd.0) 4241 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4241) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:44.736398+0000 osd.0 (osd.0) 4241 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9705> 2025-11-24T21:00:45.727+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:16.014250+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4242 sent 4241 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:45.728380+0000 osd.0 (osd.0) 4242 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4242) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:45.728380+0000 osd.0 (osd.0) 4242 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9694> 2025-11-24T21:00:46.736+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:17.014464+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4243 sent 4242 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:46.737147+0000 osd.0 (osd.0) 4243 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4243) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:46.737147+0000 osd.0 (osd.0) 4243 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9680> 2025-11-24T21:00:47.699+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:18.014718+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4244 sent 4243 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:47.699795+0000 osd.0 (osd.0) 4244 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4244) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:47.699795+0000 osd.0 (osd.0) 4244 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9668> 2025-11-24T21:00:48.724+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:19.014982+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4245 sent 4244 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:48.725425+0000 osd.0 (osd.0) 4245 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4245) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:48.725425+0000 osd.0 (osd.0) 4245 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9657> 2025-11-24T21:00:49.753+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:20.015305+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4246 sent 4245 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:49.754397+0000 osd.0 (osd.0) 4246 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4246) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:49.754397+0000 osd.0 (osd.0) 4246 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9646> 2025-11-24T21:00:50.777+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:21.015533+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4247 sent 4246 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:50.777820+0000 osd.0 (osd.0) 4247 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4247) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:50.777820+0000 osd.0 (osd.0) 4247 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9635> 2025-11-24T21:00:51.786+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:22.015884+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4248 sent 4247 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:51.787486+0000 osd.0 (osd.0) 4248 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4248) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:51.787486+0000 osd.0 (osd.0) 4248 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9621> 2025-11-24T21:00:52.751+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:23.016109+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4249 sent 4248 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:52.752338+0000 osd.0 (osd.0) 4249 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4249) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:52.752338+0000 osd.0 (osd.0) 4249 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9609> 2025-11-24T21:00:53.796+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:24.016319+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4250 sent 4249 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:53.797414+0000 osd.0 (osd.0) 4250 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4250) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:53.797414+0000 osd.0 (osd.0) 4250 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9598> 2025-11-24T21:00:54.800+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:25.016534+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4251 sent 4250 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:54.801463+0000 osd.0 (osd.0) 4251 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4251) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:54.801463+0000 osd.0 (osd.0) 4251 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9586> 2025-11-24T21:00:55.762+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:26.016860+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4252 sent 4251 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:55.762725+0000 osd.0 (osd.0) 4252 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4252) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:55.762725+0000 osd.0 (osd.0) 4252 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9574> 2025-11-24T21:00:56.790+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:27.017167+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4253 sent 4252 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:56.791541+0000 osd.0 (osd.0) 4253 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4253) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:56.791541+0000 osd.0 (osd.0) 4253 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9560> 2025-11-24T21:00:57.835+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:28.017396+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4254 sent 4253 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:57.836577+0000 osd.0 (osd.0) 4254 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4254) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:57.836577+0000 osd.0 (osd.0) 4254 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9549> 2025-11-24T21:00:58.814+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:29.017703+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4255 sent 4254 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:58.814831+0000 osd.0 (osd.0) 4255 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4255) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:58.814831+0000 osd.0 (osd.0) 4255 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9538> 2025-11-24T21:00:59.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:30.018001+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4256 sent 4255 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:00:59.771841+0000 osd.0 (osd.0) 4256 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4256) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:00:59.771841+0000 osd.0 (osd.0) 4256 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9527> 2025-11-24T21:01:00.748+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:31.018190+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4257 sent 4256 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:00.749248+0000 osd.0 (osd.0) 4257 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4257) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:00.749248+0000 osd.0 (osd.0) 4257 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9515> 2025-11-24T21:01:01.756+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:32.018466+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4258 sent 4257 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:01.757115+0000 osd.0 (osd.0) 4258 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4258) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:01.757115+0000 osd.0 (osd.0) 4258 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9501> 2025-11-24T21:01:02.735+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:33.018747+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4259 sent 4258 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:02.736321+0000 osd.0 (osd.0) 4259 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4259) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:02.736321+0000 osd.0 (osd.0) 4259 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9489> 2025-11-24T21:01:03.751+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:34.018942+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4260 sent 4259 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:03.752628+0000 osd.0 (osd.0) 4260 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4260) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:03.752628+0000 osd.0 (osd.0) 4260 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9478> 2025-11-24T21:01:04.799+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:35.019173+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4261 sent 4260 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:04.800732+0000 osd.0 (osd.0) 4261 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4261) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:04.800732+0000 osd.0 (osd.0) 4261 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9467> 2025-11-24T21:01:05.769+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:36.019395+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4262 sent 4261 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:05.771129+0000 osd.0 (osd.0) 4262 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4262) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:05.771129+0000 osd.0 (osd.0) 4262 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9456> 2025-11-24T21:01:06.783+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:37.019653+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4263 sent 4262 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:06.784964+0000 osd.0 (osd.0) 4263 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4263) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:06.784964+0000 osd.0 (osd.0) 4263 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9442> 2025-11-24T21:01:07.824+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:38.020639+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4264 sent 4263 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:07.826179+0000 osd.0 (osd.0) 4264 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4264) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:07.826179+0000 osd.0 (osd.0) 4264 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9430> 2025-11-24T21:01:08.850+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:39.021253+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4265 sent 4264 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:08.851804+0000 osd.0 (osd.0) 4265 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4265) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:08.851804+0000 osd.0 (osd.0) 4265 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9418> 2025-11-24T21:01:09.805+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:40.021497+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4266 sent 4265 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:09.807381+0000 osd.0 (osd.0) 4266 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4266) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:09.807381+0000 osd.0 (osd.0) 4266 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9407> 2025-11-24T21:01:10.780+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:41.021723+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4267 sent 4266 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:10.782049+0000 osd.0 (osd.0) 4267 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4267) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:10.782049+0000 osd.0 (osd.0) 4267 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9396> 2025-11-24T21:01:11.787+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:42.021953+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4268 sent 4267 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:11.789425+0000 osd.0 (osd.0) 4268 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4268) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:11.789425+0000 osd.0 (osd.0) 4268 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9381> 2025-11-24T21:01:12.761+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:43.022213+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4269 sent 4268 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:12.763083+0000 osd.0 (osd.0) 4269 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4269) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:12.763083+0000 osd.0 (osd.0) 4269 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9370> 2025-11-24T21:01:13.763+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:44.022471+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4270 sent 4269 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:13.765177+0000 osd.0 (osd.0) 4270 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4270) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:13.765177+0000 osd.0 (osd.0) 4270 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,9])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9358> 2025-11-24T21:01:14.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:45.023529+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4271 sent 4270 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:14.772905+0000 osd.0 (osd.0) 4271 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4271) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:14.772905+0000 osd.0 (osd.0) 4271 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9347> 2025-11-24T21:01:15.813+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:46.023727+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4272 sent 4271 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:15.814962+0000 osd.0 (osd.0) 4272 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4272) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:15.814962+0000 osd.0 (osd.0) 4272 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9336> 2025-11-24T21:01:16.834+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:47.023917+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4273 sent 4272 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:16.835988+0000 osd.0 (osd.0) 4273 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4273) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:16.835988+0000 osd.0 (osd.0) 4273 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9322> 2025-11-24T21:01:17.823+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:48.024166+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4274 sent 4273 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:17.825418+0000 osd.0 (osd.0) 4274 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4274) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:17.825418+0000 osd.0 (osd.0) 4274 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9311> 2025-11-24T21:01:18.785+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:49.024367+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4275 sent 4274 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:18.787550+0000 osd.0 (osd.0) 4275 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4275) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:18.787550+0000 osd.0 (osd.0) 4275 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9299> 2025-11-24T21:01:19.805+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:50.024665+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4276 sent 4275 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:19.806568+0000 osd.0 (osd.0) 4276 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4276) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:19.806568+0000 osd.0 (osd.0) 4276 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9288> 2025-11-24T21:01:20.826+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:51.024888+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4277 sent 4276 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:20.826942+0000 osd.0 (osd.0) 4277 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4277) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:20.826942+0000 osd.0 (osd.0) 4277 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9277> 2025-11-24T21:01:21.862+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:52.025090+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4278 sent 4277 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:21.862893+0000 osd.0 (osd.0) 4278 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4278) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:21.862893+0000 osd.0 (osd.0) 4278 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9263> 2025-11-24T21:01:22.879+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:53.025323+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4279 sent 4278 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:22.879964+0000 osd.0 (osd.0) 4279 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4279) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:22.879964+0000 osd.0 (osd.0) 4279 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9250> 2025-11-24T21:01:23.927+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:54.025530+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4280 sent 4279 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:23.927965+0000 osd.0 (osd.0) 4280 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4280) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:23.927965+0000 osd.0 (osd.0) 4280 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9239> 2025-11-24T21:01:24.936+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:55.025774+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4281 sent 4280 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:24.937467+0000 osd.0 (osd.0) 4281 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4281) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:24.937467+0000 osd.0 (osd.0) 4281 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9227> 2025-11-24T21:01:25.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:56.026018+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4282 sent 4281 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:25.963679+0000 osd.0 (osd.0) 4282 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4282) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:25.963679+0000 osd.0 (osd.0) 4282 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9216> 2025-11-24T21:01:26.937+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:57.026223+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4283 sent 4282 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:26.938415+0000 osd.0 (osd.0) 4283 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4283) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:26.938415+0000 osd.0 (osd.0) 4283 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9202> 2025-11-24T21:01:27.955+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:58.026429+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4284 sent 4283 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:27.956541+0000 osd.0 (osd.0) 4284 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4284) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:27.956541+0000 osd.0 (osd.0) 4284 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9191> 2025-11-24T21:01:28.913+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:00:59.026908+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4285 sent 4284 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:28.914223+0000 osd.0 (osd.0) 4285 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4285) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:28.914223+0000 osd.0 (osd.0) 4285 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9180> 2025-11-24T21:01:29.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:00.027500+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4286 sent 4285 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:29.944217+0000 osd.0 (osd.0) 4286 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4286) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:29.944217+0000 osd.0 (osd.0) 4286 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,2,4,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9168> 2025-11-24T21:01:30.968+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:01.028086+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4287 sent 4286 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:30.968691+0000 osd.0 (osd.0) 4287 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4287) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:30.968691+0000 osd.0 (osd.0) 4287 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9157> 2025-11-24T21:01:31.945+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:02.028470+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4288 sent 4287 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:31.946367+0000 osd.0 (osd.0) 4288 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4288) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:31.946367+0000 osd.0 (osd.0) 4288 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9143> 2025-11-24T21:01:32.942+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:03.029093+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4289 sent 4288 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:32.943388+0000 osd.0 (osd.0) 4289 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4289) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:32.943388+0000 osd.0 (osd.0) 4289 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9132> 2025-11-24T21:01:33.923+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:04.029696+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4290 sent 4289 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:33.924030+0000 osd.0 (osd.0) 4290 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4290) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:33.924030+0000 osd.0 (osd.0) 4290 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9121> 2025-11-24T21:01:34.948+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:05.029945+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4291 sent 4290 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:34.948940+0000 osd.0 (osd.0) 4291 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4291) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:34.948940+0000 osd.0 (osd.0) 4291 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9110> 2025-11-24T21:01:35.998+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:06.030220+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4292 sent 4291 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:35.998929+0000 osd.0 (osd.0) 4292 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4292) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:35.998929+0000 osd.0 (osd.0) 4292 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9098> 2025-11-24T21:01:36.955+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:07.030435+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4293 sent 4292 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:36.955787+0000 osd.0 (osd.0) 4293 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4293) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:36.955787+0000 osd.0 (osd.0) 4293 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9084> 2025-11-24T21:01:37.990+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:08.030775+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4294 sent 4293 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:37.991113+0000 osd.0 (osd.0) 4294 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4294) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:37.991113+0000 osd.0 (osd.0) 4294 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9073> 2025-11-24T21:01:38.951+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:09.031032+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4295 sent 4294 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:38.952402+0000 osd.0 (osd.0) 4295 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4295) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:38.952402+0000 osd.0 (osd.0) 4295 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9061> 2025-11-24T21:01:39.957+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:10.031630+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4296 sent 4295 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:39.958294+0000 osd.0 (osd.0) 4296 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4296) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:39.958294+0000 osd.0 (osd.0) 4296 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9050> 2025-11-24T21:01:40.974+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:11.031790+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4297 sent 4296 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:40.974875+0000 osd.0 (osd.0) 4297 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4297) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:40.974875+0000 osd.0 (osd.0) 4297 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9039> 2025-11-24T21:01:41.977+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:12.031957+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4298 sent 4297 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:41.977778+0000 osd.0 (osd.0) 4298 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4298) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:41.977778+0000 osd.0 (osd.0) 4298 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9024> 2025-11-24T21:01:43.017+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:13.032164+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4299 sent 4298 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:43.018890+0000 osd.0 (osd.0) 4299 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4299) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:43.018890+0000 osd.0 (osd.0) 4299 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9013> 2025-11-24T21:01:43.976+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:14.032449+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4300 sent 4299 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:43.977532+0000 osd.0 (osd.0) 4300 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4300) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:43.977532+0000 osd.0 (osd.0) 4300 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -9001> 2025-11-24T21:01:44.934+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:15.032697+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4301 sent 4300 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:44.936032+0000 osd.0 (osd.0) 4301 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4301) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:44.936032+0000 osd.0 (osd.0) 4301 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8990> 2025-11-24T21:01:45.915+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:16.032845+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4302 sent 4301 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:45.916919+0000 osd.0 (osd.0) 4302 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4302) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:45.916919+0000 osd.0 (osd.0) 4302 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8979> 2025-11-24T21:01:46.925+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:17.033088+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4303 sent 4302 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:46.926780+0000 osd.0 (osd.0) 4303 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4303) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:46.926780+0000 osd.0 (osd.0) 4303 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8965> 2025-11-24T21:01:47.942+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:18.033288+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4304 sent 4303 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:47.944357+0000 osd.0 (osd.0) 4304 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4304) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:47.944357+0000 osd.0 (osd.0) 4304 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8954> 2025-11-24T21:01:48.912+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:19.033675+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4305 sent 4304 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:48.913627+0000 osd.0 (osd.0) 4305 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4305) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:48.913627+0000 osd.0 (osd.0) 4305 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8942> 2025-11-24T21:01:49.950+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:20.033963+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4306 sent 4305 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:49.952194+0000 osd.0 (osd.0) 4306 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4306) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:49.952194+0000 osd.0 (osd.0) 4306 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8931> 2025-11-24T21:01:50.946+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:21.034153+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4307 sent 4306 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:50.947747+0000 osd.0 (osd.0) 4307 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4307) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:50.947747+0000 osd.0 (osd.0) 4307 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8920> 2025-11-24T21:01:51.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:22.034380+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4308 sent 4307 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:51.945168+0000 osd.0 (osd.0) 4308 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4308) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:51.945168+0000 osd.0 (osd.0) 4308 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8905> 2025-11-24T21:01:52.982+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:23.034555+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4309 sent 4308 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:52.983980+0000 osd.0 (osd.0) 4309 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4309) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:52.983980+0000 osd.0 (osd.0) 4309 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8893> 2025-11-24T21:01:53.974+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:24.034844+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4310 sent 4309 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:53.976010+0000 osd.0 (osd.0) 4310 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4310) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:53.976010+0000 osd.0 (osd.0) 4310 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8882> 2025-11-24T21:01:54.988+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:25.035074+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4311 sent 4310 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:54.989522+0000 osd.0 (osd.0) 4311 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4311) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:54.989522+0000 osd.0 (osd.0) 4311 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8871> 2025-11-24T21:01:55.978+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:26.035311+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4312 sent 4311 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:55.979965+0000 osd.0 (osd.0) 4312 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4312) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:55.979965+0000 osd.0 (osd.0) 4312 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8860> 2025-11-24T21:01:57.026+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:27.035544+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4313 sent 4312 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:57.027751+0000 osd.0 (osd.0) 4313 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4313) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:57.027751+0000 osd.0 (osd.0) 4313 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8846> 2025-11-24T21:01:58.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:28.035771+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4314 sent 4313 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:58.031743+0000 osd.0 (osd.0) 4314 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4314) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:58.031743+0000 osd.0 (osd.0) 4314 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:29.035992+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8832> 2025-11-24T21:01:59.071+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,1,3,4,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:30.036229+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4315 sent 4314 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:01:59.071728+0000 osd.0 (osd.0) 4315 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8822> 2025-11-24T21:02:00.093+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4315) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:01:59.071728+0000 osd.0 (osd.0) 4315 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:31.036540+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4316 sent 4315 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:00.093896+0000 osd.0 (osd.0) 4316 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8811> 2025-11-24T21:02:01.113+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4316) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:00.093896+0000 osd.0 (osd.0) 4316 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:32.036838+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4317 sent 4316 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:01.114038+0000 osd.0 (osd.0) 4317 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8800> 2025-11-24T21:02:02.145+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4317) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:01.114038+0000 osd.0 (osd.0) 4317 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:33.037052+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4318 sent 4317 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:02.146431+0000 osd.0 (osd.0) 4318 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8786> 2025-11-24T21:02:03.109+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4318) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:02.146431+0000 osd.0 (osd.0) 4318 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:34.037260+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4319 sent 4318 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:03.110111+0000 osd.0 (osd.0) 4319 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8775> 2025-11-24T21:02:04.066+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4319) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:03.110111+0000 osd.0 (osd.0) 4319 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:35.037528+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4320 sent 4319 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:04.066916+0000 osd.0 (osd.0) 4320 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8764> 2025-11-24T21:02:05.047+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4320) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:04.066916+0000 osd.0 (osd.0) 4320 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:36.037908+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4321 sent 4320 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:05.047890+0000 osd.0 (osd.0) 4321 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8752> 2025-11-24T21:02:06.041+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4321) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:05.047890+0000 osd.0 (osd.0) 4321 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8747> 2025-11-24T21:02:07.002+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:37.038193+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4323 sent 4321 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:06.042284+0000 osd.0 (osd.0) 4322 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:07.003307+0000 osd.0 (osd.0) 4323 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4323) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:06.042284+0000 osd.0 (osd.0) 4322 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:07.003307+0000 osd.0 (osd.0) 4323 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8731> 2025-11-24T21:02:08.032+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:38.038369+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4324 sent 4323 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:08.033830+0000 osd.0 (osd.0) 4324 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4324) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:08.033830+0000 osd.0 (osd.0) 4324 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:39.038566+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8717> 2025-11-24T21:02:09.041+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8713> 2025-11-24T21:02:10.003+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:40.038852+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4326 sent 4324 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:09.041645+0000 osd.0 (osd.0) 4325 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:10.004188+0000 osd.0 (osd.0) 4326 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4326) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:09.041645+0000 osd.0 (osd.0) 4325 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:10.004188+0000 osd.0 (osd.0) 4326 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:41.039070+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8697> 2025-11-24T21:02:11.047+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:42.039247+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4327 sent 4326 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:11.048572+0000 osd.0 (osd.0) 4327 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8688> 2025-11-24T21:02:12.073+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4327) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:11.048572+0000 osd.0 (osd.0) 4327 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8680> 2025-11-24T21:02:13.027+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:43.039429+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4329 sent 4327 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:12.073775+0000 osd.0 (osd.0) 4328 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:13.028125+0000 osd.0 (osd.0) 4329 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4329) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:12.073775+0000 osd.0 (osd.0) 4328 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:13.028125+0000 osd.0 (osd.0) 4329 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8667> 2025-11-24T21:02:14.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:44.039684+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4330 sent 4329 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:14.032034+0000 osd.0 (osd.0) 4330 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4330) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:14.032034+0000 osd.0 (osd.0) 4330 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:45.040059+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8652> 2025-11-24T21:02:15.063+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8649> 2025-11-24T21:02:16.023+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:46.040221+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4332 sent 4330 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:15.064305+0000 osd.0 (osd.0) 4331 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:16.024305+0000 osd.0 (osd.0) 4332 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4332) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:15.064305+0000 osd.0 (osd.0) 4331 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:16.024305+0000 osd.0 (osd.0) 4332 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8634> 2025-11-24T21:02:17.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:47.040414+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4333 sent 4332 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:17.032579+0000 osd.0 (osd.0) 4333 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4333) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:17.032579+0000 osd.0 (osd.0) 4333 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8620> 2025-11-24T21:02:18.018+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:48.040614+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4334 sent 4333 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:18.018886+0000 osd.0 (osd.0) 4334 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4334) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:18.018886+0000 osd.0 (osd.0) 4334 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8609> 2025-11-24T21:02:19.028+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:49.040802+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4335 sent 4334 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:19.029149+0000 osd.0 (osd.0) 4335 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4335) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:19.029149+0000 osd.0 (osd.0) 4335 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:50.041011+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8595> 2025-11-24T21:02:20.059+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8592> 2025-11-24T21:02:21.025+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:51.041128+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4337 sent 4335 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:20.059788+0000 osd.0 (osd.0) 4336 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:21.026814+0000 osd.0 (osd.0) 4337 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8582> 2025-11-24T21:02:21.982+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:52.041323+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 3 last_log 4338 sent 4337 num 3 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:21.984633+0000 osd.0 (osd.0) 4338 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4337) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:20.059788+0000 osd.0 (osd.0) 4336 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:21.026814+0000 osd.0 (osd.0) 4337 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4338) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:21.984633+0000 osd.0 (osd.0) 4338 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8564> 2025-11-24T21:02:22.958+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:53.041523+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4339 sent 4338 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:22.959740+0000 osd.0 (osd.0) 4339 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4339) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:22.959740+0000 osd.0 (osd.0) 4339 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8553> 2025-11-24T21:02:23.911+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:54.041747+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4340 sent 4339 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:23.913001+0000 osd.0 (osd.0) 4340 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4340) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:23.913001+0000 osd.0 (osd.0) 4340 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8542> 2025-11-24T21:02:24.925+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:55.042000+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4341 sent 4340 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:24.926130+0000 osd.0 (osd.0) 4341 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4341) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:24.926130+0000 osd.0 (osd.0) 4341 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8531> 2025-11-24T21:02:25.953+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:56.042183+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4342 sent 4341 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:25.955150+0000 osd.0 (osd.0) 4342 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4342) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:25.955150+0000 osd.0 (osd.0) 4342 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8519> 2025-11-24T21:02:26.977+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:57.042426+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4343 sent 4342 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:26.977932+0000 osd.0 (osd.0) 4343 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4343) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:26.977932+0000 osd.0 (osd.0) 4343 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8505> 2025-11-24T21:02:27.983+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:58.042673+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4344 sent 4343 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:27.984825+0000 osd.0 (osd.0) 4344 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4344) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:27.984825+0000 osd.0 (osd.0) 4344 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8493> 2025-11-24T21:02:28.995+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:01:59.042878+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4345 sent 4344 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:28.996472+0000 osd.0 (osd.0) 4345 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4345) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:28.996472+0000 osd.0 (osd.0) 4345 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8481> 2025-11-24T21:02:30.020+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:00.043070+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4346 sent 4345 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:30.022539+0000 osd.0 (osd.0) 4346 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4346) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:30.022539+0000 osd.0 (osd.0) 4346 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8470> 2025-11-24T21:02:31.040+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:01.043264+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4347 sent 4346 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:31.041246+0000 osd.0 (osd.0) 4347 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4347) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:31.041246+0000 osd.0 (osd.0) 4347 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:02.043510+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8455> 2025-11-24T21:02:32.076+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:03.043685+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4348 sent 4347 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:32.077903+0000 osd.0 (osd.0) 4348 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8443> 2025-11-24T21:02:33.080+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4348) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:32.077903+0000 osd.0 (osd.0) 4348 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8438> 2025-11-24T21:02:34.030+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:04.043974+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4350 sent 4348 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:33.081695+0000 osd.0 (osd.0) 4349 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:34.032374+0000 osd.0 (osd.0) 4350 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4350) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:33.081695+0000 osd.0 (osd.0) 4349 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:34.032374+0000 osd.0 (osd.0) 4350 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8425> 2025-11-24T21:02:35.003+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:05.044219+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4351 sent 4350 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:35.004672+0000 osd.0 (osd.0) 4351 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4351) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:35.004672+0000 osd.0 (osd.0) 4351 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8414> 2025-11-24T21:02:35.986+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:06.044455+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4352 sent 4351 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:35.986433+0000 osd.0 (osd.0) 4352 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4352) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:35.986433+0000 osd.0 (osd.0) 4352 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8402> 2025-11-24T21:02:36.942+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:07.044732+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4353 sent 4352 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:36.943061+0000 osd.0 (osd.0) 4353 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4353) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:36.943061+0000 osd.0 (osd.0) 4353 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8387> 2025-11-24T21:02:37.978+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:08.044968+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4354 sent 4353 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:37.978561+0000 osd.0 (osd.0) 4354 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4354) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:37.978561+0000 osd.0 (osd.0) 4354 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8376> 2025-11-24T21:02:39.007+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:09.045259+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4355 sent 4354 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:39.007933+0000 osd.0 (osd.0) 4355 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4355) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:39.007933+0000 osd.0 (osd.0) 4355 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8365> 2025-11-24T21:02:40.029+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:10.045632+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4356 sent 4355 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:40.030443+0000 osd.0 (osd.0) 4356 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4356) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:40.030443+0000 osd.0 (osd.0) 4356 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8354> 2025-11-24T21:02:40.984+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:11.045860+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4357 sent 4356 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:40.984423+0000 osd.0 (osd.0) 4357 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4357) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:40.984423+0000 osd.0 (osd.0) 4357 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8343> 2025-11-24T21:02:42.024+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:12.046074+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4358 sent 4357 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:42.024840+0000 osd.0 (osd.0) 4358 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4358) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:42.024840+0000 osd.0 (osd.0) 4358 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8328> 2025-11-24T21:02:42.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:13.046258+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4359 sent 4358 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:42.986056+0000 osd.0 (osd.0) 4359 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4359) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:42.986056+0000 osd.0 (osd.0) 4359 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8316> 2025-11-24T21:02:43.997+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:14.046469+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4360 sent 4359 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:43.997747+0000 osd.0 (osd.0) 4360 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4360) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:43.997747+0000 osd.0 (osd.0) 4360 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8305> 2025-11-24T21:02:44.990+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:15.046731+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4361 sent 4360 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:44.991354+0000 osd.0 (osd.0) 4361 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4361) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:44.991354+0000 osd.0 (osd.0) 4361 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8294> 2025-11-24T21:02:46.006+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:16.046963+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4362 sent 4361 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:46.007573+0000 osd.0 (osd.0) 4362 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4362) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:46.007573+0000 osd.0 (osd.0) 4362 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8283> 2025-11-24T21:02:46.968+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:17.047211+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4363 sent 4362 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:46.969966+0000 osd.0 (osd.0) 4363 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4363) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:46.969966+0000 osd.0 (osd.0) 4363 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8268> 2025-11-24T21:02:47.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:18.047432+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4364 sent 4363 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:47.986315+0000 osd.0 (osd.0) 4364 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4364) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:47.986315+0000 osd.0 (osd.0) 4364 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8257> 2025-11-24T21:02:49.023+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:19.047635+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4365 sent 4364 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:49.024982+0000 osd.0 (osd.0) 4365 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4365) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:49.024982+0000 osd.0 (osd.0) 4365 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8246> 2025-11-24T21:02:50.029+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:20.047840+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4366 sent 4365 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:50.030535+0000 osd.0 (osd.0) 4366 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4366) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:50.030535+0000 osd.0 (osd.0) 4366 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8235> 2025-11-24T21:02:51.042+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:21.048026+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4367 sent 4366 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:51.043164+0000 osd.0 (osd.0) 4367 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4367) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:51.043164+0000 osd.0 (osd.0) 4367 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8224> 2025-11-24T21:02:52.046+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:22.048168+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4368 sent 4367 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:52.047235+0000 osd.0 (osd.0) 4368 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4368) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:52.047235+0000 osd.0 (osd.0) 4368 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8209> 2025-11-24T21:02:53.000+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:23.048302+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4369 sent 4368 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:53.001581+0000 osd.0 (osd.0) 4369 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4369) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:53.001581+0000 osd.0 (osd.0) 4369 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8198> 2025-11-24T21:02:54.027+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:24.048551+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4370 sent 4369 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:54.028371+0000 osd.0 (osd.0) 4370 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4370) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:54.028371+0000 osd.0 (osd.0) 4370 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:25.048829+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8184> 2025-11-24T21:02:55.073+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:26.049056+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4371 sent 4370 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:55.074200+0000 osd.0 (osd.0) 4371 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8174> 2025-11-24T21:02:56.098+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4371) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:55.074200+0000 osd.0 (osd.0) 4371 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:27.049375+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4372 sent 4371 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:56.099683+0000 osd.0 (osd.0) 4372 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8163> 2025-11-24T21:02:57.051+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4372) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:56.099683+0000 osd.0 (osd.0) 4372 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:28.049660+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4373 sent 4372 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:57.051726+0000 osd.0 (osd.0) 4373 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8149> 2025-11-24T21:02:58.068+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4373) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:57.051726+0000 osd.0 (osd.0) 4373 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8144> 2025-11-24T21:02:59.048+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:29.049821+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4375 sent 4373 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:58.069118+0000 osd.0 (osd.0) 4374 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:02:59.049443+0000 osd.0 (osd.0) 4375 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4375) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:58.069118+0000 osd.0 (osd.0) 4374 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:02:59.049443+0000 osd.0 (osd.0) 4375 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8131> 2025-11-24T21:03:00.016+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:30.050003+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4376 sent 4375 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:00.017872+0000 osd.0 (osd.0) 4376 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4376) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:00.017872+0000 osd.0 (osd.0) 4376 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:31.050194+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8116> 2025-11-24T21:03:01.053+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8113> 2025-11-24T21:03:02.008+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:32.050359+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4378 sent 4376 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:01.054923+0000 osd.0 (osd.0) 4377 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:02.009900+0000 osd.0 (osd.0) 4378 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4378) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:01.054923+0000 osd.0 (osd.0) 4377 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:02.009900+0000 osd.0 (osd.0) 4378 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8097> 2025-11-24T21:03:03.008+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:33.050669+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4379 sent 4378 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:03.010286+0000 osd.0 (osd.0) 4379 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4379) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:03.010286+0000 osd.0 (osd.0) 4379 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8086> 2025-11-24T21:03:04.021+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:34.050981+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4380 sent 4379 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:04.022697+0000 osd.0 (osd.0) 4380 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4380) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:04.022697+0000 osd.0 (osd.0) 4380 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:35.051282+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8072> 2025-11-24T21:03:05.070+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8069> 2025-11-24T21:03:06.026+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:36.051705+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4382 sent 4380 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:05.072999+0000 osd.0 (osd.0) 4381 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:06.028660+0000 osd.0 (osd.0) 4382 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4382) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:05.072999+0000 osd.0 (osd.0) 4381 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:06.028660+0000 osd.0 (osd.0) 4382 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8055> 2025-11-24T21:03:06.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:37.052088+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4383 sent 4382 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:06.987486+0000 osd.0 (osd.0) 4383 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4383) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:06.987486+0000 osd.0 (osd.0) 4383 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8041> 2025-11-24T21:03:07.937+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:38.052377+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4384 sent 4383 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:07.938567+0000 osd.0 (osd.0) 4384 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4384) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:07.938567+0000 osd.0 (osd.0) 4384 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8030> 2025-11-24T21:03:08.936+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:39.052669+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4385 sent 4384 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:08.937560+0000 osd.0 (osd.0) 4385 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,5,8,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4385) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:08.937560+0000 osd.0 (osd.0) 4385 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8018> 2025-11-24T21:03:09.951+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:40.052918+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4386 sent 4385 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:09.953034+0000 osd.0 (osd.0) 4386 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4386) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:09.953034+0000 osd.0 (osd.0) 4386 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -8007> 2025-11-24T21:03:10.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:41.053261+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4387 sent 4386 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:10.944493+0000 osd.0 (osd.0) 4387 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4387) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:10.944493+0000 osd.0 (osd.0) 4387 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7996> 2025-11-24T21:03:11.966+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:42.053666+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4388 sent 4387 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:11.968063+0000 osd.0 (osd.0) 4388 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4388) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:11.968063+0000 osd.0 (osd.0) 4388 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7982> 2025-11-24T21:03:12.991+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:43.053952+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4389 sent 4388 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:12.992513+0000 osd.0 (osd.0) 4389 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4389) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:12.992513+0000 osd.0 (osd.0) 4389 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7971> 2025-11-24T21:03:13.966+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:44.054234+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4390 sent 4389 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:13.966716+0000 osd.0 (osd.0) 4390 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4390) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:13.966716+0000 osd.0 (osd.0) 4390 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7959> 2025-11-24T21:03:14.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:45.054505+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4391 sent 4390 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:14.963006+0000 osd.0 (osd.0) 4391 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4391) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:14.963006+0000 osd.0 (osd.0) 4391 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7947> 2025-11-24T21:03:16.003+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:46.054803+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4392 sent 4391 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:16.004408+0000 osd.0 (osd.0) 4392 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4392) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:16.004408+0000 osd.0 (osd.0) 4392 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7936> 2025-11-24T21:03:16.981+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:47.055047+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4393 sent 4392 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:16.981507+0000 osd.0 (osd.0) 4393 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4393) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:16.981507+0000 osd.0 (osd.0) 4393 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7922> 2025-11-24T21:03:17.989+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:48.056117+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4394 sent 4393 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:17.989995+0000 osd.0 (osd.0) 4394 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4394) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:17.989995+0000 osd.0 (osd.0) 4394 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7911> 2025-11-24T21:03:18.949+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:49.056552+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4395 sent 4394 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:18.950277+0000 osd.0 (osd.0) 4395 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4395) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:18.950277+0000 osd.0 (osd.0) 4395 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7899> 2025-11-24T21:03:19.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:50.057653+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4396 sent 4395 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:19.962902+0000 osd.0 (osd.0) 4396 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4396) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:19.962902+0000 osd.0 (osd.0) 4396 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7888> 2025-11-24T21:03:20.960+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:51.058451+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4397 sent 4396 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:20.960695+0000 osd.0 (osd.0) 4397 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4397) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:20.960695+0000 osd.0 (osd.0) 4397 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7877> 2025-11-24T21:03:21.998+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:52.059144+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4398 sent 4397 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:21.999334+0000 osd.0 (osd.0) 4398 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7864> 2025-11-24T21:03:23.009+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:53.059432+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4399 sent 4398 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:23.009997+0000 osd.0 (osd.0) 4399 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4398) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:21.999334+0000 osd.0 (osd.0) 4398 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4399) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:23.009997+0000 osd.0 (osd.0) 4399 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7851> 2025-11-24T21:03:24.018+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:54.060273+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4400 sent 4399 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:24.018793+0000 osd.0 (osd.0) 4400 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4400) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:24.018793+0000 osd.0 (osd.0) 4400 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7840> 2025-11-24T21:03:24.995+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:55.060531+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4401 sent 4400 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:24.996168+0000 osd.0 (osd.0) 4401 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4401) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:24.996168+0000 osd.0 (osd.0) 4401 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7829> 2025-11-24T21:03:25.985+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:56.060833+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4402 sent 4401 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:25.985976+0000 osd.0 (osd.0) 4402 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4402) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:25.985976+0000 osd.0 (osd.0) 4402 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7817> 2025-11-24T21:03:26.983+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:57.061916+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4403 sent 4402 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:26.984143+0000 osd.0 (osd.0) 4403 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4403) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:26.984143+0000 osd.0 (osd.0) 4403 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7803> 2025-11-24T21:03:28.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:58.062191+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4404 sent 4403 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:28.031797+0000 osd.0 (osd.0) 4404 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4404) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:28.031797+0000 osd.0 (osd.0) 4404 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: mgrc ms_handle_reset ms_handle_reset con 0x560fd0e3c400
Nov 24 21:14:20 compute-0 ceph-osd[88624]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/103018990
Nov 24 21:14:20 compute-0 ceph-osd[88624]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/103018990,v1:192.168.122.100:6801/103018990]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: get_auth_request con 0x560fd4318400 auth_method 0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: mgrc handle_mgr_configure stats_period=5
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7787> 2025-11-24T21:03:29.058+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:02:59.062420+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4405 sent 4404 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:29.059307+0000 osd.0 (osd.0) 4405 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4405) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:29.059307+0000 osd.0 (osd.0) 4405 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:00.063281+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7773> 2025-11-24T21:03:30.092+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:01.063467+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4406 sent 4405 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:30.092925+0000 osd.0 (osd.0) 4406 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7762> 2025-11-24T21:03:31.088+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4406) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:30.092925+0000 osd.0 (osd.0) 4406 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7757> 2025-11-24T21:03:32.054+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:02.063912+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4408 sent 4406 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:31.089051+0000 osd.0 (osd.0) 4407 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:32.055174+0000 osd.0 (osd.0) 4408 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4408) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:31.089051+0000 osd.0 (osd.0) 4407 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:32.055174+0000 osd.0 (osd.0) 4408 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7741> 2025-11-24T21:03:33.014+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:03.064379+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4409 sent 4408 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:33.015016+0000 osd.0 (osd.0) 4409 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4409) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:33.015016+0000 osd.0 (osd.0) 4409 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7730> 2025-11-24T21:03:34.050+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:04.064605+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4410 sent 4409 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:34.051352+0000 osd.0 (osd.0) 4410 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4410) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:34.051352+0000 osd.0 (osd.0) 4410 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7719> 2025-11-24T21:03:35.016+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:05.064838+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4411 sent 4410 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:35.017454+0000 osd.0 (osd.0) 4411 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4411) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:35.017454+0000 osd.0 (osd.0) 4411 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7708> 2025-11-24T21:03:35.999+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:06.065260+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4412 sent 4411 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:36.000854+0000 osd.0 (osd.0) 4412 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4412) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:36.000854+0000 osd.0 (osd.0) 4412 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7697> 2025-11-24T21:03:36.962+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:07.065549+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4413 sent 4412 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:36.963830+0000 osd.0 (osd.0) 4413 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7685> 2025-11-24T21:03:37.936+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4413) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:36.963830+0000 osd.0 (osd.0) 4413 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:08.065914+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4414 sent 4413 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:37.937832+0000 osd.0 (osd.0) 4414 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7674> 2025-11-24T21:03:38.925+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4414) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:37.937832+0000 osd.0 (osd.0) 4414 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:09.066192+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4415 sent 4414 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:38.926653+0000 osd.0 (osd.0) 4415 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 ms_handle_reset con 0x560fd34c0c00 session 0x560fd132c960
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd0e3d400
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7661> 2025-11-24T21:03:39.890+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4415) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:38.926653+0000 osd.0 (osd.0) 4415 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:10.066658+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4416 sent 4415 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:39.891668+0000 osd.0 (osd.0) 4416 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7650> 2025-11-24T21:03:40.894+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4416) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:39.891668+0000 osd.0 (osd.0) 4416 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:11.066936+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4417 sent 4416 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:40.896150+0000 osd.0 (osd.0) 4417 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7638> 2025-11-24T21:03:41.875+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4417) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:40.896150+0000 osd.0 (osd.0) 4417 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:12.067249+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4418 sent 4417 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:41.876868+0000 osd.0 (osd.0) 4418 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7623> 2025-11-24T21:03:42.839+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4418) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:41.876868+0000 osd.0 (osd.0) 4418 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:13.067548+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4419 sent 4418 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:42.841070+0000 osd.0 (osd.0) 4419 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7612> 2025-11-24T21:03:43.878+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4419) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:42.841070+0000 osd.0 (osd.0) 4419 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:14.067858+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4420 sent 4419 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:43.879383+0000 osd.0 (osd.0) 4420 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7601> 2025-11-24T21:03:44.870+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4420) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:43.879383+0000 osd.0 (osd.0) 4420 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:15.068186+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4421 sent 4420 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:44.872382+0000 osd.0 (osd.0) 4421 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7589> 2025-11-24T21:03:45.914+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4421) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:44.872382+0000 osd.0 (osd.0) 4421 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:16.068430+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4422 sent 4421 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:45.916396+0000 osd.0 (osd.0) 4422 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7578> 2025-11-24T21:03:46.914+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:17.068659+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4423 sent 4422 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:46.915929+0000 osd.0 (osd.0) 4423 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4422) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:45.916396+0000 osd.0 (osd.0) 4422 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7563> 2025-11-24T21:03:47.867+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:18.068928+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4424 sent 4423 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:47.869134+0000 osd.0 (osd.0) 4424 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4423) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:46.915929+0000 osd.0 (osd.0) 4423 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4424) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:47.869134+0000 osd.0 (osd.0) 4424 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7550> 2025-11-24T21:03:48.826+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:19.069210+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4425 sent 4424 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:48.827936+0000 osd.0 (osd.0) 4425 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4425) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:48.827936+0000 osd.0 (osd.0) 4425 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7539> 2025-11-24T21:03:49.788+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:20.069462+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4426 sent 4425 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:49.790316+0000 osd.0 (osd.0) 4426 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4426) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:49.790316+0000 osd.0 (osd.0) 4426 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7528> 2025-11-24T21:03:50.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 36691968 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:21.069690+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4427 sent 4426 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:50.772052+0000 osd.0 (osd.0) 4427 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4427) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:50.772052+0000 osd.0 (osd.0) 4427 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7516> 2025-11-24T21:03:51.808+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:22.069938+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4428 sent 4427 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:51.808881+0000 osd.0 (osd.0) 4428 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4428) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:51.808881+0000 osd.0 (osd.0) 4428 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7501> 2025-11-24T21:03:52.818+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:23.070117+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4429 sent 4428 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:52.818798+0000 osd.0 (osd.0) 4429 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4429) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:52.818798+0000 osd.0 (osd.0) 4429 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7490> 2025-11-24T21:03:53.857+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:24.070315+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4430 sent 4429 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:53.857861+0000 osd.0 (osd.0) 4430 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4430) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:53.857861+0000 osd.0 (osd.0) 4430 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7479> 2025-11-24T21:03:54.881+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:25.070535+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4431 sent 4430 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:54.881831+0000 osd.0 (osd.0) 4431 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4431) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:54.881831+0000 osd.0 (osd.0) 4431 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7468> 2025-11-24T21:03:55.868+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:26.070781+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4432 sent 4431 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:55.869258+0000 osd.0 (osd.0) 4432 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4432) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:55.869258+0000 osd.0 (osd.0) 4432 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7457> 2025-11-24T21:03:56.868+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:27.071033+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4433 sent 4432 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:56.869023+0000 osd.0 (osd.0) 4433 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4433) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:56.869023+0000 osd.0 (osd.0) 4433 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7443> 2025-11-24T21:03:57.886+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:28.071250+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4434 sent 4433 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:57.886877+0000 osd.0 (osd.0) 4434 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4434) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:57.886877+0000 osd.0 (osd.0) 4434 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7431> 2025-11-24T21:03:58.857+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:29.071475+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4435 sent 4434 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:58.858012+0000 osd.0 (osd.0) 4435 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4435) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:58.858012+0000 osd.0 (osd.0) 4435 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7419> 2025-11-24T21:03:59.831+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:30.071757+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4436 sent 4435 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:03:59.832016+0000 osd.0 (osd.0) 4436 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4436) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:03:59.832016+0000 osd.0 (osd.0) 4436 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7408> 2025-11-24T21:04:00.814+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:31.071996+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4437 sent 4436 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:00.814651+0000 osd.0 (osd.0) 4437 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4437) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:00.814651+0000 osd.0 (osd.0) 4437 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7397> 2025-11-24T21:04:01.820+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:32.072281+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4438 sent 4437 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:01.821062+0000 osd.0 (osd.0) 4438 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4438) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:01.821062+0000 osd.0 (osd.0) 4438 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7383> 2025-11-24T21:04:02.823+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:33.072685+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4439 sent 4438 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:02.824001+0000 osd.0 (osd.0) 4439 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4439) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:02.824001+0000 osd.0 (osd.0) 4439 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7372> 2025-11-24T21:04:03.832+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:34.072951+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4440 sent 4439 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:03.832931+0000 osd.0 (osd.0) 4440 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4440) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:03.832931+0000 osd.0 (osd.0) 4440 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7360> 2025-11-24T21:04:04.803+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:35.073273+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4441 sent 4440 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:04.804394+0000 osd.0 (osd.0) 4441 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4441) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:04.804394+0000 osd.0 (osd.0) 4441 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7349> 2025-11-24T21:04:05.775+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:36.073533+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4442 sent 4441 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:05.776528+0000 osd.0 (osd.0) 4442 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4442) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:05.776528+0000 osd.0 (osd.0) 4442 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7337> 2025-11-24T21:04:06.730+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:37.073761+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4443 sent 4442 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:06.731348+0000 osd.0 (osd.0) 4443 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4443) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:06.731348+0000 osd.0 (osd.0) 4443 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7323> 2025-11-24T21:04:07.777+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:38.073981+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4444 sent 4443 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:07.778267+0000 osd.0 (osd.0) 4444 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4444) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:07.778267+0000 osd.0 (osd.0) 4444 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7312> 2025-11-24T21:04:08.748+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:39.074213+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4445 sent 4444 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:08.749140+0000 osd.0 (osd.0) 4445 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4445) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:08.749140+0000 osd.0 (osd.0) 4445 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7301> 2025-11-24T21:04:09.796+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:40.074500+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4446 sent 4445 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:09.796802+0000 osd.0 (osd.0) 4446 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4446) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:09.796802+0000 osd.0 (osd.0) 4446 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7289> 2025-11-24T21:04:10.764+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:41.074725+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4447 sent 4446 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:10.765377+0000 osd.0 (osd.0) 4447 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4447) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:10.765377+0000 osd.0 (osd.0) 4447 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7278> 2025-11-24T21:04:11.771+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:42.074993+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4448 sent 4447 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:11.771764+0000 osd.0 (osd.0) 4448 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4448) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:11.771764+0000 osd.0 (osd.0) 4448 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7263> 2025-11-24T21:04:12.755+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:43.075258+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4449 sent 4448 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:12.756935+0000 osd.0 (osd.0) 4449 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4449) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:12.756935+0000 osd.0 (osd.0) 4449 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7252> 2025-11-24T21:04:13.738+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:44.075522+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4450 sent 4449 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:13.740244+0000 osd.0 (osd.0) 4450 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4450) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:13.740244+0000 osd.0 (osd.0) 4450 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7241> 2025-11-24T21:04:14.733+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:45.075832+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4451 sent 4450 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:14.735740+0000 osd.0 (osd.0) 4451 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4451) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:14.735740+0000 osd.0 (osd.0) 4451 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7229> 2025-11-24T21:04:15.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:46.076081+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4452 sent 4451 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:15.695477+0000 osd.0 (osd.0) 4452 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4452) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:15.695477+0000 osd.0 (osd.0) 4452 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7217> 2025-11-24T21:04:16.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:47.076306+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4453 sent 4452 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:16.695204+0000 osd.0 (osd.0) 4453 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4453) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:16.695204+0000 osd.0 (osd.0) 4453 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7202> 2025-11-24T21:04:17.674+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:48.076574+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4454 sent 4453 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:17.675454+0000 osd.0 (osd.0) 4454 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4454) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:17.675454+0000 osd.0 (osd.0) 4454 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7191> 2025-11-24T21:04:18.682+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:49.076904+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4455 sent 4454 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:18.683318+0000 osd.0 (osd.0) 4455 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4455) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:18.683318+0000 osd.0 (osd.0) 4455 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7180> 2025-11-24T21:04:19.730+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:50.077191+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4456 sent 4455 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:19.731747+0000 osd.0 (osd.0) 4456 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4456) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:19.731747+0000 osd.0 (osd.0) 4456 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7169> 2025-11-24T21:04:20.697+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:51.077476+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4457 sent 4456 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:20.698742+0000 osd.0 (osd.0) 4457 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4457) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:20.698742+0000 osd.0 (osd.0) 4457 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7157> 2025-11-24T21:04:21.696+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:52.077729+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4458 sent 4457 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:21.698197+0000 osd.0 (osd.0) 4458 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4458) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:21.698197+0000 osd.0 (osd.0) 4458 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7143> 2025-11-24T21:04:22.703+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:53.077923+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4459 sent 4458 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:22.705014+0000 osd.0 (osd.0) 4459 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4459) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:22.705014+0000 osd.0 (osd.0) 4459 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7132> 2025-11-24T21:04:23.753+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:54.078146+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4460 sent 4459 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:23.755022+0000 osd.0 (osd.0) 4460 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4460) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:23.755022+0000 osd.0 (osd.0) 4460 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7121> 2025-11-24T21:04:24.706+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:55.078390+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4461 sent 4460 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:24.708434+0000 osd.0 (osd.0) 4461 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4461) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:24.708434+0000 osd.0 (osd.0) 4461 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7110> 2025-11-24T21:04:25.670+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 ms_handle_reset con 0x560fd3851800 session 0x560fd151b680
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd2104400
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:56.078548+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4462 sent 4461 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:25.671714+0000 osd.0 (osd.0) 4462 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4462) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:25.671714+0000 osd.0 (osd.0) 4462 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7096> 2025-11-24T21:04:26.643+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:57.078751+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4463 sent 4462 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:26.644434+0000 osd.0 (osd.0) 4463 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4463) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:26.644434+0000 osd.0 (osd.0) 4463 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7084> 2025-11-24T21:04:27.619+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:58.078968+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4464 sent 4463 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:27.620372+0000 osd.0 (osd.0) 4464 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4464) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:27.620372+0000 osd.0 (osd.0) 4464 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7070> 2025-11-24T21:04:28.625+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:03:59.079175+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4465 sent 4464 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:28.625417+0000 osd.0 (osd.0) 4465 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4465) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:28.625417+0000 osd.0 (osd.0) 4465 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7059> 2025-11-24T21:04:29.660+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:00.079421+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4466 sent 4465 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:29.660956+0000 osd.0 (osd.0) 4466 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4466) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:29.660956+0000 osd.0 (osd.0) 4466 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7048> 2025-11-24T21:04:30.649+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:01.079708+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4467 sent 4466 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:30.650000+0000 osd.0 (osd.0) 4467 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4467) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:30.650000+0000 osd.0 (osd.0) 4467 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7036> 2025-11-24T21:04:31.627+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,2,11,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:02.079945+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4468 sent 4467 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:31.627810+0000 osd.0 (osd.0) 4468 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4468) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:31.627810+0000 osd.0 (osd.0) 4468 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7024> 2025-11-24T21:04:32.587+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:03.080154+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4469 sent 4468 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:32.587882+0000 osd.0 (osd.0) 4469 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -7012> 2025-11-24T21:04:33.545+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4469) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:32.587882+0000 osd.0 (osd.0) 4469 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:04.080411+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4470 sent 4469 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:33.545956+0000 osd.0 (osd.0) 4470 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4470) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:33.545956+0000 osd.0 (osd.0) 4470 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6999> 2025-11-24T21:04:34.579+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:05.080654+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4471 sent 4470 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:34.580181+0000 osd.0 (osd.0) 4471 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6990> 2025-11-24T21:04:35.533+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4471) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:34.580181+0000 osd.0 (osd.0) 4471 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:06.080884+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4472 sent 4471 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:35.534501+0000 osd.0 (osd.0) 4472 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6978> 2025-11-24T21:04:36.525+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4472) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:35.534501+0000 osd.0 (osd.0) 4472 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:07.081124+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4473 sent 4472 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:36.526808+0000 osd.0 (osd.0) 4473 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6967> 2025-11-24T21:04:37.537+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4473) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:36.526808+0000 osd.0 (osd.0) 4473 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:08.081387+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4474 sent 4473 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:37.538122+0000 osd.0 (osd.0) 4474 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6953> 2025-11-24T21:04:38.554+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4474) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:37.538122+0000 osd.0 (osd.0) 4474 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:09.081664+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4475 sent 4474 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:38.554727+0000 osd.0 (osd.0) 4475 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6942> 2025-11-24T21:04:39.578+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4475) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:38.554727+0000 osd.0 (osd.0) 4475 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:10.081871+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4476 sent 4475 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:39.579364+0000 osd.0 (osd.0) 4476 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6931> 2025-11-24T21:04:40.558+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4476) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:39.579364+0000 osd.0 (osd.0) 4476 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:11.082169+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4477 sent 4476 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:40.558685+0000 osd.0 (osd.0) 4477 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6919> 2025-11-24T21:04:41.543+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4477) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:40.558685+0000 osd.0 (osd.0) 4477 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:12.082413+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4478 sent 4477 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:41.543849+0000 osd.0 (osd.0) 4478 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6907> 2025-11-24T21:04:42.503+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4478) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:41.543849+0000 osd.0 (osd.0) 4478 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:13.082692+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4479 sent 4478 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:42.504635+0000 osd.0 (osd.0) 4479 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6893> 2025-11-24T21:04:43.463+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4479) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:42.504635+0000 osd.0 (osd.0) 4479 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:14.082903+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4480 sent 4479 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:43.464098+0000 osd.0 (osd.0) 4480 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6882> 2025-11-24T21:04:44.505+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4480) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:43.464098+0000 osd.0 (osd.0) 4480 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:15.083144+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4481 sent 4480 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:44.506653+0000 osd.0 (osd.0) 4481 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6870> 2025-11-24T21:04:45.540+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4481) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:44.506653+0000 osd.0 (osd.0) 4481 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:16.083439+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4482 sent 4481 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:45.541650+0000 osd.0 (osd.0) 4482 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6859> 2025-11-24T21:04:46.500+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4482) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:45.541650+0000 osd.0 (osd.0) 4482 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:17.083687+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4483 sent 4482 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:46.501415+0000 osd.0 (osd.0) 4483 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6847> 2025-11-24T21:04:47.460+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4483) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:46.501415+0000 osd.0 (osd.0) 4483 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:18.083920+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4484 sent 4483 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:47.461282+0000 osd.0 (osd.0) 4484 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6833> 2025-11-24T21:04:48.432+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4484) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:47.461282+0000 osd.0 (osd.0) 4484 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:19.084147+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4485 sent 4484 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:48.432966+0000 osd.0 (osd.0) 4485 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6822> 2025-11-24T21:04:49.397+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4485) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:48.432966+0000 osd.0 (osd.0) 4485 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:20.084711+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4486 sent 4485 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:49.397764+0000 osd.0 (osd.0) 4486 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6811> 2025-11-24T21:04:50.357+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4486) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:49.397764+0000 osd.0 (osd.0) 4486 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:21.084985+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4487 sent 4486 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:50.358778+0000 osd.0 (osd.0) 4487 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6799> 2025-11-24T21:04:51.310+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4487) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:50.358778+0000 osd.0 (osd.0) 4487 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:22.085238+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4488 sent 4487 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:51.311489+0000 osd.0 (osd.0) 4488 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6788> 2025-11-24T21:04:52.356+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4488) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:51.311489+0000 osd.0 (osd.0) 4488 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:23.085545+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4489 sent 4488 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:52.357515+0000 osd.0 (osd.0) 4489 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6774> 2025-11-24T21:04:53.371+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4489) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:52.357515+0000 osd.0 (osd.0) 4489 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:24.085937+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4490 sent 4489 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:53.372541+0000 osd.0 (osd.0) 4490 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6763> 2025-11-24T21:04:54.413+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4490) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:53.372541+0000 osd.0 (osd.0) 4490 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:25.086198+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4491 sent 4490 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:54.414909+0000 osd.0 (osd.0) 4491 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6752> 2025-11-24T21:04:55.389+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4491) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:54.414909+0000 osd.0 (osd.0) 4491 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:26.086421+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4492 sent 4491 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:55.390308+0000 osd.0 (osd.0) 4492 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6741> 2025-11-24T21:04:56.363+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4492) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:55.390308+0000 osd.0 (osd.0) 4492 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:27.086649+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4493 sent 4492 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:56.365107+0000 osd.0 (osd.0) 4493 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6729> 2025-11-24T21:04:57.376+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4493) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:56.365107+0000 osd.0 (osd.0) 4493 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:28.086934+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4494 sent 4493 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:57.377981+0000 osd.0 (osd.0) 4494 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6715> 2025-11-24T21:04:58.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4494) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:57.377981+0000 osd.0 (osd.0) 4494 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:29.087248+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4495 sent 4494 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:58.355264+0000 osd.0 (osd.0) 4495 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6704> 2025-11-24T21:04:59.392+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4495) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:58.355264+0000 osd.0 (osd.0) 4495 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:30.087525+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4496 sent 4495 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:04:59.393339+0000 osd.0 (osd.0) 4496 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6693> 2025-11-24T21:05:00.342+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4496) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:04:59.393339+0000 osd.0 (osd.0) 4496 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:31.087830+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4497 sent 4496 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:00.344159+0000 osd.0 (osd.0) 4497 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6681> 2025-11-24T21:05:01.348+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4497) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:00.344159+0000 osd.0 (osd.0) 4497 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:32.088077+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4498 sent 4497 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:01.349977+0000 osd.0 (osd.0) 4498 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6670> 2025-11-24T21:05:02.323+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4498) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:01.349977+0000 osd.0 (osd.0) 4498 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:33.088487+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4499 sent 4498 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:02.324557+0000 osd.0 (osd.0) 4499 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6656> 2025-11-24T21:05:03.288+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4499) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:02.324557+0000 osd.0 (osd.0) 4499 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:34.088758+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4500 sent 4499 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:03.289279+0000 osd.0 (osd.0) 4500 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6644> 2025-11-24T21:05:04.327+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4500) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:03.289279+0000 osd.0 (osd.0) 4500 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:35.089050+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4501 sent 4500 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:04.328921+0000 osd.0 (osd.0) 4501 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6633> 2025-11-24T21:05:05.304+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4501) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:04.328921+0000 osd.0 (osd.0) 4501 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:36.089307+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4502 sent 4501 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:05.305873+0000 osd.0 (osd.0) 4502 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6621> 2025-11-24T21:05:06.340+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4502) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:05.305873+0000 osd.0 (osd.0) 4502 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:37.089706+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4503 sent 4502 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:06.340678+0000 osd.0 (osd.0) 4503 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6610> 2025-11-24T21:05:07.304+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4503) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:06.340678+0000 osd.0 (osd.0) 4503 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:38.090006+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4504 sent 4503 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:07.305315+0000 osd.0 (osd.0) 4504 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6597> 2025-11-24T21:05:08.267+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4504) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:07.305315+0000 osd.0 (osd.0) 4504 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:39.090281+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4505 sent 4504 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:08.268125+0000 osd.0 (osd.0) 4505 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6586> 2025-11-24T21:05:09.295+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4505) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:08.268125+0000 osd.0 (osd.0) 4505 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:40.090562+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4506 sent 4505 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:09.295948+0000 osd.0 (osd.0) 4506 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6575> 2025-11-24T21:05:10.306+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4506) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:09.295948+0000 osd.0 (osd.0) 4506 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:41.090891+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4507 sent 4506 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:10.306830+0000 osd.0 (osd.0) 4507 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6563> 2025-11-24T21:05:11.307+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4507) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:10.306830+0000 osd.0 (osd.0) 4507 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:42.091198+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4508 sent 4507 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:11.307687+0000 osd.0 (osd.0) 4508 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6551> 2025-11-24T21:05:12.346+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4508) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:11.307687+0000 osd.0 (osd.0) 4508 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:43.091468+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4509 sent 4508 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:12.347172+0000 osd.0 (osd.0) 4509 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6538> 2025-11-24T21:05:13.306+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4509) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:12.347172+0000 osd.0 (osd.0) 4509 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:44.091661+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4510 sent 4509 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:13.307241+0000 osd.0 (osd.0) 4510 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6526> 2025-11-24T21:05:14.349+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4510) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:13.307241+0000 osd.0 (osd.0) 4510 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:45.091850+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4511 sent 4510 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:14.350197+0000 osd.0 (osd.0) 4511 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6515> 2025-11-24T21:05:15.374+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4511) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:14.350197+0000 osd.0 (osd.0) 4511 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:46.092040+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4512 sent 4511 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:15.375325+0000 osd.0 (osd.0) 4512 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6503> 2025-11-24T21:05:16.372+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:47.092265+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4513 sent 4512 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:16.372963+0000 osd.0 (osd.0) 4513 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4512) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:15.375325+0000 osd.0 (osd.0) 4512 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6493> 2025-11-24T21:05:17.324+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:48.092504+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4514 sent 4513 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:17.325018+0000 osd.0 (osd.0) 4514 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4513) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:16.372963+0000 osd.0 (osd.0) 4513 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4514) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:17.325018+0000 osd.0 (osd.0) 4514 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6477> 2025-11-24T21:05:18.303+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:49.092858+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4515 sent 4514 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:18.303976+0000 osd.0 (osd.0) 4515 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4515) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:18.303976+0000 osd.0 (osd.0) 4515 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6464> 2025-11-24T21:05:19.278+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:50.093115+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4516 sent 4515 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:19.278720+0000 osd.0 (osd.0) 4516 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4516) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:19.278720+0000 osd.0 (osd.0) 4516 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6453> 2025-11-24T21:05:20.275+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:51.093344+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4517 sent 4516 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:20.276555+0000 osd.0 (osd.0) 4517 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4517) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:20.276555+0000 osd.0 (osd.0) 4517 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6442> 2025-11-24T21:05:21.271+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:52.093653+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4518 sent 4517 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:21.272038+0000 osd.0 (osd.0) 4518 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4518) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:21.272038+0000 osd.0 (osd.0) 4518 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6431> 2025-11-24T21:05:22.316+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:53.093978+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4519 sent 4518 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:22.317080+0000 osd.0 (osd.0) 4519 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4519) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:22.317080+0000 osd.0 (osd.0) 4519 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6417> 2025-11-24T21:05:23.280+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:54.094238+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4520 sent 4519 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:23.281427+0000 osd.0 (osd.0) 4520 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4520) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:23.281427+0000 osd.0 (osd.0) 4520 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6405> 2025-11-24T21:05:24.319+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:55.094401+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4521 sent 4520 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:24.320375+0000 osd.0 (osd.0) 4521 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4521) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:24.320375+0000 osd.0 (osd.0) 4521 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6394> 2025-11-24T21:05:25.280+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:56.094779+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4522 sent 4521 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:25.280972+0000 osd.0 (osd.0) 4522 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4522) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:25.280972+0000 osd.0 (osd.0) 4522 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6383> 2025-11-24T21:05:26.301+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:57.095254+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4523 sent 4522 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:26.301820+0000 osd.0 (osd.0) 4523 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4523) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:26.301820+0000 osd.0 (osd.0) 4523 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6372> 2025-11-24T21:05:27.297+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:58.095952+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4524 sent 4523 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:27.297880+0000 osd.0 (osd.0) 4524 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4524) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:27.297880+0000 osd.0 (osd.0) 4524 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6358> 2025-11-24T21:05:28.315+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:04:59.096948+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4525 sent 4524 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:28.316314+0000 osd.0 (osd.0) 4525 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4525) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:28.316314+0000 osd.0 (osd.0) 4525 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6346> 2025-11-24T21:05:29.339+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:00.097515+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4526 sent 4525 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:29.340784+0000 osd.0 (osd.0) 4526 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4526) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:29.340784+0000 osd.0 (osd.0) 4526 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6335> 2025-11-24T21:05:30.314+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:01.098107+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4527 sent 4526 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:30.315038+0000 osd.0 (osd.0) 4527 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4527) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:30.315038+0000 osd.0 (osd.0) 4527 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6324> 2025-11-24T21:05:31.323+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:02.098445+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4528 sent 4527 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:31.324646+0000 osd.0 (osd.0) 4528 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4528) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:31.324646+0000 osd.0 (osd.0) 4528 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6313> 2025-11-24T21:05:32.312+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:03.098879+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4529 sent 4528 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:32.313867+0000 osd.0 (osd.0) 4529 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4529) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:32.313867+0000 osd.0 (osd.0) 4529 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6299> 2025-11-24T21:05:33.280+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:04.099157+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4530 sent 4529 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:33.281580+0000 osd.0 (osd.0) 4530 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4530) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:33.281580+0000 osd.0 (osd.0) 4530 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6287> 2025-11-24T21:05:34.296+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:05.099359+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4531 sent 4530 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:34.298100+0000 osd.0 (osd.0) 4531 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4531) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:34.298100+0000 osd.0 (osd.0) 4531 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6276> 2025-11-24T21:05:35.269+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:06.099816+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4532 sent 4531 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:35.270937+0000 osd.0 (osd.0) 4532 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6267> 2025-11-24T21:05:36.227+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4532) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:35.270937+0000 osd.0 (osd.0) 4532 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:07.100141+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4533 sent 4532 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:36.228393+0000 osd.0 (osd.0) 4533 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6256> 2025-11-24T21:05:37.220+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4533) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:36.228393+0000 osd.0 (osd.0) 4533 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:08.100549+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4534 sent 4533 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:37.221173+0000 osd.0 (osd.0) 4534 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6242> 2025-11-24T21:05:38.172+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4534) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:37.221173+0000 osd.0 (osd.0) 4534 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:09.100970+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4535 sent 4534 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:38.173408+0000 osd.0 (osd.0) 4535 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6231> 2025-11-24T21:05:39.187+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4535) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:38.173408+0000 osd.0 (osd.0) 4535 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:10.101301+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4536 sent 4535 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:39.188799+0000 osd.0 (osd.0) 4536 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6219> 2025-11-24T21:05:40.225+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4536) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:39.188799+0000 osd.0 (osd.0) 4536 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:11.101577+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4537 sent 4536 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:40.226690+0000 osd.0 (osd.0) 4537 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6207> 2025-11-24T21:05:41.243+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4537) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:40.226690+0000 osd.0 (osd.0) 4537 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:12.101869+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4538 sent 4537 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:41.244337+0000 osd.0 (osd.0) 4538 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6194> 2025-11-24T21:05:42.232+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4538) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:41.244337+0000 osd.0 (osd.0) 4538 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:13.102248+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4539 sent 4538 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:42.234317+0000 osd.0 (osd.0) 4539 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6179> 2025-11-24T21:05:43.230+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4539) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:42.234317+0000 osd.0 (osd.0) 4539 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:14.102442+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4540 sent 4539 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:43.230700+0000 osd.0 (osd.0) 4540 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6168> 2025-11-24T21:05:44.187+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4540) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:43.230700+0000 osd.0 (osd.0) 4540 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:15.102635+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4541 sent 4540 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:44.187789+0000 osd.0 (osd.0) 4541 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6157> 2025-11-24T21:05:45.179+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4541) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:44.187789+0000 osd.0 (osd.0) 4541 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:16.102837+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4542 sent 4541 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:45.180525+0000 osd.0 (osd.0) 4542 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6146> 2025-11-24T21:05:46.198+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4542) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:45.180525+0000 osd.0 (osd.0) 4542 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:17.103036+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4543 sent 4542 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:46.198866+0000 osd.0 (osd.0) 4543 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6135> 2025-11-24T21:05:47.159+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4543) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:46.198866+0000 osd.0 (osd.0) 4543 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:18.103188+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4544 sent 4543 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:47.160536+0000 osd.0 (osd.0) 4544 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6121> 2025-11-24T21:05:48.113+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4544) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:47.160536+0000 osd.0 (osd.0) 4544 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:19.103675+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4545 sent 4544 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:48.113781+0000 osd.0 (osd.0) 4545 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6109> 2025-11-24T21:05:49.120+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4545) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:48.113781+0000 osd.0 (osd.0) 4545 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:20.103923+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4546 sent 4545 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:49.121018+0000 osd.0 (osd.0) 4546 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6098> 2025-11-24T21:05:50.123+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4546) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:49.121018+0000 osd.0 (osd.0) 4546 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:21.104113+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4547 sent 4546 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:50.124321+0000 osd.0 (osd.0) 4547 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6087> 2025-11-24T21:05:51.144+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4547) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:50.124321+0000 osd.0 (osd.0) 4547 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6080> 2025-11-24T21:05:52.101+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:22.104262+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4549 sent 4547 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:51.145260+0000 osd.0 (osd.0) 4548 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:52.102140+0000 osd.0 (osd.0) 4549 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4549) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:51.145260+0000 osd.0 (osd.0) 4548 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:52.102140+0000 osd.0 (osd.0) 4549 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:23.105347+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6064> 2025-11-24T21:05:53.104+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:24.105561+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4550 sent 4549 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:53.105572+0000 osd.0 (osd.0) 4550 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6052> 2025-11-24T21:05:54.119+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4550) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:53.105572+0000 osd.0 (osd.0) 4550 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:25.105859+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4551 sent 4550 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:54.119671+0000 osd.0 (osd.0) 4551 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6041> 2025-11-24T21:05:55.115+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4551) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:54.119671+0000 osd.0 (osd.0) 4551 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:26.106130+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4552 sent 4551 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:55.116512+0000 osd.0 (osd.0) 4552 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6029> 2025-11-24T21:05:56.119+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4552) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:55.116512+0000 osd.0 (osd.0) 4552 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:27.106441+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4553 sent 4552 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:56.120114+0000 osd.0 (osd.0) 4553 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6018> 2025-11-24T21:05:57.155+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4553) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:56.120114+0000 osd.0 (osd.0) 4553 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:28.106687+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4554 sent 4553 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:57.155720+0000 osd.0 (osd.0) 4554 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -6003> 2025-11-24T21:05:58.175+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4554) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:57.155720+0000 osd.0 (osd.0) 4554 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:29.107001+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4555 sent 4554 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:58.175922+0000 osd.0 (osd.0) 4555 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5992> 2025-11-24T21:05:59.222+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:30.108031+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4556 sent 4555 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:05:59.223569+0000 osd.0 (osd.0) 4556 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5983> 2025-11-24T21:06:00.273+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:31.109368+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 3 last_log 4557 sent 4556 num 3 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:00.273938+0000 osd.0 (osd.0) 4557 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5974> 2025-11-24T21:06:01.242+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:32.110251+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 4 last_log 4558 sent 4557 num 4 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:01.242879+0000 osd.0 (osd.0) 4558 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5964> 2025-11-24T21:06:02.229+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4555) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:58.175922+0000 osd.0 (osd.0) 4555 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:33.111090+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 4 last_log 4559 sent 4558 num 4 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:02.229905+0000 osd.0 (osd.0) 4559 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5950> 2025-11-24T21:06:03.220+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:34.111341+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 5 last_log 4560 sent 4559 num 5 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:03.220814+0000 osd.0 (osd.0) 4560 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4556) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:05:59.223569+0000 osd.0 (osd.0) 4556 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4557) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:00.273938+0000 osd.0 (osd.0) 4557 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4558) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:01.242879+0000 osd.0 (osd.0) 4558 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4559) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:02.229905+0000 osd.0 (osd.0) 4559 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5933> 2025-11-24T21:06:04.252+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:35.111627+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4561 sent 4560 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:04.253281+0000 osd.0 (osd.0) 4561 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5923> 2025-11-24T21:06:05.213+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4560) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:03.220814+0000 osd.0 (osd.0) 4560 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4561) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:04.253281+0000 osd.0 (osd.0) 4561 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:36.112360+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4562 sent 4561 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:05.213902+0000 osd.0 (osd.0) 4562 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5910> 2025-11-24T21:06:06.231+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4562) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:05.213902+0000 osd.0 (osd.0) 4562 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:37.112713+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4563 sent 4562 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:06.232489+0000 osd.0 (osd.0) 4563 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5898> 2025-11-24T21:06:07.235+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4563) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:06.232489+0000 osd.0 (osd.0) 4563 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:38.113323+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4564 sent 4563 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:07.236186+0000 osd.0 (osd.0) 4564 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5884> 2025-11-24T21:06:08.263+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:39.113758+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4565 sent 4564 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:08.264778+0000 osd.0 (osd.0) 4565 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4564) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:07.236186+0000 osd.0 (osd.0) 4564 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5872> 2025-11-24T21:06:09.241+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:40.114290+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4566 sent 4565 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:09.242127+0000 osd.0 (osd.0) 4566 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5863> 2025-11-24T21:06:10.248+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:41.114755+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 3 last_log 4567 sent 4566 num 3 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:10.249254+0000 osd.0 (osd.0) 4567 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5854> 2025-11-24T21:06:11.294+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4565) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:08.264778+0000 osd.0 (osd.0) 4565 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4566) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:09.242127+0000 osd.0 (osd.0) 4566 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:42.115031+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4568 sent 4567 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:11.295966+0000 osd.0 (osd.0) 4568 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5841> 2025-11-24T21:06:12.336+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4567) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:10.249254+0000 osd.0 (osd.0) 4567 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4568) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:11.295966+0000 osd.0 (osd.0) 4568 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:43.115325+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4569 sent 4568 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:12.337772+0000 osd.0 (osd.0) 4569 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5825> 2025-11-24T21:06:13.313+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4569) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:12.337772+0000 osd.0 (osd.0) 4569 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:44.115770+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4570 sent 4569 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:13.315459+0000 osd.0 (osd.0) 4570 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5814> 2025-11-24T21:06:14.336+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4570) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:13.315459+0000 osd.0 (osd.0) 4570 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:45.116194+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4571 sent 4570 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:14.337664+0000 osd.0 (osd.0) 4571 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5802> 2025-11-24T21:06:15.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4571) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:14.337664+0000 osd.0 (osd.0) 4571 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:46.116421+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4572 sent 4571 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:15.355777+0000 osd.0 (osd.0) 4572 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5791> 2025-11-24T21:06:16.313+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:47.116626+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4573 sent 4572 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:16.315197+0000 osd.0 (osd.0) 4573 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4572) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:15.355777+0000 osd.0 (osd.0) 4572 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5780> 2025-11-24T21:06:17.288+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:48.116911+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4574 sent 4573 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:17.290223+0000 osd.0 (osd.0) 4574 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5768> 2025-11-24T21:06:18.325+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4573) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:16.315197+0000 osd.0 (osd.0) 4573 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4574) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:17.290223+0000 osd.0 (osd.0) 4574 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:49.117202+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4575 sent 4574 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:18.327668+0000 osd.0 (osd.0) 4575 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5754> 2025-11-24T21:06:19.311+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4575) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:18.327668+0000 osd.0 (osd.0) 4575 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:50.117521+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4576 sent 4575 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:19.312986+0000 osd.0 (osd.0) 4576 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5743> 2025-11-24T21:06:20.267+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4576) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:19.312986+0000 osd.0 (osd.0) 4576 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:51.117789+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4577 sent 4576 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:20.269040+0000 osd.0 (osd.0) 4577 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5732> 2025-11-24T21:06:21.258+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4577) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:20.269040+0000 osd.0 (osd.0) 4577 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:52.118015+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4578 sent 4577 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:21.259413+0000 osd.0 (osd.0) 4578 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5721> 2025-11-24T21:06:22.346+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4578) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:21.259413+0000 osd.0 (osd.0) 4578 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:53.118246+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4579 sent 4578 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:22.347336+0000 osd.0 (osd.0) 4579 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5707> 2025-11-24T21:06:23.318+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4579) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:22.347336+0000 osd.0 (osd.0) 4579 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:54.118500+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4580 sent 4579 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:23.319330+0000 osd.0 (osd.0) 4580 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5695> 2025-11-24T21:06:24.361+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4580) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:23.319330+0000 osd.0 (osd.0) 4580 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:55.118730+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4581 sent 4580 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:24.361979+0000 osd.0 (osd.0) 4581 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5683> 2025-11-24T21:06:25.403+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4581) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:24.361979+0000 osd.0 (osd.0) 4581 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:56.118967+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4582 sent 4581 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:25.404279+0000 osd.0 (osd.0) 4582 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5671> 2025-11-24T21:06:26.443+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4582) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:25.404279+0000 osd.0 (osd.0) 4582 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:57.119215+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4583 sent 4582 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:26.444296+0000 osd.0 (osd.0) 4583 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5660> 2025-11-24T21:06:27.478+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4583) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:26.444296+0000 osd.0 (osd.0) 4583 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:58.119462+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4584 sent 4583 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:27.479380+0000 osd.0 (osd.0) 4584 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5646> 2025-11-24T21:06:28.482+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4584) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:27.479380+0000 osd.0 (osd.0) 4584 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:05:59.119672+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4585 sent 4584 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:28.483237+0000 osd.0 (osd.0) 4585 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5635> 2025-11-24T21:06:29.439+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4585) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:28.483237+0000 osd.0 (osd.0) 4585 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:00.119903+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4586 sent 4585 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:29.439848+0000 osd.0 (osd.0) 4586 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5624> 2025-11-24T21:06:30.417+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4586) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:29.439848+0000 osd.0 (osd.0) 4586 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:01.120126+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4587 sent 4586 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:30.417890+0000 osd.0 (osd.0) 4587 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5612> 2025-11-24T21:06:31.385+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4587) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:30.417890+0000 osd.0 (osd.0) 4587 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:02.120358+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4588 sent 4587 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:31.385948+0000 osd.0 (osd.0) 4588 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5601> 2025-11-24T21:06:32.427+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4588) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:31.385948+0000 osd.0 (osd.0) 4588 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:03.120652+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4589 sent 4588 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:32.428089+0000 osd.0 (osd.0) 4589 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5587> 2025-11-24T21:06:33.474+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4589) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:32.428089+0000 osd.0 (osd.0) 4589 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:04.121016+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4590 sent 4589 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:33.475290+0000 osd.0 (osd.0) 4590 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5575> 2025-11-24T21:06:34.465+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4590) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:33.475290+0000 osd.0 (osd.0) 4590 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:05.121308+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4591 sent 4590 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:34.465922+0000 osd.0 (osd.0) 4591 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5564> 2025-11-24T21:06:35.454+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4591) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:34.465922+0000 osd.0 (osd.0) 4591 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:06.121665+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4592 sent 4591 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:35.454821+0000 osd.0 (osd.0) 4592 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5553> 2025-11-24T21:06:36.459+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4592) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:35.454821+0000 osd.0 (osd.0) 4592 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:07.121882+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4593 sent 4592 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:36.460141+0000 osd.0 (osd.0) 4593 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5541> 2025-11-24T21:06:37.472+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4593) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:36.460141+0000 osd.0 (osd.0) 4593 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:08.122094+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4594 sent 4593 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:37.473402+0000 osd.0 (osd.0) 4594 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5526> 2025-11-24T21:06:38.480+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4594) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:37.473402+0000 osd.0 (osd.0) 4594 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:09.122311+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4595 sent 4594 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:38.481113+0000 osd.0 (osd.0) 4595 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5514> 2025-11-24T21:06:39.477+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4595) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:38.481113+0000 osd.0 (osd.0) 4595 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:10.122581+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4596 sent 4595 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:39.478286+0000 osd.0 (osd.0) 4596 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5502> 2025-11-24T21:06:40.492+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4596) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:39.478286+0000 osd.0 (osd.0) 4596 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:11.122843+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4597 sent 4596 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:40.492931+0000 osd.0 (osd.0) 4597 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5491> 2025-11-24T21:06:41.446+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4597) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:40.492931+0000 osd.0 (osd.0) 4597 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:12.123145+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4598 sent 4597 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:41.446874+0000 osd.0 (osd.0) 4598 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5480> 2025-11-24T21:06:42.475+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4598) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:41.446874+0000 osd.0 (osd.0) 4598 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:13.123356+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4599 sent 4598 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:42.475763+0000 osd.0 (osd.0) 4599 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5466> 2025-11-24T21:06:43.513+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4599) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:42.475763+0000 osd.0 (osd.0) 4599 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:14.123750+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4600 sent 4599 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:43.514874+0000 osd.0 (osd.0) 4600 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5455> 2025-11-24T21:06:44.530+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4600) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:43.514874+0000 osd.0 (osd.0) 4600 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:15.123975+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4601 sent 4600 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:44.532013+0000 osd.0 (osd.0) 4601 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5444> 2025-11-24T21:06:45.559+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4601) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:44.532013+0000 osd.0 (osd.0) 4601 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:16.124208+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4602 sent 4601 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:45.561620+0000 osd.0 (osd.0) 4602 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5432> 2025-11-24T21:06:46.583+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4602) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:45.561620+0000 osd.0 (osd.0) 4602 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:17.124453+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4603 sent 4602 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:46.584701+0000 osd.0 (osd.0) 4603 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5421> 2025-11-24T21:06:47.623+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4603) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:46.584701+0000 osd.0 (osd.0) 4603 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:18.124712+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4604 sent 4603 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:47.625358+0000 osd.0 (osd.0) 4604 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5405> 2025-11-24T21:06:48.606+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4604) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:47.625358+0000 osd.0 (osd.0) 4604 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:19.124931+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4605 sent 4604 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:48.607138+0000 osd.0 (osd.0) 4605 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5394> 2025-11-24T21:06:49.584+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4605) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:48.607138+0000 osd.0 (osd.0) 4605 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:20.125185+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4606 sent 4605 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:49.585551+0000 osd.0 (osd.0) 4606 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5383> 2025-11-24T21:06:50.584+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 36683776 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:21.125414+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4607 sent 4606 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:50.586245+0000 osd.0 (osd.0) 4607 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4606) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:49.585551+0000 osd.0 (osd.0) 4606 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5372> 2025-11-24T21:06:51.622+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 ms_handle_reset con 0x560fd0e3c800 session 0x560fd2136f00
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd3d29000
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:22.125643+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4608 sent 4607 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:51.624445+0000 osd.0 (osd.0) 4608 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4607) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:50.586245+0000 osd.0 (osd.0) 4607 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4608) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:51.624445+0000 osd.0 (osd.0) 4608 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 21 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5357> 2025-11-24T21:06:52.589+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:23.125825+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4609 sent 4608 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:52.591231+0000 osd.0 (osd.0) 4609 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4609) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:52.591231+0000 osd.0 (osd.0) 4609 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5342> 2025-11-24T21:06:53.559+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:24.125971+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4610 sent 4609 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:53.560721+0000 osd.0 (osd.0) 4610 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4610) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:53.560721+0000 osd.0 (osd.0) 4610 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5331> 2025-11-24T21:06:54.602+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:25.126106+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4611 sent 4610 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:54.604367+0000 osd.0 (osd.0) 4611 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4611) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:54.604367+0000 osd.0 (osd.0) 4611 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,4,3,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5319> 2025-11-24T21:06:55.583+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:26.126338+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4612 sent 4611 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:55.584875+0000 osd.0 (osd.0) 4612 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4612) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:55.584875+0000 osd.0 (osd.0) 4612 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5308> 2025-11-24T21:06:56.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:27.126566+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4613 sent 4612 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:56.597528+0000 osd.0 (osd.0) 4613 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4613) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:56.597528+0000 osd.0 (osd.0) 4613 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5297> 2025-11-24T21:06:57.565+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:28.126779+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4614 sent 4613 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:57.567100+0000 osd.0 (osd.0) 4614 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4614) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:57.567100+0000 osd.0 (osd.0) 4614 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5283> 2025-11-24T21:06:58.566+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:29.127010+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4615 sent 4614 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:58.567369+0000 osd.0 (osd.0) 4615 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4615) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:58.567369+0000 osd.0 (osd.0) 4615 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5272> 2025-11-24T21:06:59.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:30.127319+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4616 sent 4615 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:06:59.596072+0000 osd.0 (osd.0) 4616 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4616) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:06:59.596072+0000 osd.0 (osd.0) 4616 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5260> 2025-11-24T21:07:00.603+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:31.127560+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4617 sent 4616 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:00.604542+0000 osd.0 (osd.0) 4617 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5251> 2025-11-24T21:07:01.645+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4617) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:00.604542+0000 osd.0 (osd.0) 4617 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:32.127856+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4618 sent 4617 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:01.645452+0000 osd.0 (osd.0) 4618 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5240> 2025-11-24T21:07:02.616+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4618) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:01.645452+0000 osd.0 (osd.0) 4618 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:33.128042+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4619 sent 4618 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:02.616555+0000 osd.0 (osd.0) 4619 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5226> 2025-11-24T21:07:03.630+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4619) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:02.616555+0000 osd.0 (osd.0) 4619 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:34.128239+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4620 sent 4619 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:03.631276+0000 osd.0 (osd.0) 4620 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5215> 2025-11-24T21:07:04.645+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4620) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:03.631276+0000 osd.0 (osd.0) 4620 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:35.128482+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4621 sent 4620 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:04.646094+0000 osd.0 (osd.0) 4621 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5203> 2025-11-24T21:07:05.647+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4621) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:04.646094+0000 osd.0 (osd.0) 4621 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:36.128714+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4622 sent 4621 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:05.647686+0000 osd.0 (osd.0) 4622 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5192> 2025-11-24T21:07:06.612+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4622) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:05.647686+0000 osd.0 (osd.0) 4622 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:37.128937+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4623 sent 4622 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:06.613237+0000 osd.0 (osd.0) 4623 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5180> 2025-11-24T21:07:07.585+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4623) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:06.613237+0000 osd.0 (osd.0) 4623 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:38.129148+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4624 sent 4623 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:07.585957+0000 osd.0 (osd.0) 4624 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5166> 2025-11-24T21:07:08.629+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4624) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:07.585957+0000 osd.0 (osd.0) 4624 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:39.129433+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4625 sent 4624 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:08.629706+0000 osd.0 (osd.0) 4625 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5155> 2025-11-24T21:07:09.600+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4625) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:08.629706+0000 osd.0 (osd.0) 4625 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:40.129682+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4626 sent 4625 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:09.600914+0000 osd.0 (osd.0) 4626 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5143> 2025-11-24T21:07:10.602+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4626) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:09.600914+0000 osd.0 (osd.0) 4626 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:41.129930+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4627 sent 4626 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:10.603094+0000 osd.0 (osd.0) 4627 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5132> 2025-11-24T21:07:11.584+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4627) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:10.603094+0000 osd.0 (osd.0) 4627 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:42.130208+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4628 sent 4627 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:11.585439+0000 osd.0 (osd.0) 4628 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5121> 2025-11-24T21:07:12.540+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4628) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:11.585439+0000 osd.0 (osd.0) 4628 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:43.130490+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4629 sent 4628 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:12.540745+0000 osd.0 (osd.0) 4629 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5106> 2025-11-24T21:07:13.520+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4629) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:12.540745+0000 osd.0 (osd.0) 4629 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:44.130713+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4630 sent 4629 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:13.520884+0000 osd.0 (osd.0) 4630 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5094> 2025-11-24T21:07:14.546+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4630) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:13.520884+0000 osd.0 (osd.0) 4630 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:45.130915+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4631 sent 4630 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:14.547032+0000 osd.0 (osd.0) 4631 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5082> 2025-11-24T21:07:15.541+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4631) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:14.547032+0000 osd.0 (osd.0) 4631 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:46.131151+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4632 sent 4631 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:15.542470+0000 osd.0 (osd.0) 4632 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5071> 2025-11-24T21:07:16.523+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4632) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:15.542470+0000 osd.0 (osd.0) 4632 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:47.131402+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4633 sent 4632 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:16.523681+0000 osd.0 (osd.0) 4633 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5060> 2025-11-24T21:07:17.563+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4633) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:16.523681+0000 osd.0 (osd.0) 4633 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:48.131698+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4634 sent 4633 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:17.563878+0000 osd.0 (osd.0) 4634 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5046> 2025-11-24T21:07:18.593+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4634) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:17.563878+0000 osd.0 (osd.0) 4634 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:49.131999+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4635 sent 4634 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:18.593881+0000 osd.0 (osd.0) 4635 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5034> 2025-11-24T21:07:19.598+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4635) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:18.593881+0000 osd.0 (osd.0) 4635 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:50.132671+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4636 sent 4635 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:19.598876+0000 osd.0 (osd.0) 4636 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5023> 2025-11-24T21:07:20.574+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4636) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:19.598876+0000 osd.0 (osd.0) 4636 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:51.133017+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4637 sent 4636 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:20.575089+0000 osd.0 (osd.0) 4637 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5012> 2025-11-24T21:07:21.553+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 16 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4637) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:20.575089+0000 osd.0 (osd.0) 4637 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:52.133278+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4638 sent 4637 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:21.554527+0000 osd.0 (osd.0) 4638 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -5001> 2025-11-24T21:07:22.525+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4638) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:21.554527+0000 osd.0 (osd.0) 4638 : cluster [WRN] 16 slow requests (by type [ 'delayed' : 16 ] most affected pool [ 'vms' : 16 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:53.133566+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4639 sent 4638 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:22.526523+0000 osd.0 (osd.0) 4639 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4989> 2025-11-24T21:07:23.520+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4639) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:22.526523+0000 osd.0 (osd.0) 4639 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:54.133871+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4640 sent 4639 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:23.521269+0000 osd.0 (osd.0) 4640 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4974> 2025-11-24T21:07:24.535+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4640) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:23.521269+0000 osd.0 (osd.0) 4640 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:55.134122+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4641 sent 4640 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:24.536971+0000 osd.0 (osd.0) 4641 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4963> 2025-11-24T21:07:25.530+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4641) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:24.536971+0000 osd.0 (osd.0) 4641 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:56.134390+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4642 sent 4641 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:25.531346+0000 osd.0 (osd.0) 4642 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4951> 2025-11-24T21:07:26.551+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4642) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:25.531346+0000 osd.0 (osd.0) 4642 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:57.134794+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4643 sent 4642 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:26.552162+0000 osd.0 (osd.0) 4643 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4940> 2025-11-24T21:07:27.589+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4643) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:26.552162+0000 osd.0 (osd.0) 4643 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:58.135180+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4644 sent 4643 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:27.590759+0000 osd.0 (osd.0) 4644 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4926> 2025-11-24T21:07:28.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4644) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:27.590759+0000 osd.0 (osd.0) 4644 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:06:59.135461+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4645 sent 4644 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:28.597133+0000 osd.0 (osd.0) 4645 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4915> 2025-11-24T21:07:29.645+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4645) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:28.597133+0000 osd.0 (osd.0) 4645 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:00.135782+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4646 sent 4645 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:29.646946+0000 osd.0 (osd.0) 4646 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4904> 2025-11-24T21:07:30.648+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4646) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:29.646946+0000 osd.0 (osd.0) 4646 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:01.136048+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4647 sent 4646 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:30.650069+0000 osd.0 (osd.0) 4647 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4892> 2025-11-24T21:07:31.611+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4647) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:30.650069+0000 osd.0 (osd.0) 4647 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:02.136305+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4648 sent 4647 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:31.612813+0000 osd.0 (osd.0) 4648 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4880> 2025-11-24T21:07:32.651+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4648) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:31.612813+0000 osd.0 (osd.0) 4648 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:03.136574+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4649 sent 4648 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:32.653131+0000 osd.0 (osd.0) 4649 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4866> 2025-11-24T21:07:33.700+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4649) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:32.653131+0000 osd.0 (osd.0) 4649 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:04.136942+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4650 sent 4649 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:33.701476+0000 osd.0 (osd.0) 4650 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4855> 2025-11-24T21:07:34.737+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4650) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:33.701476+0000 osd.0 (osd.0) 4650 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:05.137202+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4651 sent 4650 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:34.738377+0000 osd.0 (osd.0) 4651 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4844> 2025-11-24T21:07:35.723+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4651) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:34.738377+0000 osd.0 (osd.0) 4651 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:06.137402+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4652 sent 4651 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:35.724573+0000 osd.0 (osd.0) 4652 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4833> 2025-11-24T21:07:36.676+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:07.137722+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4653 sent 4652 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:36.676692+0000 osd.0 (osd.0) 4653 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4652) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:35.724573+0000 osd.0 (osd.0) 4652 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4821> 2025-11-24T21:07:37.633+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:08.138119+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4654 sent 4653 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:37.633698+0000 osd.0 (osd.0) 4654 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4653) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:36.676692+0000 osd.0 (osd.0) 4653 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4654) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:37.633698+0000 osd.0 (osd.0) 4654 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4805> 2025-11-24T21:07:38.667+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:09.138683+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4655 sent 4654 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:38.668474+0000 osd.0 (osd.0) 4655 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4655) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:38.668474+0000 osd.0 (osd.0) 4655 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4794> 2025-11-24T21:07:39.683+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:10.139151+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4656 sent 4655 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:39.684056+0000 osd.0 (osd.0) 4656 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4656) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:39.684056+0000 osd.0 (osd.0) 4656 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4783> 2025-11-24T21:07:40.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:11.139459+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4657 sent 4656 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:40.695223+0000 osd.0 (osd.0) 4657 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4657) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:40.695223+0000 osd.0 (osd.0) 4657 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4771> 2025-11-24T21:07:41.689+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:12.139809+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4658 sent 4657 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:41.690481+0000 osd.0 (osd.0) 4658 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4658) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:41.690481+0000 osd.0 (osd.0) 4658 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4760> 2025-11-24T21:07:42.671+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:13.140152+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4659 sent 4658 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:42.672558+0000 osd.0 (osd.0) 4659 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4659) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:42.672558+0000 osd.0 (osd.0) 4659 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4746> 2025-11-24T21:07:43.703+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:14.140402+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4660 sent 4659 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:43.703980+0000 osd.0 (osd.0) 4660 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4660) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:43.703980+0000 osd.0 (osd.0) 4660 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4735> 2025-11-24T21:07:44.725+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:15.140690+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4661 sent 4660 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:44.726007+0000 osd.0 (osd.0) 4661 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4661) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:44.726007+0000 osd.0 (osd.0) 4661 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4724> 2025-11-24T21:07:45.722+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:16.141020+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4662 sent 4661 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:45.723386+0000 osd.0 (osd.0) 4662 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4662) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:45.723386+0000 osd.0 (osd.0) 4662 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4712> 2025-11-24T21:07:46.728+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:17.141263+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4663 sent 4662 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:46.729130+0000 osd.0 (osd.0) 4663 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 ms_handle_reset con 0x560fd0e3cc00 session 0x560fd2091c20
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd161c400
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4663) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:46.729130+0000 osd.0 (osd.0) 4663 : cluster [WRN] 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 22 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4699> 2025-11-24T21:07:47.695+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:18.141525+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4664 sent 4663 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:47.696235+0000 osd.0 (osd.0) 4664 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4664) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:47.696235+0000 osd.0 (osd.0) 4664 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4685> 2025-11-24T21:07:48.700+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:19.141856+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4665 sent 4664 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:48.701459+0000 osd.0 (osd.0) 4665 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4665) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:48.701459+0000 osd.0 (osd.0) 4665 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4673> 2025-11-24T21:07:49.745+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:20.142189+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4666 sent 4665 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:49.746183+0000 osd.0 (osd.0) 4666 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4666) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:49.746183+0000 osd.0 (osd.0) 4666 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4662> 2025-11-24T21:07:50.724+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:21.142422+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4667 sent 4666 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:50.725675+0000 osd.0 (osd.0) 4667 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4667) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:50.725675+0000 osd.0 (osd.0) 4667 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4651> 2025-11-24T21:07:51.687+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:22.142676+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4668 sent 4667 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:51.687995+0000 osd.0 (osd.0) 4668 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4668) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:51.687995+0000 osd.0 (osd.0) 4668 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4640> 2025-11-24T21:07:52.670+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:23.142916+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4669 sent 4668 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:52.671466+0000 osd.0 (osd.0) 4669 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4669) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:52.671466+0000 osd.0 (osd.0) 4669 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4629> 2025-11-24T21:07:53.656+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:24.143148+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4670 sent 4669 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:53.656747+0000 osd.0 (osd.0) 4670 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4670) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:53.656747+0000 osd.0 (osd.0) 4670 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4614> 2025-11-24T21:07:54.694+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:25.143459+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4671 sent 4670 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:54.694959+0000 osd.0 (osd.0) 4671 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4671) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:54.694959+0000 osd.0 (osd.0) 4671 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4603> 2025-11-24T21:07:55.649+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:26.143724+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4672 sent 4671 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:55.650547+0000 osd.0 (osd.0) 4672 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4672) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:55.650547+0000 osd.0 (osd.0) 4672 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4592> 2025-11-24T21:07:56.685+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:27.144002+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4673 sent 4672 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:56.686420+0000 osd.0 (osd.0) 4673 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4673) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:56.686420+0000 osd.0 (osd.0) 4673 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4581> 2025-11-24T21:07:57.719+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:28.144288+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4674 sent 4673 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:57.720271+0000 osd.0 (osd.0) 4674 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4674) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:57.720271+0000 osd.0 (osd.0) 4674 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4567> 2025-11-24T21:07:58.752+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:29.144549+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4675 sent 4674 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:58.753066+0000 osd.0 (osd.0) 4675 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4675) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:58.753066+0000 osd.0 (osd.0) 4675 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4556> 2025-11-24T21:07:59.744+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:30.144812+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4676 sent 4675 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:07:59.745852+0000 osd.0 (osd.0) 4676 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4676) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:07:59.745852+0000 osd.0 (osd.0) 4676 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4544> 2025-11-24T21:08:00.714+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:31.144993+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4677 sent 4676 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:00.715370+0000 osd.0 (osd.0) 4677 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4677) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:00.715370+0000 osd.0 (osd.0) 4677 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4533> 2025-11-24T21:08:01.674+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:32.145192+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4678 sent 4677 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:01.676563+0000 osd.0 (osd.0) 4678 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4678) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:01.676563+0000 osd.0 (osd.0) 4678 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4522> 2025-11-24T21:08:02.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:33.145433+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4679 sent 4678 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:02.714852+0000 osd.0 (osd.0) 4679 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4679) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:02.714852+0000 osd.0 (osd.0) 4679 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4510> 2025-11-24T21:08:03.718+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:34.145784+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4680 sent 4679 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:03.719705+0000 osd.0 (osd.0) 4680 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4680) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:03.719705+0000 osd.0 (osd.0) 4680 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4496> 2025-11-24T21:08:04.732+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:35.146087+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4681 sent 4680 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:04.734001+0000 osd.0 (osd.0) 4681 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4681) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:04.734001+0000 osd.0 (osd.0) 4681 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4485> 2025-11-24T21:08:05.775+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:36.146336+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4682 sent 4681 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:05.776911+0000 osd.0 (osd.0) 4682 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4682) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:05.776911+0000 osd.0 (osd.0) 4682 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4474> 2025-11-24T21:08:06.823+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:37.146622+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4683 sent 4682 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:06.824994+0000 osd.0 (osd.0) 4683 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4683) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:06.824994+0000 osd.0 (osd.0) 4683 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4462> 2025-11-24T21:08:07.816+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:38.146850+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4684 sent 4683 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:07.817871+0000 osd.0 (osd.0) 4684 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4684) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:07.817871+0000 osd.0 (osd.0) 4684 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4448> 2025-11-24T21:08:08.769+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:39.147076+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4685 sent 4684 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:08.770853+0000 osd.0 (osd.0) 4685 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4685) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:08.770853+0000 osd.0 (osd.0) 4685 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4437> 2025-11-24T21:08:09.740+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:40.147291+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4686 sent 4685 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:09.742620+0000 osd.0 (osd.0) 4686 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4686) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:09.742620+0000 osd.0 (osd.0) 4686 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4425> 2025-11-24T21:08:10.783+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:41.147541+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4687 sent 4686 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:10.784514+0000 osd.0 (osd.0) 4687 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4687) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:10.784514+0000 osd.0 (osd.0) 4687 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4414> 2025-11-24T21:08:11.825+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:42.147848+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4688 sent 4687 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:11.827284+0000 osd.0 (osd.0) 4688 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4688) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:11.827284+0000 osd.0 (osd.0) 4688 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4403> 2025-11-24T21:08:12.831+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:43.148121+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4689 sent 4688 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:12.833148+0000 osd.0 (osd.0) 4689 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4689) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:12.833148+0000 osd.0 (osd.0) 4689 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4389> 2025-11-24T21:08:13.803+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:44.148319+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4690 sent 4689 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:13.803783+0000 osd.0 (osd.0) 4690 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4690) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:13.803783+0000 osd.0 (osd.0) 4690 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4378> 2025-11-24T21:08:14.830+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:45.148541+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4691 sent 4690 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:14.831719+0000 osd.0 (osd.0) 4691 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4691) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:14.831719+0000 osd.0 (osd.0) 4691 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4366> 2025-11-24T21:08:15.876+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:46.148714+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4692 sent 4691 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:15.876516+0000 osd.0 (osd.0) 4692 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4692) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:15.876516+0000 osd.0 (osd.0) 4692 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4355> 2025-11-24T21:08:16.924+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:47.148992+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4693 sent 4692 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:16.925155+0000 osd.0 (osd.0) 4693 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 32489472 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4693) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:16.925155+0000 osd.0 (osd.0) 4693 : cluster [WRN] 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'vms' : 15 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4343> 2025-11-24T21:08:17.912+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 ms_handle_reset con 0x560fd0e3d000 session 0x560fd218d2c0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd161d400
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:48.149285+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4694 sent 4693 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:17.913263+0000 osd.0 (osd.0) 4694 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4694) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:17.913263+0000 osd.0 (osd.0) 4694 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,6,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4326> 2025-11-24T21:08:18.960+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:49.149484+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4695 sent 4694 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:18.960776+0000 osd.0 (osd.0) 4695 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4695) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:18.960776+0000 osd.0 (osd.0) 4695 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4315> 2025-11-24T21:08:19.958+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:50.149764+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4696 sent 4695 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:19.959559+0000 osd.0 (osd.0) 4696 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4696) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:19.959559+0000 osd.0 (osd.0) 4696 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4304> 2025-11-24T21:08:20.992+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:51.149987+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4697 sent 4696 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:20.993076+0000 osd.0 (osd.0) 4697 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4697) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:20.993076+0000 osd.0 (osd.0) 4697 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4292> 2025-11-24T21:08:21.952+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:52.150210+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4698 sent 4697 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:21.953261+0000 osd.0 (osd.0) 4698 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4698) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:21.953261+0000 osd.0 (osd.0) 4698 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4281> 2025-11-24T21:08:22.905+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:53.150419+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4699 sent 4698 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:22.906335+0000 osd.0 (osd.0) 4699 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 8612 writes, 33K keys, 8612 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8612 writes, 2117 syncs, 4.07 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 137 writes, 395 keys, 137 commit groups, 1.0 writes per commit group, ingest: 0.23 MB, 0.00 MB/s
                                           Interval WAL: 137 writes, 64 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4699) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:22.906335+0000 osd.0 (osd.0) 4699 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4265> 2025-11-24T21:08:23.943+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:54.150634+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4700 sent 4699 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:23.944565+0000 osd.0 (osd.0) 4700 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4700) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:23.944565+0000 osd.0 (osd.0) 4700 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4254> 2025-11-24T21:08:24.940+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:55.150846+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4701 sent 4700 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:24.941285+0000 osd.0 (osd.0) 4701 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4701) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:24.941285+0000 osd.0 (osd.0) 4701 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4243> 2025-11-24T21:08:25.950+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:56.151076+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4702 sent 4701 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:25.951245+0000 osd.0 (osd.0) 4702 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4702) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:25.951245+0000 osd.0 (osd.0) 4702 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4232> 2025-11-24T21:08:26.918+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:57.151488+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4703 sent 4702 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:26.919293+0000 osd.0 (osd.0) 4703 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4703) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:26.919293+0000 osd.0 (osd.0) 4703 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4220> 2025-11-24T21:08:27.968+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:58.151711+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4704 sent 4703 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:27.969540+0000 osd.0 (osd.0) 4704 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4704) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:27.969540+0000 osd.0 (osd.0) 4704 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4206> 2025-11-24T21:08:28.998+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:07:59.151888+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4705 sent 4704 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:28.999576+0000 osd.0 (osd.0) 4705 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4705) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:28.999576+0000 osd.0 (osd.0) 4705 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4195> 2025-11-24T21:08:30.016+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:00.152732+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4706 sent 4705 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:30.016942+0000 osd.0 (osd.0) 4706 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4706) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:30.016942+0000 osd.0 (osd.0) 4706 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4184> 2025-11-24T21:08:31.031+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:01.152949+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4707 sent 4706 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:31.031787+0000 osd.0 (osd.0) 4707 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4707) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:31.031787+0000 osd.0 (osd.0) 4707 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4173> 2025-11-24T21:08:32.048+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:02.153176+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4708 sent 4707 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:32.049479+0000 osd.0 (osd.0) 4708 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4708) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:32.049479+0000 osd.0 (osd.0) 4708 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4161> 2025-11-24T21:08:33.060+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:03.153395+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4709 sent 4708 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:33.061409+0000 osd.0 (osd.0) 4709 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4709) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:33.061409+0000 osd.0 (osd.0) 4709 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4147> 2025-11-24T21:08:34.019+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:04.153670+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4710 sent 4709 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:34.020194+0000 osd.0 (osd.0) 4710 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4138> 2025-11-24T21:08:34.990+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4710) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:34.020194+0000 osd.0 (osd.0) 4710 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:05.153932+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4711 sent 4710 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:34.991515+0000 osd.0 (osd.0) 4711 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4127> 2025-11-24T21:08:36.007+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4711) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:34.991515+0000 osd.0 (osd.0) 4711 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:06.154199+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4712 sent 4711 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:36.008548+0000 osd.0 (osd.0) 4712 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4116> 2025-11-24T21:08:36.971+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4712) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:36.008548+0000 osd.0 (osd.0) 4712 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:07.154419+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4713 sent 4712 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:36.972035+0000 osd.0 (osd.0) 4713 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4105> 2025-11-24T21:08:38.004+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4713) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:36.972035+0000 osd.0 (osd.0) 4713 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:08.154700+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4714 sent 4713 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:38.005177+0000 osd.0 (osd.0) 4714 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4090> 2025-11-24T21:08:39.033+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4714) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:38.005177+0000 osd.0 (osd.0) 4714 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:09.154944+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4715 sent 4714 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:39.034208+0000 osd.0 (osd.0) 4715 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4079> 2025-11-24T21:08:40.034+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4715) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:39.034208+0000 osd.0 (osd.0) 4715 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:10.155505+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4716 sent 4715 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:40.035722+0000 osd.0 (osd.0) 4716 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4068> 2025-11-24T21:08:41.061+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4716) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:40.035722+0000 osd.0 (osd.0) 4716 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:11.155712+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4717 sent 4716 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:41.063146+0000 osd.0 (osd.0) 4717 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4055> 2025-11-24T21:08:42.034+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4717) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:41.063146+0000 osd.0 (osd.0) 4717 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:12.155985+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4718 sent 4717 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:42.035917+0000 osd.0 (osd.0) 4718 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4044> 2025-11-24T21:08:43.002+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4718) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:42.035917+0000 osd.0 (osd.0) 4718 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:13.156165+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4719 sent 4718 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:43.003187+0000 osd.0 (osd.0) 4719 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4030> 2025-11-24T21:08:44.017+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4719) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:43.003187+0000 osd.0 (osd.0) 4719 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:14.156402+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4720 sent 4719 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:44.019236+0000 osd.0 (osd.0) 4720 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4019> 2025-11-24T21:08:45.019+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4720) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:44.019236+0000 osd.0 (osd.0) 4720 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:15.156661+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4721 sent 4720 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:45.021055+0000 osd.0 (osd.0) 4721 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,7,1,12,1])
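
Note: the heartbeat line above publishes this OSD's osd_stat to its peers [1,2]. The store_statfs numbers are hex byte counts; reading them by magnitude (the labels below are inferred from the values, not taken from Ceph source), the device is ~20 GiB with ~19.96 GiB free, ~33.6 MiB of object data stored in ~34.7 MiB of allocations, ~1.6 KiB of omap, and ~69.6 MiB of metadata; "op hist" is a small histogram of recent ops. A sketch of the conversion:

    # Hex byte counts from the store_statfs line above, converted to
    # MiB. Position meanings are inferred from magnitude and are an
    # assumption; the zero-valued fields (internal reservation,
    # compression) are omitted.
    stat = {
        "available":      0x4f93c0000,
        "total":          0x4ffc00000,
        "data_stored":    0x2195cba,
        "data_allocated": 0x22ad000,
        "omap":           0x639,
        "meta":           0x458f9c7,
    }
    for name, n in stat.items():
        print(f"{name:>14}: {n / 2**20:12.2f} MiB")
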
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -4007> 2025-11-24T21:08:46.067+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4721) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:45.021055+0000 osd.0 (osd.0) 4721 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:16.156930+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4722 sent 4721 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:46.069548+0000 osd.0 (osd.0) 4722 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3996> 2025-11-24T21:08:47.070+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 23 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4722) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:46.069548+0000 osd.0 (osd.0) 4722 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:17.157147+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4723 sent 4722 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:47.071993+0000 osd.0 (osd.0) 4723 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3984> 2025-11-24T21:08:48.110+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4723) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:47.071993+0000 osd.0 (osd.0) 4723 : cluster [WRN] 23 slow requests (by type [ 'delayed' : 23 ] most affected pool [ 'vms' : 23 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:18.157329+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4724 sent 4723 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:48.112430+0000 osd.0 (osd.0) 4724 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
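
Note: at 21:08:48 the count steps from 23 to 24 slow ops while the oldest op is unchanged: one more queued request has crossed the complaint threshold (30 s by default via osd_op_complaint_time, if not overridden here) and nothing is completing. "by type [ 'delayed' ]" means the ops are waiting in a queue rather than erroring out. While the daemon is still up, the usual way to see exactly which ops are stuck is the admin socket; a sketch, assuming the socket is reachable from where this runs (for containerized OSDs that typically means inside the container or a cephadm shell):

    # List in-flight ops on osd.0 via the admin socket.
    # "ceph daemon osd.0 dump_ops_in_flight" is a standard
    # admin-socket command; the JSON field names used below are the
    # commonly returned ones and should be verified on your version.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "daemon", "osd.0", "dump_ops_in_flight"],
        capture_output=True, text=True, check=True).stdout
    for op in json.loads(out).get("ops", []):
        print(op.get("age"), str(op.get("description", ""))[:100])
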
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3970> 2025-11-24T21:08:49.093+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4724) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:48.112430+0000 osd.0 (osd.0) 4724 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:19.157548+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4725 sent 4724 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:49.094694+0000 osd.0 (osd.0) 4725 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3959> 2025-11-24T21:08:50.061+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4725) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:49.094694+0000 osd.0 (osd.0) 4725 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:20.157997+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4726 sent 4725 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:50.062716+0000 osd.0 (osd.0) 4726 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3948> 2025-11-24T21:08:51.060+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:21.158192+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4727 sent 4726 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:51.061520+0000 osd.0 (osd.0) 4727 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4726) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:50.062716+0000 osd.0 (osd.0) 4726 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
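
Note: the counters above show the only backlog in this excerpt: entry 4727 is queued before the ack for 4726 has arrived, so log_queue briefly reads 2 (num 2, unsent 1, sending 1); once handle_log_ack log(last 4726) lands, the client returns to its steady one-in-flight rhythm (last_log N, sent N-1). A consistency check over the same assumed extract:

    # Verify log_client sequencing: "will send" numbers and monitor
    # acks should each be monotonic (same hypothetical extract as above).
    import re

    text = open("compute-0-ceph-osd.log").read()    # hypothetical path
    sent = [int(n) for n in re.findall(
        r"will send .* \(osd\.0\) (\d+) :", text)]
    acked = [int(n) for n in re.findall(
        r"handle_log_ack log\(last (\d+)\)", text)]
    assert sent == sorted(sent), "out-of-order send"
    assert acked == sorted(acked), "out-of-order ack"
    print(f"{len(sent)} sends, acked through {max(acked)}")
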
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3936> 2025-11-24T21:08:52.106+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:22.158394+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4728 sent 4727 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:52.107101+0000 osd.0 (osd.0) 4728 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4727) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:51.061520+0000 osd.0 (osd.0) 4727 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4728) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:52.107101+0000 osd.0 (osd.0) 4728 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3922> 2025-11-24T21:08:53.140+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:23.158691+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4729 sent 4728 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:53.141425+0000 osd.0 (osd.0) 4729 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4729) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:53.141425+0000 osd.0 (osd.0) 4729 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:24.158884+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3905> 2025-11-24T21:08:54.185+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:25.159027+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4730 sent 4729 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:54.186284+0000 osd.0 (osd.0) 4730 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3896> 2025-11-24T21:08:55.193+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4730) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:54.186284+0000 osd.0 (osd.0) 4730 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:26.159287+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4731 sent 4730 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:55.194201+0000 osd.0 (osd.0) 4731 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4731) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:55.194201+0000 osd.0 (osd.0) 4731 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3883> 2025-11-24T21:08:56.230+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:27.159525+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4732 sent 4731 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:56.231107+0000 osd.0 (osd.0) 4732 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4732) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:56.231107+0000 osd.0 (osd.0) 4732 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3871> 2025-11-24T21:08:57.248+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:28.159813+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4733 sent 4732 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:57.249181+0000 osd.0 (osd.0) 4733 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4733) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:57.249181+0000 osd.0 (osd.0) 4733 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3860> 2025-11-24T21:08:58.262+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:29.159999+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4734 sent 4733 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:58.263029+0000 osd.0 (osd.0) 4734 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4734) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:58.263029+0000 osd.0 (osd.0) 4734 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3846> 2025-11-24T21:08:59.259+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:30.160280+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4735 sent 4734 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:08:59.260152+0000 osd.0 (osd.0) 4735 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4735) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:08:59.260152+0000 osd.0 (osd.0) 4735 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3835> 2025-11-24T21:09:00.264+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:31.160502+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4736 sent 4735 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:00.265444+0000 osd.0 (osd.0) 4736 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4736) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:00.265444+0000 osd.0 (osd.0) 4736 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3822> 2025-11-24T21:09:01.281+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:32.160716+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4737 sent 4736 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:01.282513+0000 osd.0 (osd.0) 4737 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4737) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:01.282513+0000 osd.0 (osd.0) 4737 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3810> 2025-11-24T21:09:02.296+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:33.160949+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4738 sent 4737 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:02.296970+0000 osd.0 (osd.0) 4738 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3801> 2025-11-24T21:09:03.258+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4738) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:02.296970+0000 osd.0 (osd.0) 4738 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:34.161110+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4739 sent 4738 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:03.259216+0000 osd.0 (osd.0) 4739 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3787> 2025-11-24T21:09:04.244+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4739) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:03.259216+0000 osd.0 (osd.0) 4739 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:35.161286+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4740 sent 4739 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:04.244769+0000 osd.0 (osd.0) 4740 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3775> 2025-11-24T21:09:05.276+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4740) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:04.244769+0000 osd.0 (osd.0) 4740 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:36.161462+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4741 sent 4740 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:05.276949+0000 osd.0 (osd.0) 4741 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3764> 2025-11-24T21:09:06.279+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4741) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:05.276949+0000 osd.0 (osd.0) 4741 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:37.161698+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4742 sent 4741 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:06.280224+0000 osd.0 (osd.0) 4742 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3752> 2025-11-24T21:09:07.328+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4742) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:06.280224+0000 osd.0 (osd.0) 4742 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:38.161915+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4743 sent 4742 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:07.328856+0000 osd.0 (osd.0) 4743 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3741> 2025-11-24T21:09:08.304+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4743) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:07.328856+0000 osd.0 (osd.0) 4743 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:39.162118+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4744 sent 4743 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:08.305276+0000 osd.0 (osd.0) 4744 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3727> 2025-11-24T21:09:09.349+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4744) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:08.305276+0000 osd.0 (osd.0) 4744 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:40.162303+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4745 sent 4744 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:09.350072+0000 osd.0 (osd.0) 4745 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3716> 2025-11-24T21:09:10.339+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4745) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:09.350072+0000 osd.0 (osd.0) 4745 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:41.162506+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4746 sent 4745 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:10.339813+0000 osd.0 (osd.0) 4746 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3704> 2025-11-24T21:09:11.346+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4746) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:10.339813+0000 osd.0 (osd.0) 4746 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
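[Every cycle re-emits the same cluster-log warning under a fresh sequence number, and the counts never move: 24 ops, all typed 'delayed', all in the vms pool. A flat count like this indicates requests that are stuck outright rather than a moving backlog. A sketch that tallies the log_channel WRN lines from stdin to confirm this over a longer capture; the regex mirrors the observed format and is illustrative only:]

    import re
    import sys

    # Tally "[WRN] : N slow requests ... pool [ 'x' : M ]" lines
    # (journald text on stdin) to see whether the counts move over time.
    WRN_RE = re.compile(
        r"\[WRN\] : (\d+) slow requests .*"
        r"most affected pool \[ '([^']+)' : (\d+) \]"
    )

    seen = {}
    for line in sys.stdin:
        m = WRN_RE.search(line)
        if m:
            total, pool, per_pool = int(m.group(1)), m.group(2), int(m.group(3))
            n, _ = seen.get(pool, (0, None))
            seen[pool] = (n + 1, (total, per_pool))

    for pool, (n, last) in seen.items():
        print(f"pool {pool!r}: {n} warnings, last total/per-pool = {last}")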
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:42.162743+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4747 sent 4746 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:11.347072+0000 osd.0 (osd.0) 4747 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3692> 2025-11-24T21:09:12.391+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4747) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:11.347072+0000 osd.0 (osd.0) 4747 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:43.162916+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4748 sent 4747 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:12.392440+0000 osd.0 (osd.0) 4748 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3681> 2025-11-24T21:09:13.364+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
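[The prioritycache/rocksdb/MempoolThread cluster above is the periodic memory autotuner pass: the target is 4294967296 bytes (a 4 GiB osd_memory_target) against only ~120 MiB of mapped heap, so the tuner leaves the cache budget unchanged at 2845415832 bytes ("old mem" == "new mem"), re-splits it across the kv/kv_onode/meta/data shards, and recomputes the RocksDB high-priority pool ratios at the same time. A sketch printing each shard's share of the budget, using the numbers from the _resize_shards line:]

    # Shares handed out by _resize_shards above.
    cache_size = 2845415832
    shards = {
        "kv":       1207959552,
        "kv_onode":  234881024,
        "meta":     1140850688,
        "data":      218103808,
    }
    for name, alloc in shards.items():
        print(f"{name:9s} {alloc / 2**30:5.2f} GiB  {alloc / cache_size:6.1%}")
    # ~98.5% of the budget is assigned; actual use (kv_used etc.) is tiny,
    # consistent with an essentially idle OSD.
    print(f"assigned  {sum(shards.values()) / cache_size:6.1%}")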
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4748) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:12.392440+0000 osd.0 (osd.0) 4748 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:44.163122+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4749 sent 4748 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:13.365029+0000 osd.0 (osd.0) 4749 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3667> 2025-11-24T21:09:14.353+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4749) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:13.365029+0000 osd.0 (osd.0) 4749 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:45.163461+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4750 sent 4749 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:14.354942+0000 osd.0 (osd.0) 4750 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3656> 2025-11-24T21:09:15.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4750) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:14.354942+0000 osd.0 (osd.0) 4750 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:46.163785+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4751 sent 4750 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:15.356550+0000 osd.0 (osd.0) 4751 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3645> 2025-11-24T21:09:16.380+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4751) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:15.356550+0000 osd.0 (osd.0) 4751 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:47.164093+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4752 sent 4751 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:16.382252+0000 osd.0 (osd.0) 4752 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3634> 2025-11-24T21:09:17.405+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])

Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4752) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:16.382252+0000 osd.0 (osd.0) 4752 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:48.164288+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4753 sent 4752 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:17.407069+0000 osd.0 (osd.0) 4753 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3621> 2025-11-24T21:09:18.363+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418978 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4753) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:17.407069+0000 osd.0 (osd.0) 4753 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:49.164672+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4754 sent 4753 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:18.365218+0000 osd.0 (osd.0) 4754 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3607> 2025-11-24T21:09:19.385+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,7,1,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4754) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:18.365218+0000 osd.0 (osd.0) 4754 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:50.165021+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4755 sent 4754 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:19.387399+0000 osd.0 (osd.0) 4755 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3595> 2025-11-24T21:09:20.368+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4755) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:19.387399+0000 osd.0 (osd.0) 4755 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:51.165282+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4756 sent 4755 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:20.369224+0000 osd.0 (osd.0) 4756 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3584> 2025-11-24T21:09:21.412+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:52.165688+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4757 sent 4756 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:21.413960+0000 osd.0 (osd.0) 4757 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3575> 2025-11-24T21:09:22.376+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 28286976 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4756) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:20.369224+0000 osd.0 (osd.0) 4756 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:53.165976+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4758 sent 4757 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:22.377891+0000 osd.0 (osd.0) 4758 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 549.287780762s of 549.434631348s, submitted: 45
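[This one-off line reports the BlueStore kv sync thread idle for 549.288 s of 549.435 s across 45 submissions — the RocksDB commit path is more than 99.97% idle, which argues the 24 slow ops are blocked somewhere other than storage commit. The arithmetic, using the numbers above:]

    # Numbers from the _kv_sync_thread utilization line above.
    idle, total, submitted = 549.287780762, 549.434631348, 45
    busy = total - idle
    print(f"busy {busy * 1000:.1f} ms of {total:.1f} s "
          f"({busy / total:.4%}), {busy / submitted * 1000:.2f} ms/submit")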
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3563> 2025-11-24T21:09:23.411+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4757) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:21.413960+0000 osd.0 (osd.0) 4757 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4758) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:22.377891+0000 osd.0 (osd.0) 4758 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:54.166265+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4759 sent 4758 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:23.412501+0000 osd.0 (osd.0) 4759 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3547> 2025-11-24T21:09:24.420+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4759) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:23.412501+0000 osd.0 (osd.0) 4759 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:55.166505+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4760 sent 4759 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:24.422049+0000 osd.0 (osd.0) 4760 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3536> 2025-11-24T21:09:25.402+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:56.166787+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4761 sent 4760 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:25.403738+0000 osd.0 (osd.0) 4761 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4760) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:24.422049+0000 osd.0 (osd.0) 4760 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3524> 2025-11-24T21:09:26.379+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:57.167125+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4762 sent 4761 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:26.380757+0000 osd.0 (osd.0) 4762 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4761) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:25.403738+0000 osd.0 (osd.0) 4761 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4762) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:26.380757+0000 osd.0 (osd.0) 4762 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3511> 2025-11-24T21:09:27.387+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:58.168050+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4763 sent 4762 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:27.388681+0000 osd.0 (osd.0) 4763 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3502> 2025-11-24T21:09:28.391+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4763) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:27.388681+0000 osd.0 (osd.0) 4763 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:08:59.168321+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4764 sent 4763 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:28.393506+0000 osd.0 (osd.0) 4764 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3487> 2025-11-24T21:09:29.357+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4764) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:28.393506+0000 osd.0 (osd.0) 4764 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:00.168550+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4765 sent 4764 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:29.358568+0000 osd.0 (osd.0) 4765 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3476> 2025-11-24T21:09:30.324+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4765) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:29.358568+0000 osd.0 (osd.0) 4765 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:01.168843+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4766 sent 4765 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:30.324468+0000 osd.0 (osd.0) 4766 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3465> 2025-11-24T21:09:31.337+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4766) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:30.324468+0000 osd.0 (osd.0) 4766 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:02.169083+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4767 sent 4766 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:31.337761+0000 osd.0 (osd.0) 4767 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3453> 2025-11-24T21:09:32.383+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:03.169295+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4768 sent 4767 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:32.384307+0000 osd.0 (osd.0) 4768 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4767) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:31.337761+0000 osd.0 (osd.0) 4767 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3441> 2025-11-24T21:09:33.368+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:04.169509+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4769 sent 4768 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:33.369113+0000 osd.0 (osd.0) 4769 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4768) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:32.384307+0000 osd.0 (osd.0) 4768 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4769) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:33.369113+0000 osd.0 (osd.0) 4769 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3425> 2025-11-24T21:09:34.338+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:05.169755+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4770 sent 4769 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:34.339119+0000 osd.0 (osd.0) 4770 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3415> 2025-11-24T21:09:35.329+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4770) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:34.339119+0000 osd.0 (osd.0) 4770 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:06.170032+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4771 sent 4770 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:35.330014+0000 osd.0 (osd.0) 4771 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3403> 2025-11-24T21:09:36.369+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4771) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:35.330014+0000 osd.0 (osd.0) 4771 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:07.170214+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4772 sent 4771 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:36.370411+0000 osd.0 (osd.0) 4772 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3392> 2025-11-24T21:09:37.415+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4772) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:36.370411+0000 osd.0 (osd.0) 4772 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:08.170400+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4773 sent 4772 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:37.415773+0000 osd.0 (osd.0) 4773 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3381> 2025-11-24T21:09:38.366+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:09.170676+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4774 sent 4773 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:38.367539+0000 osd.0 (osd.0) 4774 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4773) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:37.415773+0000 osd.0 (osd.0) 4773 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3366> 2025-11-24T21:09:39.396+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:10.170988+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4775 sent 4774 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:39.397359+0000 osd.0 (osd.0) 4775 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3356> 2025-11-24T21:09:40.356+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4774) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:38.367539+0000 osd.0 (osd.0) 4774 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4775) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:39.397359+0000 osd.0 (osd.0) 4775 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:11.171283+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4776 sent 4775 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:40.357531+0000 osd.0 (osd.0) 4776 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3343> 2025-11-24T21:09:41.408+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4776) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:40.357531+0000 osd.0 (osd.0) 4776 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:12.171551+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4777 sent 4776 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:41.408869+0000 osd.0 (osd.0) 4777 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3332> 2025-11-24T21:09:42.431+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4777) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:41.408869+0000 osd.0 (osd.0) 4777 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:13.171848+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4778 sent 4777 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:42.432030+0000 osd.0 (osd.0) 4778 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3320> 2025-11-24T21:09:43.423+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4778) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:42.432030+0000 osd.0 (osd.0) 4778 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:14.172166+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4779 sent 4778 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:43.424031+0000 osd.0 (osd.0) 4779 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3306> 2025-11-24T21:09:44.402+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4779) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:43.424031+0000 osd.0 (osd.0) 4779 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:15.172356+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4780 sent 4779 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:44.403407+0000 osd.0 (osd.0) 4780 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3295> 2025-11-24T21:09:45.431+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4780) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:44.403407+0000 osd.0 (osd.0) 4780 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:16.172569+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4781 sent 4780 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:45.432487+0000 osd.0 (osd.0) 4781 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3284> 2025-11-24T21:09:46.451+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4781) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:45.432487+0000 osd.0 (osd.0) 4781 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:17.172857+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4782 sent 4781 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:46.451851+0000 osd.0 (osd.0) 4782 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3273> 2025-11-24T21:09:47.437+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:18.173059+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4783 sent 4782 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:47.438110+0000 osd.0 (osd.0) 4783 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4782) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:46.451851+0000 osd.0 (osd.0) 4782 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3261> 2025-11-24T21:09:48.390+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:19.173459+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4784 sent 4783 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:48.391346+0000 osd.0 (osd.0) 4784 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4783) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:47.438110+0000 osd.0 (osd.0) 4783 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4784) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:48.391346+0000 osd.0 (osd.0) 4784 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3245> 2025-11-24T21:09:49.425+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:20.173797+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4785 sent 4784 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:49.426071+0000 osd.0 (osd.0) 4785 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4785) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:49.426071+0000 osd.0 (osd.0) 4785 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3233> 2025-11-24T21:09:50.436+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:21.174065+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4786 sent 4785 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:50.436767+0000 osd.0 (osd.0) 4786 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4786) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:50.436767+0000 osd.0 (osd.0) 4786 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3221> 2025-11-24T21:09:51.411+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:22.174345+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4787 sent 4786 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:51.412533+0000 osd.0 (osd.0) 4787 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4787) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:51.412533+0000 osd.0 (osd.0) 4787 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3210> 2025-11-24T21:09:52.415+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:23.174615+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4788 sent 4787 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:52.416299+0000 osd.0 (osd.0) 4788 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3200> 2025-11-24T21:09:53.377+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4788) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:52.416299+0000 osd.0 (osd.0) 4788 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:24.174916+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4789 sent 4788 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:53.378838+0000 osd.0 (osd.0) 4789 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3186> 2025-11-24T21:09:54.415+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4789) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:53.378838+0000 osd.0 (osd.0) 4789 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:25.175214+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4790 sent 4789 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:54.416550+0000 osd.0 (osd.0) 4790 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3175> 2025-11-24T21:09:55.414+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4790) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:54.416550+0000 osd.0 (osd.0) 4790 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:26.175471+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4791 sent 4790 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:55.415943+0000 osd.0 (osd.0) 4791 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3164> 2025-11-24T21:09:56.433+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4791) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:55.415943+0000 osd.0 (osd.0) 4791 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:27.175787+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4792 sent 4791 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:56.434935+0000 osd.0 (osd.0) 4792 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3153> 2025-11-24T21:09:57.389+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4792) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:56.434935+0000 osd.0 (osd.0) 4792 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:28.176150+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4793 sent 4792 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:57.391000+0000 osd.0 (osd.0) 4793 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3141> 2025-11-24T21:09:58.392+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4793) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:57.391000+0000 osd.0 (osd.0) 4793 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:29.176367+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4794 sent 4793 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:58.394029+0000 osd.0 (osd.0) 4794 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3127> 2025-11-24T21:09:59.397+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4794) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:58.394029+0000 osd.0 (osd.0) 4794 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:30.176724+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4795 sent 4794 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:09:59.399086+0000 osd.0 (osd.0) 4795 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3116> 2025-11-24T21:10:00.442+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4795) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:09:59.399086+0000 osd.0 (osd.0) 4795 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:31.176967+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4796 sent 4795 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:00.443281+0000 osd.0 (osd.0) 4796 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3105> 2025-11-24T21:10:01.427+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4796) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:00.443281+0000 osd.0 (osd.0) 4796 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:32.177227+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4797 sent 4796 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:01.428849+0000 osd.0 (osd.0) 4797 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3093> 2025-11-24T21:10:02.457+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4797) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:01.428849+0000 osd.0 (osd.0) 4797 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:33.177483+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4798 sent 4797 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:02.458670+0000 osd.0 (osd.0) 4798 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3082> 2025-11-24T21:10:03.435+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4798) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:02.458670+0000 osd.0 (osd.0) 4798 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:34.179084+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4799 sent 4798 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:03.436451+0000 osd.0 (osd.0) 4799 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3068> 2025-11-24T21:10:04.476+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4799) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:03.436451+0000 osd.0 (osd.0) 4799 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:35.179315+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4800 sent 4799 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:04.478112+0000 osd.0 (osd.0) 4800 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3056> 2025-11-24T21:10:05.455+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4800) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:04.478112+0000 osd.0 (osd.0) 4800 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:36.179699+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4801 sent 4800 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:05.456668+0000 osd.0 (osd.0) 4801 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3045> 2025-11-24T21:10:06.427+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4801) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:05.456668+0000 osd.0 (osd.0) 4801 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:37.179929+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4802 sent 4801 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:06.427954+0000 osd.0 (osd.0) 4802 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3034> 2025-11-24T21:10:07.401+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4802) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:06.427954+0000 osd.0 (osd.0) 4802 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:38.180137+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4803 sent 4802 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:07.401453+0000 osd.0 (osd.0) 4803 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3023> 2025-11-24T21:10:08.413+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4803) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:07.401453+0000 osd.0 (osd.0) 4803 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:39.180371+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4804 sent 4803 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:08.414241+0000 osd.0 (osd.0) 4804 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -3009> 2025-11-24T21:10:09.461+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4804) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:08.414241+0000 osd.0 (osd.0) 4804 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:40.180679+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4805 sent 4804 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:09.462186+0000 osd.0 (osd.0) 4805 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2997> 2025-11-24T21:10:10.440+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4805) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:09.462186+0000 osd.0 (osd.0) 4805 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:41.181050+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4806 sent 4805 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:10.441109+0000 osd.0 (osd.0) 4806 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2986> 2025-11-24T21:10:11.449+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4806) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:10.441109+0000 osd.0 (osd.0) 4806 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:42.181369+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4807 sent 4806 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:11.449674+0000 osd.0 (osd.0) 4807 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2975> 2025-11-24T21:10:12.420+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4807) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:11.449674+0000 osd.0 (osd.0) 4807 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:43.181648+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4808 sent 4807 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:12.420872+0000 osd.0 (osd.0) 4808 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2964> 2025-11-24T21:10:13.441+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,6,2,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4808) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:12.420872+0000 osd.0 (osd.0) 4808 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:44.181898+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4809 sent 4808 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:13.442238+0000 osd.0 (osd.0) 4809 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2949> 2025-11-24T21:10:14.456+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4809) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:13.442238+0000 osd.0 (osd.0) 4809 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:45.182115+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4810 sent 4809 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:14.456847+0000 osd.0 (osd.0) 4810 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2938> 2025-11-24T21:10:15.440+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4810) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:14.456847+0000 osd.0 (osd.0) 4810 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:46.182384+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4811 sent 4810 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:15.440817+0000 osd.0 (osd.0) 4811 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2926> 2025-11-24T21:10:16.431+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4811) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:15.440817+0000 osd.0 (osd.0) 4811 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:47.182706+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4812 sent 4811 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:16.431915+0000 osd.0 (osd.0) 4812 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2915> 2025-11-24T21:10:17.453+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:48.182902+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4813 sent 4812 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:17.454289+0000 osd.0 (osd.0) 4813 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4812) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:16.431915+0000 osd.0 (osd.0) 4812 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2904> 2025-11-24T21:10:18.478+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:49.183068+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4814 sent 4813 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:18.478998+0000 osd.0 (osd.0) 4814 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4813) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:17.454289+0000 osd.0 (osd.0) 4813 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4814) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:18.478998+0000 osd.0 (osd.0) 4814 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2888> 2025-11-24T21:10:19.496+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:50.183310+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4815 sent 4814 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:19.496654+0000 osd.0 (osd.0) 4815 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4815) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:19.496654+0000 osd.0 (osd.0) 4815 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2877> 2025-11-24T21:10:20.528+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:51.183524+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4816 sent 4815 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:20.529755+0000 osd.0 (osd.0) 4816 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4816) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:20.529755+0000 osd.0 (osd.0) 4816 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2866> 2025-11-24T21:10:21.567+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:52.183840+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4817 sent 4816 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:21.568368+0000 osd.0 (osd.0) 4817 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2856> 2025-11-24T21:10:22.713+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4817) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:21.568368+0000 osd.0 (osd.0) 4817 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:53.184060+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4818 sent 4817 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:22.714040+0000 osd.0 (osd.0) 4818 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2845> 2025-11-24T21:10:23.715+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4818) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:22.714040+0000 osd.0 (osd.0) 4818 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:54.184284+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4819 sent 4818 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:23.715866+0000 osd.0 (osd.0) 4819 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2831> 2025-11-24T21:10:24.702+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4819) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:23.715866+0000 osd.0 (osd.0) 4819 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:55.184425+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4820 sent 4819 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:24.702766+0000 osd.0 (osd.0) 4820 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2819> 2025-11-24T21:10:25.653+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4820) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:24.702766+0000 osd.0 (osd.0) 4820 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:56.184602+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4821 sent 4820 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:25.654407+0000 osd.0 (osd.0) 4821 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2808> 2025-11-24T21:10:26.614+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4821) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:25.654407+0000 osd.0 (osd.0) 4821 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:57.184774+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4822 sent 4821 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:26.615524+0000 osd.0 (osd.0) 4822 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2797> 2025-11-24T21:10:27.579+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:58.184941+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4823 sent 4822 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:27.580133+0000 osd.0 (osd.0) 4823 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4822) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:26.615524+0000 osd.0 (osd.0) 4822 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2786> 2025-11-24T21:10:28.595+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:09:59.185172+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4824 sent 4823 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:28.596946+0000 osd.0 (osd.0) 4824 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4823) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:27.580133+0000 osd.0 (osd.0) 4823 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4824) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:28.596946+0000 osd.0 (osd.0) 4824 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2770> 2025-11-24T21:10:29.566+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:00.185430+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4825 sent 4824 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:29.568190+0000 osd.0 (osd.0) 4825 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4825) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:29.568190+0000 osd.0 (osd.0) 4825 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2758> 2025-11-24T21:10:30.546+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:01.185655+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4826 sent 4825 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:30.548153+0000 osd.0 (osd.0) 4826 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4826) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:30.548153+0000 osd.0 (osd.0) 4826 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2747> 2025-11-24T21:10:31.508+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:02.185856+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4827 sent 4826 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:31.509769+0000 osd.0 (osd.0) 4827 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2738> 2025-11-24T21:10:32.487+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4827) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:31.509769+0000 osd.0 (osd.0) 4827 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:03.186105+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4828 sent 4827 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:32.489071+0000 osd.0 (osd.0) 4828 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2727> 2025-11-24T21:10:33.476+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4828) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:32.489071+0000 osd.0 (osd.0) 4828 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:04.186374+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4829 sent 4828 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:33.477715+0000 osd.0 (osd.0) 4829 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2712> 2025-11-24T21:10:34.466+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4829) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:33.477715+0000 osd.0 (osd.0) 4829 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:05.186577+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4830 sent 4829 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:34.467754+0000 osd.0 (osd.0) 4830 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2701> 2025-11-24T21:10:35.475+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4830) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:34.467754+0000 osd.0 (osd.0) 4830 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:06.186898+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4831 sent 4830 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:35.476703+0000 osd.0 (osd.0) 4831 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2690> 2025-11-24T21:10:36.442+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4831) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:35.476703+0000 osd.0 (osd.0) 4831 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:07.187112+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4832 sent 4831 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:36.443564+0000 osd.0 (osd.0) 4832 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2679> 2025-11-24T21:10:37.453+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4832) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:36.443564+0000 osd.0 (osd.0) 4832 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:08.187332+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4833 sent 4832 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:37.455219+0000 osd.0 (osd.0) 4833 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2668> 2025-11-24T21:10:38.453+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:09.187526+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4834 sent 4833 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:38.454864+0000 osd.0 (osd.0) 4834 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4833) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:37.455219+0000 osd.0 (osd.0) 4833 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2653> 2025-11-24T21:10:39.430+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:10.187762+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4835 sent 4834 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:39.431456+0000 osd.0 (osd.0) 4835 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4834) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:38.454864+0000 osd.0 (osd.0) 4834 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4835) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:39.431456+0000 osd.0 (osd.0) 4835 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2640> 2025-11-24T21:10:40.454+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:11.188034+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4836 sent 4835 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:40.456067+0000 osd.0 (osd.0) 4836 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2630> 2025-11-24T21:10:41.408+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4836) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:40.456067+0000 osd.0 (osd.0) 4836 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:12.188273+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4837 sent 4836 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:41.409482+0000 osd.0 (osd.0) 4837 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2619> 2025-11-24T21:10:42.366+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4837) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:41.409482+0000 osd.0 (osd.0) 4837 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:13.188511+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4838 sent 4837 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:42.368065+0000 osd.0 (osd.0) 4838 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2608> 2025-11-24T21:10:43.331+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4838) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:42.368065+0000 osd.0 (osd.0) 4838 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:14.188784+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4839 sent 4838 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:43.332334+0000 osd.0 (osd.0) 4839 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,5,3,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2596> 2025-11-24T21:10:44.328+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4839) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:43.332334+0000 osd.0 (osd.0) 4839 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:15.188999+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4840 sent 4839 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:44.329397+0000 osd.0 (osd.0) 4840 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2582> 2025-11-24T21:10:45.360+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4840) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:44.329397+0000 osd.0 (osd.0) 4840 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:16.189268+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4841 sent 4840 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:45.360564+0000 osd.0 (osd.0) 4841 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2571> 2025-11-24T21:10:46.358+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:17.189487+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4842 sent 4841 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:46.358838+0000 osd.0 (osd.0) 4842 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4841) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:45.360564+0000 osd.0 (osd.0) 4841 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2559> 2025-11-24T21:10:47.313+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:18.189713+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4843 sent 4842 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:47.313896+0000 osd.0 (osd.0) 4843 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2550> 2025-11-24T21:10:48.268+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4842) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:46.358838+0000 osd.0 (osd.0) 4842 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4843) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:47.313896+0000 osd.0 (osd.0) 4843 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:19.189962+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4844 sent 4843 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:48.268403+0000 osd.0 (osd.0) 4844 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2537> 2025-11-24T21:10:49.256+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4844) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:48.268403+0000 osd.0 (osd.0) 4844 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:20.190193+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4845 sent 4844 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:49.257177+0000 osd.0 (osd.0) 4845 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2523> 2025-11-24T21:10:50.296+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4845) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:49.257177+0000 osd.0 (osd.0) 4845 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:21.190368+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4846 sent 4845 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:50.297241+0000 osd.0 (osd.0) 4846 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2511> 2025-11-24T21:10:51.289+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4846) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:50.297241+0000 osd.0 (osd.0) 4846 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:22.190676+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4847 sent 4846 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:51.290524+0000 osd.0 (osd.0) 4847 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2500> 2025-11-24T21:10:52.287+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4847) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:51.290524+0000 osd.0 (osd.0) 4847 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:23.190936+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4848 sent 4847 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:52.287877+0000 osd.0 (osd.0) 4848 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2489> 2025-11-24T21:10:53.291+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4848) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:52.287877+0000 osd.0 (osd.0) 4848 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:24.191183+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4849 sent 4848 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:53.292379+0000 osd.0 (osd.0) 4849 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2478> 2025-11-24T21:10:54.298+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4849) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:53.292379+0000 osd.0 (osd.0) 4849 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:25.191488+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4850 sent 4849 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:54.298791+0000 osd.0 (osd.0) 4850 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2463> 2025-11-24T21:10:55.326+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4850) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:54.298791+0000 osd.0 (osd.0) 4850 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:26.191820+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4851 sent 4850 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:55.327454+0000 osd.0 (osd.0) 4851 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2452> 2025-11-24T21:10:56.314+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4851) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:55.327454+0000 osd.0 (osd.0) 4851 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:27.192002+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4852 sent 4851 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:56.315408+0000 osd.0 (osd.0) 4852 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2441> 2025-11-24T21:10:57.315+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4852) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:56.315408+0000 osd.0 (osd.0) 4852 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:28.192220+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4853 sent 4852 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:57.316518+0000 osd.0 (osd.0) 4853 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2429> 2025-11-24T21:10:58.366+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4853) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:57.316518+0000 osd.0 (osd.0) 4853 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:29.192509+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4854 sent 4853 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:58.366809+0000 osd.0 (osd.0) 4854 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2417> 2025-11-24T21:10:59.341+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4854) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:58.366809+0000 osd.0 (osd.0) 4854 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:30.192821+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4855 sent 4854 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:10:59.341783+0000 osd.0 (osd.0) 4855 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2403> 2025-11-24T21:11:00.354+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4855) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:10:59.341783+0000 osd.0 (osd.0) 4855 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:31.193089+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4856 sent 4855 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:00.355274+0000 osd.0 (osd.0) 4856 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2392> 2025-11-24T21:11:01.393+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4856) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:00.355274+0000 osd.0 (osd.0) 4856 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:32.193335+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4857 sent 4856 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:01.394188+0000 osd.0 (osd.0) 4857 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2380> 2025-11-24T21:11:02.350+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4857) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:01.394188+0000 osd.0 (osd.0) 4857 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:33.193653+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4858 sent 4857 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:02.351119+0000 osd.0 (osd.0) 4858 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2369> 2025-11-24T21:11:03.312+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4858) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:02.351119+0000 osd.0 (osd.0) 4858 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:34.193925+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4859 sent 4858 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:03.313132+0000 osd.0 (osd.0) 4859 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2358> 2025-11-24T21:11:04.299+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4859) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:03.313132+0000 osd.0 (osd.0) 4859 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:35.194175+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4860 sent 4859 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:04.300468+0000 osd.0 (osd.0) 4860 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2344> 2025-11-24T21:11:05.328+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4860) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:04.300468+0000 osd.0 (osd.0) 4860 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:36.194405+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4861 sent 4860 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:05.329385+0000 osd.0 (osd.0) 4861 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2333> 2025-11-24T21:11:06.361+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4861) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:05.329385+0000 osd.0 (osd.0) 4861 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:37.194649+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4862 sent 4861 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:06.362715+0000 osd.0 (osd.0) 4862 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2321> 2025-11-24T21:11:07.410+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4862) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:06.362715+0000 osd.0 (osd.0) 4862 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:38.194905+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4863 sent 4862 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:07.411284+0000 osd.0 (osd.0) 4863 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2310> 2025-11-24T21:11:08.397+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4863) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:07.411284+0000 osd.0 (osd.0) 4863 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:39.195134+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4864 sent 4863 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:08.398978+0000 osd.0 (osd.0) 4864 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2299> 2025-11-24T21:11:09.393+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4864) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:08.398978+0000 osd.0 (osd.0) 4864 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:40.195398+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4865 sent 4864 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:09.394561+0000 osd.0 (osd.0) 4865 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2285> 2025-11-24T21:11:10.421+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4865) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:09.394561+0000 osd.0 (osd.0) 4865 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:41.195689+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4866 sent 4865 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:10.422196+0000 osd.0 (osd.0) 4866 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2274> 2025-11-24T21:11:11.385+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4866) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:10.422196+0000 osd.0 (osd.0) 4866 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:42.195879+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4867 sent 4866 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:11.386155+0000 osd.0 (osd.0) 4867 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2262> 2025-11-24T21:11:12.382+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4867) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:11.386155+0000 osd.0 (osd.0) 4867 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:43.196070+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4868 sent 4867 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:12.384545+0000 osd.0 (osd.0) 4868 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2251> 2025-11-24T21:11:13.375+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4868) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:12.384545+0000 osd.0 (osd.0) 4868 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:44.196282+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4869 sent 4868 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:13.376570+0000 osd.0 (osd.0) 4869 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2239> 2025-11-24T21:11:14.329+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4869) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:13.376570+0000 osd.0 (osd.0) 4869 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:45.196488+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4870 sent 4869 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:14.330859+0000 osd.0 (osd.0) 4870 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2225> 2025-11-24T21:11:15.370+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:46.196773+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4871 sent 4870 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:15.371362+0000 osd.0 (osd.0) 4871 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4870) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:14.330859+0000 osd.0 (osd.0) 4870 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2213> 2025-11-24T21:11:16.406+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:47.197007+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4872 sent 4871 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:16.408404+0000 osd.0 (osd.0) 4872 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2204> 2025-11-24T21:11:17.380+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4871) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:15.371362+0000 osd.0 (osd.0) 4871 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4872) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:16.408404+0000 osd.0 (osd.0) 4872 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:48.197264+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4873 sent 4872 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:17.381973+0000 osd.0 (osd.0) 4873 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2191> 2025-11-24T21:11:18.403+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4873) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:17.381973+0000 osd.0 (osd.0) 4873 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:49.197512+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4874 sent 4873 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:18.404523+0000 osd.0 (osd.0) 4874 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2180> 2025-11-24T21:11:19.416+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4874) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:18.404523+0000 osd.0 (osd.0) 4874 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:50.197774+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4875 sent 4874 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:19.418062+0000 osd.0 (osd.0) 4875 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2166> 2025-11-24T21:11:20.459+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4875) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:19.418062+0000 osd.0 (osd.0) 4875 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:51.197975+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4876 sent 4875 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:20.460886+0000 osd.0 (osd.0) 4876 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2154> 2025-11-24T21:11:21.429+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4876) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:20.460886+0000 osd.0 (osd.0) 4876 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:52.198189+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4877 sent 4876 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:21.430759+0000 osd.0 (osd.0) 4877 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2143> 2025-11-24T21:11:22.471+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4877) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:21.430759+0000 osd.0 (osd.0) 4877 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:53.198416+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4878 sent 4877 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:22.471426+0000 osd.0 (osd.0) 4878 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2132> 2025-11-24T21:11:23.481+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4878) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:22.471426+0000 osd.0 (osd.0) 4878 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:54.198638+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4879 sent 4878 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:23.482204+0000 osd.0 (osd.0) 4879 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2121> 2025-11-24T21:11:24.488+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4879) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:23.482204+0000 osd.0 (osd.0) 4879 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:55.198844+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4880 sent 4879 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:24.489074+0000 osd.0 (osd.0) 4880 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2107> 2025-11-24T21:11:25.479+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4880) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:24.489074+0000 osd.0 (osd.0) 4880 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:56.199038+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4881 sent 4880 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:25.480475+0000 osd.0 (osd.0) 4881 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2096> 2025-11-24T21:11:26.449+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4881) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:25.480475+0000 osd.0 (osd.0) 4881 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2195cba/0x22ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:57.199305+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4882 sent 4881 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:26.449519+0000 osd.0 (osd.0) 4882 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2084> 2025-11-24T21:11:27.433+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4882) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:26.449519+0000 osd.0 (osd.0) 4882 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:58.199542+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4883 sent 4882 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:27.434434+0000 osd.0 (osd.0) 4883 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2073> 2025-11-24T21:11:28.452+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4883) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:27.434434+0000 osd.0 (osd.0) 4883 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:10:59.199797+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4884 sent 4883 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:28.453286+0000 osd.0 (osd.0) 4884 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2062> 2025-11-24T21:11:29.499+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418098 data_alloc: 218103808 data_used: 397312
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4884) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:28.453286+0000 osd.0 (osd.0) 4884 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 28295168 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:00.200083+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4885 sent 4884 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:29.499977+0000 osd.0 (osd.0) 4885 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2048> 2025-11-24T21:11:30.484+0000 7f2ca3ee7640 -1 osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd5d5fc00
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 127.307907104s of 127.357864380s, submitted: 14
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4885) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:29.499977+0000 osd.0 (osd.0) 4885 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _renew_subs
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125476864 unmapped: 28278784 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 ms_handle_reset con 0x560fd5d5fc00 session 0x560fd30052c0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:01.200292+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4886 sent 4885 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:30.484926+0000 osd.0 (osd.0) 4886 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2031> 2025-11-24T21:11:31.504+0000 7f2ca3ee7640 -1 osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4886) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:30.484926+0000 osd.0 (osd.0) 4886 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 heartbeat osd_stat(store_statfs(0x4f9bbd000/0x0/0x4ffc00000, data 0x19978be/0x1aaf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 28237824 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:02.200534+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4887 sent 4886 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:31.504827+0000 osd.0 (osd.0) 4887 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2019> 2025-11-24T21:11:32.490+0000 7f2ca3ee7640 -1 osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 heartbeat osd_stat(store_statfs(0x4f9bbd000/0x0/0x4ffc00000, data 0x19978be/0x1aaf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4887) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:31.504827+0000 osd.0 (osd.0) 4887 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 28237824 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:03.201634+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4888 sent 4887 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:32.491285+0000 osd.0 (osd.0) 4888 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd5357c00
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -2006> 2025-11-24T21:11:33.445+0000 7f2ca3ee7640 -1 osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4888) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:32.491285+0000 osd.0 (osd.0) 4888 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _renew_subs
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 ms_handle_reset con 0x560fd5357c00 session 0x560fd3504000
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 28221440 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:04.201925+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4889 sent 4888 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:33.446533+0000 osd.0 (osd.0) 4889 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1991> 2025-11-24T21:11:34.495+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317383 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4889) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:33.446533+0000 osd.0 (osd.0) 4889 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 28221440 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:05.202202+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4890 sent 4889 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:34.496145+0000 osd.0 (osd.0) 4890 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1977> 2025-11-24T21:11:35.489+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4890) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:34.496145+0000 osd.0 (osd.0) 4890 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 28221440 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:06.202454+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4891 sent 4890 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:35.490306+0000 osd.0 (osd.0) 4891 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1966> 2025-11-24T21:11:36.487+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4891) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:35.490306+0000 osd.0 (osd.0) 4891 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 28221440 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:07.202656+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4892 sent 4891 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:36.488509+0000 osd.0 (osd.0) 4892 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 heartbeat osd_stat(store_statfs(0x4fa3bb000/0x0/0x4ffc00000, data 0x11994e5/0x12b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1954> 2025-11-24T21:11:37.476+0000 7f2ca3ee7640 -1 osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 heartbeat osd_stat(store_statfs(0x4fa3bb000/0x0/0x4ffc00000, data 0x11994e5/0x12b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4892) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:36.488509+0000 osd.0 (osd.0) 4892 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd5357c00
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126590976 unmapped: 27164672 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:08.202858+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4893 sent 4892 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:37.477221+0000 osd.0 (osd.0) 4893 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1940> 2025-11-24T21:11:38.523+0000 7f2ca3ee7640 -1 osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4893) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:37.477221+0000 osd.0 (osd.0) 4893 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126590976 unmapped: 27164672 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:09.203158+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4894 sent 4893 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:38.524087+0000 osd.0 (osd.0) 4894 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1929> 2025-11-24T21:11:39.522+0000 7f2ca3ee7640 -1 osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 195 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356833 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4894) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:38.524087+0000 osd.0 (osd.0) 4894 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _renew_subs
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 196 ms_handle_reset con 0x560fd5357c00 session 0x560fd1545680
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:10.203427+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4895 sent 4894 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:39.523030+0000 osd.0 (osd.0) 4895 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1911> 2025-11-24T21:11:40.569+0000 7f2ca3ee7640 -1 osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4895) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:39.523030+0000 osd.0 (osd.0) 4895 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:11.203694+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4896 sent 4895 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:40.570231+0000 osd.0 (osd.0) 4896 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1900> 2025-11-24T21:11:41.561+0000 7f2ca3ee7640 -1 osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4896) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:40.570231+0000 osd.0 (osd.0) 4896 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:12.203920+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4897 sent 4896 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:41.561726+0000 osd.0 (osd.0) 4897 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1889> 2025-11-24T21:11:42.552+0000 7f2ca3ee7640 -1 osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 196 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4897) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:41.561726+0000 osd.0 (osd.0) 4897 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 196 heartbeat osd_stat(store_statfs(0x4f9f43000/0x0/0x4ffc00000, data 0x160cbdc/0x172a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.006432533s of 12.493660927s, submitted: 70
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:13.204162+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4898 sent 4897 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:42.552725+0000 osd.0 (osd.0) 4898 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1875> 2025-11-24T21:11:43.503+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: get_auth_request con 0x560fd0a74400 auth_method 0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4898) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:42.552725+0000 osd.0 (osd.0) 4898 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:14.204387+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4899 sent 4898 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:43.503799+0000 osd.0 (osd.0) 4899 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1862> 2025-11-24T21:11:44.477+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4899) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:43.503799+0000 osd.0 (osd.0) 4899 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:15.204661+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4900 sent 4899 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:44.478339+0000 osd.0 (osd.0) 4900 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1848> 2025-11-24T21:11:45.449+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4900) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:44.478339+0000 osd.0 (osd.0) 4900 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:16.204884+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4901 sent 4900 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:45.450912+0000 osd.0 (osd.0) 4901 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1837> 2025-11-24T21:11:46.482+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4901) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:45.450912+0000 osd.0 (osd.0) 4901 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:17.205143+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4902 sent 4901 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:46.483464+0000 osd.0 (osd.0) 4902 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1826> 2025-11-24T21:11:47.443+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4902) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:46.483464+0000 osd.0 (osd.0) 4902 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:18.205367+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4903 sent 4902 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:47.444275+0000 osd.0 (osd.0) 4903 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1814> 2025-11-24T21:11:48.400+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4903) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:47.444275+0000 osd.0 (osd.0) 4903 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:19.205635+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4904 sent 4903 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:48.401511+0000 osd.0 (osd.0) 4904 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1803> 2025-11-24T21:11:49.414+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4904) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:48.401511+0000 osd.0 (osd.0) 4904 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:20.206095+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4905 sent 4904 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:49.416235+0000 osd.0 (osd.0) 4905 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1788> 2025-11-24T21:11:50.423+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4905) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:49.416235+0000 osd.0 (osd.0) 4905 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:21.206393+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4906 sent 4905 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:50.424148+0000 osd.0 (osd.0) 4906 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1776> 2025-11-24T21:11:51.404+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4906) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:50.424148+0000 osd.0 (osd.0) 4906 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:22.206637+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4907 sent 4906 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:51.405929+0000 osd.0 (osd.0) 4907 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1765> 2025-11-24T21:11:52.412+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4907) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:51.405929+0000 osd.0 (osd.0) 4907 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:23.206992+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4908 sent 4907 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:52.413252+0000 osd.0 (osd.0) 4908 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1754> 2025-11-24T21:11:53.433+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4908) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:52.413252+0000 osd.0 (osd.0) 4908 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:24.207346+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4909 sent 4908 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:53.435195+0000 osd.0 (osd.0) 4909 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1742> 2025-11-24T21:11:54.442+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4909) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:53.435195+0000 osd.0 (osd.0) 4909 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:25.207798+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4910 sent 4909 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:54.443427+0000 osd.0 (osd.0) 4910 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1728> 2025-11-24T21:11:55.473+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4910) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:54.443427+0000 osd.0 (osd.0) 4910 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:26.208231+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4911 sent 4910 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:55.475320+0000 osd.0 (osd.0) 4911 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1717> 2025-11-24T21:11:56.477+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4911) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:55.475320+0000 osd.0 (osd.0) 4911 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:27.208670+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4912 sent 4911 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:56.478656+0000 osd.0 (osd.0) 4912 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1706> 2025-11-24T21:11:57.527+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4912) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:56.478656+0000 osd.0 (osd.0) 4912 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:28.209298+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4913 sent 4912 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:57.528274+0000 osd.0 (osd.0) 4913 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1694> 2025-11-24T21:11:58.477+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4913) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:57.528274+0000 osd.0 (osd.0) 4913 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:29.209491+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4914 sent 4913 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:58.479261+0000 osd.0 (osd.0) 4914 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1683> 2025-11-24T21:11:59.507+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4914) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:58.479261+0000 osd.0 (osd.0) 4914 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:30.209800+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4915 sent 4914 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:11:59.508460+0000 osd.0 (osd.0) 4915 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1669> 2025-11-24T21:12:00.470+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4915) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:11:59.508460+0000 osd.0 (osd.0) 4915 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:31.210036+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4916 sent 4915 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:00.471388+0000 osd.0 (osd.0) 4916 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1658> 2025-11-24T21:12:01.511+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4916) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:00.471388+0000 osd.0 (osd.0) 4916 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:32.210256+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4917 sent 4916 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:01.512151+0000 osd.0 (osd.0) 4917 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1647> 2025-11-24T21:12:02.523+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4917) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:01.512151+0000 osd.0 (osd.0) 4917 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:33.231130+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4918 sent 4917 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:02.524890+0000 osd.0 (osd.0) 4918 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1635> 2025-11-24T21:12:03.497+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4918) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:02.524890+0000 osd.0 (osd.0) 4918 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:34.231617+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4919 sent 4918 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:03.498116+0000 osd.0 (osd.0) 4919 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1624> 2025-11-24T21:12:04.450+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4919) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:03.498116+0000 osd.0 (osd.0) 4919 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:35.231921+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4920 sent 4919 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:04.450645+0000 osd.0 (osd.0) 4920 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1609> 2025-11-24T21:12:05.411+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4920) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:04.450645+0000 osd.0 (osd.0) 4920 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:36.232258+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4921 sent 4920 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:05.411871+0000 osd.0 (osd.0) 4921 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1598> 2025-11-24T21:12:06.371+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4921) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:05.411871+0000 osd.0 (osd.0) 4921 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:37.232521+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4922 sent 4921 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:06.372202+0000 osd.0 (osd.0) 4922 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1586> 2025-11-24T21:12:07.368+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4922) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:06.372202+0000 osd.0 (osd.0) 4922 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:38.232769+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4923 sent 4922 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:07.369221+0000 osd.0 (osd.0) 4923 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1575> 2025-11-24T21:12:08.338+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4923) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:07.369221+0000 osd.0 (osd.0) 4923 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:39.232978+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4924 sent 4923 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:08.338664+0000 osd.0 (osd.0) 4924 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1564> 2025-11-24T21:12:09.335+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4924) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:08.338664+0000 osd.0 (osd.0) 4924 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:40.233518+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4925 sent 4924 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:09.335770+0000 osd.0 (osd.0) 4925 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1550> 2025-11-24T21:12:10.361+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:41.234015+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4926 sent 4925 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:10.362314+0000 osd.0 (osd.0) 4926 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4925) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:09.335770+0000 osd.0 (osd.0) 4925 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1539> 2025-11-24T21:12:11.397+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:42.234242+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4927 sent 4926 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:11.398849+0000 osd.0 (osd.0) 4927 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4926) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:10.362314+0000 osd.0 (osd.0) 4926 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4927) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:11.398849+0000 osd.0 (osd.0) 4927 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1525> 2025-11-24T21:12:12.418+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:43.234461+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4928 sent 4927 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:12.419271+0000 osd.0 (osd.0) 4928 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4928) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:12.419271+0000 osd.0 (osd.0) 4928 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1514> 2025-11-24T21:12:13.400+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:44.234707+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4929 sent 4928 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:13.401284+0000 osd.0 (osd.0) 4929 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4929) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:13.401284+0000 osd.0 (osd.0) 4929 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1503> 2025-11-24T21:12:14.434+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:45.234925+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4930 sent 4929 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:14.435845+0000 osd.0 (osd.0) 4930 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4930) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:14.435845+0000 osd.0 (osd.0) 4930 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1489> 2025-11-24T21:12:15.398+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:46.235107+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4931 sent 4930 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:15.399694+0000 osd.0 (osd.0) 4931 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4931) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:15.399694+0000 osd.0 (osd.0) 4931 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1477> 2025-11-24T21:12:16.371+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:47.235359+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4932 sent 4931 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:16.372528+0000 osd.0 (osd.0) 4932 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4932) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:16.372528+0000 osd.0 (osd.0) 4932 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1466> 2025-11-24T21:12:17.408+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:48.235647+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4933 sent 4932 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:17.409187+0000 osd.0 (osd.0) 4933 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4933) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:17.409187+0000 osd.0 (osd.0) 4933 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1455> 2025-11-24T21:12:18.428+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:49.235904+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4934 sent 4933 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:18.429243+0000 osd.0 (osd.0) 4934 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4934) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:18.429243+0000 osd.0 (osd.0) 4934 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1444> 2025-11-24T21:12:19.410+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:50.236212+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4935 sent 4934 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:19.410750+0000 osd.0 (osd.0) 4935 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4935) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:19.410750+0000 osd.0 (osd.0) 4935 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1430> 2025-11-24T21:12:20.456+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:51.236569+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4936 sent 4935 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:20.457209+0000 osd.0 (osd.0) 4936 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4936) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:20.457209+0000 osd.0 (osd.0) 4936 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1418> 2025-11-24T21:12:21.472+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:52.274188+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4937 sent 4936 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:21.473281+0000 osd.0 (osd.0) 4937 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1409> 2025-11-24T21:12:22.474+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4937) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:21.473281+0000 osd.0 (osd.0) 4937 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:53.274387+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4938 sent 4937 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:22.475832+0000 osd.0 (osd.0) 4938 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1398> 2025-11-24T21:12:23.464+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4938) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:22.475832+0000 osd.0 (osd.0) 4938 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:54.274669+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4939 sent 4938 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:23.465902+0000 osd.0 (osd.0) 4939 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1387> 2025-11-24T21:12:24.491+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4939) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:23.465902+0000 osd.0 (osd.0) 4939 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:55.274877+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4940 sent 4939 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:24.492439+0000 osd.0 (osd.0) 4940 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1372> 2025-11-24T21:12:25.502+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4940) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:24.492439+0000 osd.0 (osd.0) 4940 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:56.275242+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4941 sent 4940 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:25.503472+0000 osd.0 (osd.0) 4941 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1360> 2025-11-24T21:12:26.526+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4941) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:25.503472+0000 osd.0 (osd.0) 4941 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:57.275672+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4942 sent 4941 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:26.527856+0000 osd.0 (osd.0) 4942 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1348> 2025-11-24T21:12:27.529+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4942) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:26.527856+0000 osd.0 (osd.0) 4942 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:58.276048+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4943 sent 4942 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:27.530033+0000 osd.0 (osd.0) 4943 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1337> 2025-11-24T21:12:28.497+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4943) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:27.530033+0000 osd.0 (osd.0) 4943 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:11:59.276408+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4944 sent 4943 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:28.498455+0000 osd.0 (osd.0) 4944 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1325> 2025-11-24T21:12:29.464+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4944) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:28.498455+0000 osd.0 (osd.0) 4944 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:00.276687+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4945 sent 4944 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:29.465872+0000 osd.0 (osd.0) 4945 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1310> 2025-11-24T21:12:30.415+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4945) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:29.465872+0000 osd.0 (osd.0) 4945 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:01.276916+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4946 sent 4945 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:30.416743+0000 osd.0 (osd.0) 4946 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1300> 2025-11-24T21:12:31.373+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4946) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:30.416743+0000 osd.0 (osd.0) 4946 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:02.277245+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4947 sent 4946 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:31.374720+0000 osd.0 (osd.0) 4947 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1289> 2025-11-24T21:12:32.386+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4947) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:31.374720+0000 osd.0 (osd.0) 4947 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:03.277527+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4948 sent 4947 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:32.388129+0000 osd.0 (osd.0) 4948 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1278> 2025-11-24T21:12:33.361+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4948) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:32.388129+0000 osd.0 (osd.0) 4948 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:04.277944+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4949 sent 4948 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:33.362809+0000 osd.0 (osd.0) 4949 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1267> 2025-11-24T21:12:34.315+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 27140096 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4949) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:33.362809+0000 osd.0 (osd.0) 4949 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362037 data_alloc: 218103808 data_used: 405504
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd0e3d800
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.773933411s of 51.784355164s, submitted: 9
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:05.278176+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4950 sent 4949 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:34.317141+0000 osd.0 (osd.0) 4950 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1251> 2025-11-24T21:12:35.309+0000 7f2ca3ee7640 -1 osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f9f41000/0x0/0x4ffc00000, data 0x160e695/0x172d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 27131904 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4950) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:34.317141+0000 osd.0 (osd.0) 4950 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _renew_subs
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 ms_handle_reset con 0x560fd0e3d800 session 0x560fd2090780
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:06.278412+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4951 sent 4950 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:35.311042+0000 osd.0 (osd.0) 4951 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1235> 2025-11-24T21:12:36.291+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4951) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:35.311042+0000 osd.0 (osd.0) 4951 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:07.278763+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4952 sent 4951 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:36.292205+0000 osd.0 (osd.0) 4952 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1224> 2025-11-24T21:12:37.286+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4952) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:36.292205+0000 osd.0 (osd.0) 4952 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1219> 2025-11-24T21:12:38.244+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:08.278964+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4954 sent 4952 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:37.286495+0000 osd.0 (osd.0) 4953 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:38.245183+0000 osd.0 (osd.0) 4954 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4954) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:37.286495+0000 osd.0 (osd.0) 4953 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:38.245183+0000 osd.0 (osd.0) 4954 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1206> 2025-11-24T21:12:39.274+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:09.279148+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4955 sent 4954 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:39.275164+0000 osd.0 (osd.0) 4955 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4955) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:39.275164+0000 osd.0 (osd.0) 4955 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333642 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 heartbeat osd_stat(store_statfs(0x4fa3ae000/0x0/0x4ffc00000, data 0x11a0289/0x12be000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:10.279488+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1188> 2025-11-24T21:12:40.302+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:11.279669+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4956 sent 4955 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:40.302788+0000 osd.0 (osd.0) 4956 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1179> 2025-11-24T21:12:41.298+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4956) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:40.302788+0000 osd.0 (osd.0) 4956 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:12.279860+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4957 sent 4956 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:41.299113+0000 osd.0 (osd.0) 4957 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1168> 2025-11-24T21:12:42.319+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4957) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:41.299113+0000 osd.0 (osd.0) 4957 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:13.280027+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4958 sent 4957 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:42.319937+0000 osd.0 (osd.0) 4958 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1157> 2025-11-24T21:12:43.322+0000 7f2ca3ee7640 -1 osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4958) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:42.319937+0000 osd.0 (osd.0) 4958 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:14.280247+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4959 sent 4958 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:43.323284+0000 osd.0 (osd.0) 4959 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1145> 2025-11-24T21:12:44.329+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4959) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:43.323284+0000 osd.0 (osd.0) 4959 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:15.280514+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4960 sent 4959 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:44.330571+0000 osd.0 (osd.0) 4960 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1130> 2025-11-24T21:12:45.337+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4960) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:44.330571+0000 osd.0 (osd.0) 4960 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:16.280674+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4961 sent 4960 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:45.338132+0000 osd.0 (osd.0) 4961 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1119> 2025-11-24T21:12:46.313+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4961) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:45.338132+0000 osd.0 (osd.0) 4961 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:17.280871+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4962 sent 4961 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:46.313752+0000 osd.0 (osd.0) 4962 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1108> 2025-11-24T21:12:47.349+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4962) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:46.313752+0000 osd.0 (osd.0) 4962 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:18.281062+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4963 sent 4962 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:47.349982+0000 osd.0 (osd.0) 4963 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1097> 2025-11-24T21:12:48.340+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4963) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:47.349982+0000 osd.0 (osd.0) 4963 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:19.281281+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4964 sent 4963 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:48.340916+0000 osd.0 (osd.0) 4964 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1086> 2025-11-24T21:12:49.379+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:20.281526+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4965 sent 4964 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:49.379905+0000 osd.0 (osd.0) 4965 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4964) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:48.340916+0000 osd.0 (osd.0) 4964 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1072> 2025-11-24T21:12:50.399+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:21.281746+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4966 sent 4965 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:50.400332+0000 osd.0 (osd.0) 4966 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4965) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:49.379905+0000 osd.0 (osd.0) 4965 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4966) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:50.400332+0000 osd.0 (osd.0) 4966 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1058> 2025-11-24T21:12:51.393+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:22.281946+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4967 sent 4966 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:51.393945+0000 osd.0 (osd.0) 4967 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4967) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:51.393945+0000 osd.0 (osd.0) 4967 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1047> 2025-11-24T21:12:52.366+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:23.282167+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4968 sent 4967 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:52.366778+0000 osd.0 (osd.0) 4968 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4968) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:52.366778+0000 osd.0 (osd.0) 4968 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1036> 2025-11-24T21:12:53.337+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:24.282401+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4969 sent 4968 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:53.337703+0000 osd.0 (osd.0) 4969 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4969) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:53.337703+0000 osd.0 (osd.0) 4969 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1025> 2025-11-24T21:12:54.342+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:25.282657+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4970 sent 4969 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:54.343663+0000 osd.0 (osd.0) 4970 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4970) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:54.343663+0000 osd.0 (osd.0) 4970 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1011> 2025-11-24T21:12:55.362+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:26.282902+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4971 sent 4970 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:55.363194+0000 osd.0 (osd.0) 4971 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:  -1001> 2025-11-24T21:12:56.341+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4971) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:55.363194+0000 osd.0 (osd.0) 4971 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:27.283155+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4972 sent 4971 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:56.342017+0000 osd.0 (osd.0) 4972 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -990> 2025-11-24T21:12:57.294+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4972) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:56.342017+0000 osd.0 (osd.0) 4972 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -985> 2025-11-24T21:12:58.275+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:28.283353+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4974 sent 4972 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:57.295533+0000 osd.0 (osd.0) 4973 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:58.276452+0000 osd.0 (osd.0) 4974 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4974) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:57.295533+0000 osd.0 (osd.0) 4973 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:58.276452+0000 osd.0 (osd.0) 4974 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:29.283661+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -969> 2025-11-24T21:12:59.288+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:30.283866+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4975 sent 4974 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:12:59.289312+0000 osd.0 (osd.0) 4975 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -957> 2025-11-24T21:13:00.282+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4975) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:12:59.289312+0000 osd.0 (osd.0) 4975 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -951> 2025-11-24T21:13:01.253+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:31.284068+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4977 sent 4975 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:00.284270+0000 osd.0 (osd.0) 4976 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:01.254958+0000 osd.0 (osd.0) 4977 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4977) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:00.284270+0000 osd.0 (osd.0) 4976 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:01.254958+0000 osd.0 (osd.0) 4977 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -938> 2025-11-24T21:13:02.213+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:32.284322+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4978 sent 4977 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:02.214879+0000 osd.0 (osd.0) 4978 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4978) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:02.214879+0000 osd.0 (osd.0) 4978 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -927> 2025-11-24T21:13:03.258+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:33.284554+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4979 sent 4978 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:03.259934+0000 osd.0 (osd.0) 4979 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4979) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:03.259934+0000 osd.0 (osd.0) 4979 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:34.284873+0000)
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -913> 2025-11-24T21:13:04.295+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -907> 2025-11-24T21:13:05.270+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:35.285073+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 4981 sent 4979 num 2 unsent 2 sending 2
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:04.296955+0000 osd.0 (osd.0) 4980 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:05.272000+0000 osd.0 (osd.0) 4981 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4981) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:04.296955+0000 osd.0 (osd.0) 4980 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:05.272000+0000 osd.0 (osd.0) 4981 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -894> 2025-11-24T21:13:06.260+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:36.285318+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4982 sent 4981 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:06.261426+0000 osd.0 (osd.0) 4982 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4982) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:06.261426+0000 osd.0 (osd.0) 4982 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -882> 2025-11-24T21:13:07.256+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:37.285558+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4983 sent 4982 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:07.257734+0000 osd.0 (osd.0) 4983 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4983) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:07.257734+0000 osd.0 (osd.0) 4983 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -871> 2025-11-24T21:13:08.223+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:38.285845+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4984 sent 4983 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:08.225029+0000 osd.0 (osd.0) 4984 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4984) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:08.225029+0000 osd.0 (osd.0) 4984 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -859> 2025-11-24T21:13:09.187+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:39.290224+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4985 sent 4984 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:09.188996+0000 osd.0 (osd.0) 4985 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4985) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:09.188996+0000 osd.0 (osd.0) 4985 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -845> 2025-11-24T21:13:10.168+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:40.290497+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4986 sent 4985 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:10.169962+0000 osd.0 (osd.0) 4986 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4986) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:10.169962+0000 osd.0 (osd.0) 4986 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -833> 2025-11-24T21:13:11.153+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:41.290772+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4987 sent 4986 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:11.154981+0000 osd.0 (osd.0) 4987 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 27017216 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4987) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:11.154981+0000 osd.0 (osd.0) 4987 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -822> 2025-11-24T21:13:12.158+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 24 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:42.291105+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4988 sent 4987 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:12.159780+0000 osd.0 (osd.0) 4988 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 ms_handle_reset con 0x560fd0e3c000 session 0x560fd2ddf680
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: handle_auth_request added challenge on 0x560fd0e3dc00
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4988) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:12.159780+0000 osd.0 (osd.0) 4988 : cluster [WRN] 24 slow requests (by type [ 'delayed' : 24 ] most affected pool [ 'vms' : 24 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -809> 2025-11-24T21:13:13.121+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:43.291382+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4989 sent 4988 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:13.122537+0000 osd.0 (osd.0) 4989 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4989) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:13.122537+0000 osd.0 (osd.0) 4989 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -797> 2025-11-24T21:13:14.133+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:44.291748+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4990 sent 4989 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:14.135055+0000 osd.0 (osd.0) 4990 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4990) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:14.135055+0000 osd.0 (osd.0) 4990 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -783> 2025-11-24T21:13:15.085+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:45.292049+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4991 sent 4990 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:15.085958+0000 osd.0 (osd.0) 4991 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4991) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:15.085958+0000 osd.0 (osd.0) 4991 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -771> 2025-11-24T21:13:16.124+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:46.292279+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4992 sent 4991 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:16.124883+0000 osd.0 (osd.0) 4992 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4992) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:16.124883+0000 osd.0 (osd.0) 4992 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -759> 2025-11-24T21:13:17.078+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:47.292660+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4993 sent 4992 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:17.078748+0000 osd.0 (osd.0) 4993 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4993) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:17.078748+0000 osd.0 (osd.0) 4993 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -748> 2025-11-24T21:13:18.102+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:48.292955+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4994 sent 4993 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:18.103132+0000 osd.0 (osd.0) 4994 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4994) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:18.103132+0000 osd.0 (osd.0) 4994 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -737> 2025-11-24T21:13:19.100+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:49.293204+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4995 sent 4994 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:19.100667+0000 osd.0 (osd.0) 4995 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4995) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:19.100667+0000 osd.0 (osd.0) 4995 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -723> 2025-11-24T21:13:20.139+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:50.293435+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4996 sent 4995 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:20.139679+0000 osd.0 (osd.0) 4996 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4996) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:20.139679+0000 osd.0 (osd.0) 4996 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -711> 2025-11-24T21:13:21.176+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:51.293670+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4997 sent 4996 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:21.177076+0000 osd.0 (osd.0) 4997 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -702> 2025-11-24T21:13:22.128+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4997) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:21.177076+0000 osd.0 (osd.0) 4997 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:52.293903+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4998 sent 4997 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:22.128968+0000 osd.0 (osd.0) 4998 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -691> 2025-11-24T21:13:23.162+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4998) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:22.128968+0000 osd.0 (osd.0) 4998 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:53.294128+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 4999 sent 4998 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:23.163103+0000 osd.0 (osd.0) 4999 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -680> 2025-11-24T21:13:24.140+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 4999) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:23.163103+0000 osd.0 (osd.0) 4999 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:54.294411+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5000 sent 4999 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:24.141467+0000 osd.0 (osd.0) 5000 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -666> 2025-11-24T21:13:25.129+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5000) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:24.141467+0000 osd.0 (osd.0) 5000 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:55.294670+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5001 sent 5000 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:25.130453+0000 osd.0 (osd.0) 5001 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -655> 2025-11-24T21:13:26.129+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5001) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:25.130453+0000 osd.0 (osd.0) 5001 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:56.294867+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5002 sent 5001 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:26.130145+0000 osd.0 (osd.0) 5002 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -643> 2025-11-24T21:13:27.096+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5002) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:26.130145+0000 osd.0 (osd.0) 5002 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:57.295080+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5003 sent 5002 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:27.097083+0000 osd.0 (osd.0) 5003 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -632> 2025-11-24T21:13:28.093+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5003) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:27.097083+0000 osd.0 (osd.0) 5003 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:58.295305+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5004 sent 5003 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:28.094022+0000 osd.0 (osd.0) 5004 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -621> 2025-11-24T21:13:29.135+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5004) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:28.094022+0000 osd.0 (osd.0) 5004 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:12:59.295568+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5005 sent 5004 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:29.135691+0000 osd.0 (osd.0) 5005 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -606> 2025-11-24T21:13:30.107+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5005) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:29.135691+0000 osd.0 (osd.0) 5005 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:00.295876+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5006 sent 5005 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:30.107781+0000 osd.0 (osd.0) 5006 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -595> 2025-11-24T21:13:31.062+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5006) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:30.107781+0000 osd.0 (osd.0) 5006 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:01.296036+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5007 sent 5006 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:31.063358+0000 osd.0 (osd.0) 5007 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -584> 2025-11-24T21:13:32.016+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5007) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:31.063358+0000 osd.0 (osd.0) 5007 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:02.296204+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5008 sent 5007 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:32.017600+0000 osd.0 (osd.0) 5008 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -573> 2025-11-24T21:13:32.976+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:03.296402+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5009 sent 5008 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:32.977346+0000 osd.0 (osd.0) 5009 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5008) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:32.017600+0000 osd.0 (osd.0) 5008 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -562> 2025-11-24T21:13:33.932+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:04.296649+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5010 sent 5009 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:33.932701+0000 osd.0 (osd.0) 5010 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5009) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:32.977346+0000 osd.0 (osd.0) 5009 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5010) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:33.932701+0000 osd.0 (osd.0) 5010 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -548> 2025-11-24T21:13:34.936+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:05.296799+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5011 sent 5010 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:34.937369+0000 osd.0 (osd.0) 5011 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5011) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:34.937369+0000 osd.0 (osd.0) 5011 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -534> 2025-11-24T21:13:35.942+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:06.296946+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5012 sent 5011 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:35.943047+0000 osd.0 (osd.0) 5012 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5012) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:35.943047+0000 osd.0 (osd.0) 5012 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -523> 2025-11-24T21:13:36.989+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:07.297144+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5013 sent 5012 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:36.991299+0000 osd.0 (osd.0) 5013 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5013) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:36.991299+0000 osd.0 (osd.0) 5013 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -512> 2025-11-24T21:13:37.947+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:08.297327+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5014 sent 5013 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:37.948188+0000 osd.0 (osd.0) 5014 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5014) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:37.948188+0000 osd.0 (osd.0) 5014 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -501> 2025-11-24T21:13:38.941+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:09.297504+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5015 sent 5014 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:38.942366+0000 osd.0 (osd.0) 5015 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5015) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:38.942366+0000 osd.0 (osd.0) 5015 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -490> 2025-11-24T21:13:39.943+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:10.297983+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5016 sent 5015 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:39.944563+0000 osd.0 (osd.0) 5016 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5016) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:39.944563+0000 osd.0 (osd.0) 5016 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -475> 2025-11-24T21:13:40.933+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:11.298228+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5017 sent 5016 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:40.933983+0000 osd.0 (osd.0) 5017 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5017) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:40.933983+0000 osd.0 (osd.0) 5017 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -464> 2025-11-24T21:13:41.972+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:12.298433+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5018 sent 5017 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:41.973831+0000 osd.0 (osd.0) 5018 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -455> 2025-11-24T21:13:42.930+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5018) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:41.973831+0000 osd.0 (osd.0) 5018 : cluster [WRN] 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 8 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:13.298693+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5019 sent 5018 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:42.931964+0000 osd.0 (osd.0) 5019 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -444> 2025-11-24T21:13:43.928+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5019) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:42.931964+0000 osd.0 (osd.0) 5019 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:14.298829+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5020 sent 5019 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:43.929171+0000 osd.0 (osd.0) 5020 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -433> 2025-11-24T21:13:44.932+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5020) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:43.929171+0000 osd.0 (osd.0) 5020 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:15.298985+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5021 sent 5020 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:44.933327+0000 osd.0 (osd.0) 5021 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -418> 2025-11-24T21:13:45.972+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5021) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:44.933327+0000 osd.0 (osd.0) 5021 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:16.299132+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5022 sent 5021 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:45.974081+0000 osd.0 (osd.0) 5022 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -406> 2025-11-24T21:13:47.016+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:17.299353+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5023 sent 5022 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:47.017973+0000 osd.0 (osd.0) 5023 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5022) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:45.974081+0000 osd.0 (osd.0) 5022 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 23871488 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -395> 2025-11-24T21:13:48.040+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:18.299552+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5024 sent 5023 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:48.042062+0000 osd.0 (osd.0) 5024 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5023) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:47.017973+0000 osd.0 (osd.0) 5023 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5024) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:48.042062+0000 osd.0 (osd.0) 5024 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -382> 2025-11-24T21:13:49.050+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:19.299759+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5025 sent 5024 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:49.052353+0000 osd.0 (osd.0) 5025 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5025) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:49.052353+0000 osd.0 (osd.0) 5025 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -371> 2025-11-24T21:13:50.046+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:20.299971+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5026 sent 5025 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:50.047805+0000 osd.0 (osd.0) 5026 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5026) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:50.047805+0000 osd.0 (osd.0) 5026 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -356> 2025-11-24T21:13:50.997+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:21.300181+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5027 sent 5026 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:50.999612+0000 osd.0 (osd.0) 5027 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5027) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:50.999612+0000 osd.0 (osd.0) 5027 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -345> 2025-11-24T21:13:52.027+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:22.300382+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5028 sent 5027 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:52.029201+0000 osd.0 (osd.0) 5028 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5028) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:52.029201+0000 osd.0 (osd.0) 5028 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -334> 2025-11-24T21:13:53.022+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:23.300555+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5029 sent 5028 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:53.022893+0000 osd.0 (osd.0) 5029 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5029) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:53.022893+0000 osd.0 (osd.0) 5029 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -322> 2025-11-24T21:13:54.030+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:24.300741+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5030 sent 5029 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:54.031166+0000 osd.0 (osd.0) 5030 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5030) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:54.031166+0000 osd.0 (osd.0) 5030 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -310> 2025-11-24T21:13:55.039+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:25.300936+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5031 sent 5030 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:55.039896+0000 osd.0 (osd.0) 5031 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5031) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:55.039896+0000 osd.0 (osd.0) 5031 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -296> 2025-11-24T21:13:56.056+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:26.301130+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5032 sent 5031 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:56.057284+0000 osd.0 (osd.0) 5032 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5032) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:56.057284+0000 osd.0 (osd.0) 5032 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -285> 2025-11-24T21:13:57.058+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:27.301390+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5033 sent 5032 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:57.059268+0000 osd.0 (osd.0) 5033 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5033) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:57.059268+0000 osd.0 (osd.0) 5033 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -274> 2025-11-24T21:13:58.088+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:28.301614+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5034 sent 5033 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:58.089553+0000 osd.0 (osd.0) 5034 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5034) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:58.089553+0000 osd.0 (osd.0) 5034 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -263> 2025-11-24T21:13:59.125+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:29.301795+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5035 sent 5034 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:13:59.126472+0000 osd.0 (osd.0) 5035 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5035) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:13:59.126472+0000 osd.0 (osd.0) 5035 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
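[annotation] The periodic heartbeat line doubles as a capacity report. Reading store_statfs as available/reserved/total followed by stored/allocated (my reading of the BlueStore dump format), the hex fields line up with the pgmap totals later in this window (269 MiB used, 60 GiB / 60 GiB avail across three roughly 20 GiB OSDs); peers [1,2] are the heartbeat partners, and op hist looks like a bucketed count of recent ops. A quick conversion:

    # Convert the store_statfs hex fields into human units. Field order
    # (available/reserved/total, then stored/allocated) is my reading of the
    # BlueStore dump, cross-checked against the pgmap totals.
    GiB, MiB = 1 << 30, 1 << 20
    avail, reserved, total = 0x4fa3ac000, 0x0, 0x4ffc00000
    stored, allocated = 0x11a1d42, 0x12c1000
    print(f"total {total / GiB:.2f} GiB, avail {avail / GiB:.2f} GiB, "
          f"used {(total - avail) / MiB:.1f} MiB")
    print(f"data stored {stored / MiB:.2f} MiB, allocated {allocated / MiB:.2f} MiB")
    # total 20.00 GiB, avail 19.91 GiB, used 88.3 MiB
    # data stored 17.63 MiB, allocated 18.75 MiB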
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -251> 2025-11-24T21:14:00.083+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:30.302020+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5036 sent 5035 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:00.083800+0000 osd.0 (osd.0) 5036 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5036) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:00.083800+0000 osd.0 (osd.0) 5036 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
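[annotation] The tune_memory and _resize_shards lines are the BlueStore cache autotuner at work: the target is the 4 GiB osd_memory_target (4294967296), the process maps only about 124 MiB, and the tuner keeps assigning the caches the same roughly 2.65 GiB budget (old mem equals new mem), which the mempool thread splits across the kv, onode, meta, and data shards; the two rocksdb lines are the high-priority pool ratios being recomputed on each pass. The split, reproduced from the numbers above:

    # Reproduce the shard split printed by _resize_shards as fractions of the
    # tuned cache_size; all numbers are copied from the lines above.
    cache_size = 2845415832
    alloc = {"kv": 1207959552, "kv_onode": 234881024,
             "meta": 1140850688, "data": 218103808}
    for name, nbytes in alloc.items():
        print(f"{name:8s} {nbytes / (1 << 20):6.0f} MiB  {nbytes / cache_size:5.1%}")
    print(f"sum = {sum(alloc.values()) / cache_size:.1%} of cache_size")
    # kv         1152 MiB  42.5%
    # kv_onode    224 MiB   8.3%
    # meta       1088 MiB  40.1%
    # data        208 MiB   7.7%
    # sum = 98.5% of cache_size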
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -237> 2025-11-24T21:14:01.087+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:31.302198+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5037 sent 5036 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:01.087917+0000 osd.0 (osd.0) 5037 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5037) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:01.087917+0000 osd.0 (osd.0) 5037 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -226> 2025-11-24T21:14:02.100+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:32.302407+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5038 sent 5037 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:02.101384+0000 osd.0 (osd.0) 5038 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5038) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:02.101384+0000 osd.0 (osd.0) 5038 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
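[annotation] Each [WRN] also travels to the monitor through log_client, and the bookkeeping fields show the protocol: every entry takes the next cluster-log sequence number (5034, 5035, and so on), sent trails last_log by the unacked entries, _send_mon_message ships the batch to mon.compute-0 on the v2 port, and handle_log_ack(last N) retires everything up to N. A toy model of that cycle, a simplification rather than Ceph's actual code:

    from collections import deque

    # Toy model (a simplification, not Ceph source) of the log_client fields:
    # entries take increasing seqs, "sent" trails "last_log", and an ack for
    # seq N retires everything <= N.
    class LogClient:
        def __init__(self, seq):
            self.queue = deque()        # (seq, message)
            self.last_log = seq         # last sequence number assigned
            self.sent = seq             # highest sequence handed to the mon

        def log(self, msg):
            self.last_log += 1
            self.queue.append((self.last_log, msg))

        def send(self):
            for seq, msg in self.queue:
                if seq > self.sent:
                    print(f"will send {seq} : cluster [WRN] {msg}")
                    self.sent = seq

        def handle_ack(self, last):
            while self.queue and self.queue[0][0] <= last:
                seq, msg = self.queue.popleft()
                print(f"logged {seq} : cluster [WRN] {msg}")

    lc = LogClient(5033)
    lc.log("27 slow requests")
    lc.send()            # will send 5034 : cluster [WRN] 27 slow requests
    lc.handle_ack(5034)  # logged 5034 : cluster [WRN] 27 slow requests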
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -215> 2025-11-24T21:14:03.133+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:33.302679+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5039 sent 5038 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:03.133914+0000 osd.0 (osd.0) 5039 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5039) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:03.133914+0000 osd.0 (osd.0) 5039 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
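[annotation] The once-per-second monclient tick is visible too: _check_auth_rotating keeps reporting up-to-date secrets, and the printed "expire after" stamp advances one second per tick while staying about 30 seconds behind the wall clock. My reading is that it is a cutoff (roughly now minus a 30 second grace period) rather than a real key expiry, so rotating secrets still valid past the cutoff count as up to date:

    from datetime import datetime

    # The cutoff trails the tick by ~30 s (timestamps copied from lines above).
    tick = datetime.fromisoformat("2025-11-24T21:13:58.088000+00:00")
    cutoff = datetime.fromisoformat("2025-11-24T21:13:28.301614+00:00")
    print(tick - cutoff)   # 0:00:29.786386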
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -204> 2025-11-24T21:14:04.134+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:34.302862+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5040 sent 5039 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:04.135240+0000 osd.0 (osd.0) 5040 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5040) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:04.135240+0000 osd.0 (osd.0) 5040 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -191> 2025-11-24T21:14:05.169+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:35.303042+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5041 sent 5040 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:05.169917+0000 osd.0 (osd.0) 5041 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5041) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:05.169917+0000 osd.0 (osd.0) 5041 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -176> 2025-11-24T21:14:06.128+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:36.303233+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5042 sent 5041 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:06.129503+0000 osd.0 (osd.0) 5042 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5042) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:06.129503+0000 osd.0 (osd.0) 5042 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -165> 2025-11-24T21:14:07.154+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:37.303437+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5043 sent 5042 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:07.155372+0000 osd.0 (osd.0) 5043 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5043) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:07.155372+0000 osd.0 (osd.0) 5043 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -154> 2025-11-24T21:14:08.180+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:38.303649+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5044 sent 5043 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:08.181223+0000 osd.0 (osd.0) 5044 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5044) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:08.181223+0000 osd.0 (osd.0) 5044 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -142> 2025-11-24T21:14:09.185+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:39.303843+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5045 sent 5044 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:09.186138+0000 osd.0 (osd.0) 5045 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5045) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:09.186138+0000 osd.0 (osd.0) 5045 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -131> 2025-11-24T21:14:10.163+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:40.304020+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5046 sent 5045 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:10.164612+0000 osd.0 (osd.0) 5046 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
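[annotation] The op everything keeps pointing back to is a tiny omap read of rbd_trash_purge_schedule:head in the vms pool, plausibly the mgr rbd_support module polling its trash-purge schedule, and it stays stuck through this whole window. For reference, a hedged python-rados sketch of the same kind of read; the pool and object names come from the log, while the conffile path and credentials are assumptions:

    import rados

    # Hedged sketch: the same kind of omap-get-vals read that is blocked above,
    # via the python-rados bindings. Pool ("vms") and object name come from the
    # log; /etc/ceph/ceph.conf and client permissions are assumptions.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rados.ReadOpCtx() as op:
            # start_after="", filter_prefix="", max_return=16
            vals, ret = ioctx.get_omap_vals(op, "", "", 16)
            ioctx.operate_read_op(op, "rbd_trash_purge_schedule")
            for key, val in vals:
                print(key, val)
    finally:
        ioctx.close()
        cluster.shutdown()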
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -119> 2025-11-24T21:14:11.118+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5046) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:10.164612+0000 osd.0 (osd.0) 5046 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:41.304220+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5047 sent 5046 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:11.118859+0000 osd.0 (osd.0) 5047 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:   -108> 2025-11-24T21:14:12.093+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:42.304376+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5048 sent 5047 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:12.094253+0000 osd.0 (osd.0) 5048 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5047) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:11.118859+0000 osd.0 (osd.0) 5047 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -97> 2025-11-24T21:14:13.106+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:43.304529+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5049 sent 5048 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:13.107089+0000 osd.0 (osd.0) 5049 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -88> 2025-11-24T21:14:14.080+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:44.304659+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 3 last_log 5050 sent 5049 num 3 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:14.080948+0000 osd.0 (osd.0) 5050 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -78> 2025-11-24T21:14:15.047+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5048) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:12.094253+0000 osd.0 (osd.0) 5048 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5049) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:13.107089+0000 osd.0 (osd.0) 5049 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:45.304911+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5051 sent 5050 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:15.049440+0000 osd.0 (osd.0) 5051 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -62> 2025-11-24T21:14:16.029+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:46.305086+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 3 last_log 5052 sent 5051 num 3 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:16.030702+0000 osd.0 (osd.0) 5052 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5050) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:14.080948+0000 osd.0 (osd.0) 5050 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5051) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:15.049440+0000 osd.0 (osd.0) 5051 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129892352 unmapped: 23863296 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -49> 2025-11-24T21:14:17.064+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:47.305242+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 2 last_log 5053 sent 5052 num 2 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:17.066544+0000 osd.0 (osd.0) 5053 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5052) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:16.030702+0000 osd.0 (osd.0) 5052 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5053) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:17.066544+0000 osd.0 (osd.0) 5053 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'config diff' '{prefix=config diff}'
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 23764992 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'config show' '{prefix=config show}'
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -32> 2025-11-24T21:14:18.040+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa3ac000/0x0/0x4ffc00000, data 0x11a1d42/0x12c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,3,4,4,12,1])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:48.305397+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5054 sent 5053 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:18.042002+0000 osd.0 (osd.0) 5054 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5054) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:18.042002+0000 osd.0 (osd.0) 5054 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 130293760 unmapped: 23461888 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:    -16> 2025-11-24T21:14:19.069+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: tick
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_tickets
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T21:13:49.305576+0000)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  log_queue is 1 last_log 5055 sent 5054 num 1 unsent 1 sending 1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  will send 2025-11-24T21:14:19.070987+0000 osd.0 (osd.0) 5055 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client handle_log_ack log(last 5055) v1
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_client  logged 2025-11-24T21:14:19.070987+0000 osd.0 (osd.0) 5055 : cluster [WRN] 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 23502848 heap: 153755648 old mem: 2845415832 new mem: 2845415832
Nov 24 21:14:20 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]:     -5> 2025-11-24T21:14:20.113+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:20 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 21:14:20 compute-0 ceph-osd[88624]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 21:14:20 compute-0 ceph-osd[88624]: bluestore.MempoolThread(0x560fcfb1fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336424 data_alloc: 218103808 data_used: 413696
Nov 24 21:14:20 compute-0 ceph-osd[88624]: do_command 'log dump' '{prefix=log dump}'
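[annotation] Interleaved with the slow-op noise, something is walking osd.0's admin socket: do_command handles 'config diff', 'config show', 'counter dump', 'counter schema', and finally 'log dump', which flushes the daemon's in-memory ring of recent entries and is plausibly what produced the long block of '-274>' through '-5>' prefixed lines above, all stamped 21:14:20. Together with the orch and telemetry queries below, this looks like a diagnostics sweep. The same commands through the CLI wrapper; the daemon name and the availability of 'counter dump' / 'counter schema' on this build are assumptions:

    import json
    import subprocess

    # Sketch: the same admin-socket commands via the CLI wrapper; "ceph daemon"
    # talks to the local .asok, matching the do_command lines above.
    def daemon_cmd(name, *cmd):
        out = subprocess.run(["ceph", "daemon", name, *cmd], check=True,
                             capture_output=True, text=True).stdout
        return json.loads(out) if out.lstrip().startswith(("{", "[")) else out

    schema = daemon_cmd("osd.0", "counter", "schema")
    counters = daemon_cmd("osd.0", "counter", "dump")
    recent = daemon_cmd("osd.0", "log", "dump")  # dumps recent entries to the daemon log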
Nov 24 21:14:20 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15445 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mon[75677]: from='client.15431 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mon[75677]: pgmap v2796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:20 compute-0 ceph-mon[75677]: from='client.15433 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mon[75677]: from='client.15435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mon[75677]: from='client.15438 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mon[75677]: from='client.15439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:20 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:20 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 24 21:14:20 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489263155' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 21:14:20 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15449 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:21.087+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:21 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:21 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:21.119+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:21 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:21 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 24 21:14:21 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980136209' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:21 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 24 21:14:21 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641082161' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mon[75677]: from='client.15441 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mon[75677]: from='client.15445 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2489263155' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mon[75677]: from='client.15449 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:21 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:21 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1980136209' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 21:14:21 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1641082161' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
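[annotation] Each of those admin CLI calls surfaces twice in the audit channel, once when the mon dispatches it and, for the mgr-targeted commands (target mon-mgr, such as 'orch ...'), once more from ceph-mgr. Under the hood each one is a mon_command round-trip; a hedged librados sketch mirroring the cmd=[...] JSON above, with the conffile path and admin keyring assumed for this host:

    import json
    import rados

    # Hedged sketch: one mon_command round-trip through librados, mirroring
    # the audited {"prefix": "health", ...} entry above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "health", "detail": "detail",
                          "format": "json-pretty"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(outbuf.decode())
    finally:
        cluster.shutdown()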
Nov 24 21:14:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 24 21:14:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936475190' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 21:14:22 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:22 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:22.082+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:22 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:22 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:22 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:22.082+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 21:14:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 21:14:22 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 24 21:14:22 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349397188' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 21:14:22 compute-0 ceph-mon[75677]: pgmap v2797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:22 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1936475190' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 21:14:22 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:22 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:22 compute-0 ceph-mon[75677]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 21:14:22 compute-0 ceph-mon[75677]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 21:14:22 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/349397188' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 21:14:22 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15463 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:23.060+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:23 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:23 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:23 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:23 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:23.103+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:23 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:23 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:14:23 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:23 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 21:14:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 24 21:14:23 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964153707' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 21:14:23 compute-0 systemd[1]: Started Hostname Service.
Nov 24 21:14:23 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:23 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:23 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4982 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:23 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3964153707' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 21:14:23 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 24 21:14:23 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764183530' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 21:14:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:24.081+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:24 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:24 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:24 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:24.146+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:24 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:24 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 24 21:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82989625' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Optimize plan auto_2025-11-24_21:14:24
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] do_upmap
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] pools ['backups', 'volumes', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 24 21:14:24 compute-0 ceph-mgr[75975]: [balancer INFO root] prepared 0/10 changes
Nov 24 21:14:24 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 24 21:14:24 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985409802' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 21:14:24 compute-0 ceph-mon[75677]: from='client.15463 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:24 compute-0 ceph-mon[75677]: pgmap v2798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:24 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3764183530' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 21:14:24 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:24 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:24 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/82989625' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 21:14:24 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1985409802' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 21:14:25 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15473 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:25.104+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:25 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:25 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:25 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:25.151+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:25 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:25 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:25 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:25 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 24 21:14:25 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618477967' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 21:14:25 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:25 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:25 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2618477967' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 21:14:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 24 21:14:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/157941502' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 21:14:26 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:26.128+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:26 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:26 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:26.149+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:26 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:26 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:26 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15479 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:26 compute-0 ceph-mon[75677]: from='client.15473 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:26 compute-0 ceph-mon[75677]: pgmap v2799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:26 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/157941502' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 21:14:26 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:26 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:26 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 24 21:14:26 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/662679610' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 21:14:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:27.112+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:27 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:27 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:27 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:27 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:27.162+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:27 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:27 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15483 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:27 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:27 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15485 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:27 compute-0 ceph-mon[75677]: from='client.15479 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:27 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/662679610' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 21:14:27 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:27 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:27 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 24 21:14:27 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299681722' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 21:14:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:28.073+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:28 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:28 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:28 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:28.189+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:28 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:28 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:28 compute-0 ceph-mon[75677]: log_channel(cluster) log [WRN] : Health check update: 48 slow ops, oldest one blocked for 4987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 21:14:28 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 24 21:14:28 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601208079' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 21:14:28 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15491 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:28 compute-0 ceph-mon[75677]: from='client.15483 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:28 compute-0 ceph-mon[75677]: pgmap v2800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:28 compute-0 ceph-mon[75677]: from='client.15485 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:28 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/1299681722' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 21:14:28 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:28 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:28 compute-0 ceph-mon[75677]: Health check update: 48 slow ops, oldest one blocked for 4987 sec, daemons [osd.0,osd.1] have slow ops. (SLOW_OPS)
Nov 24 21:14:28 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/2601208079' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: log_channel(audit) log [DBG] : from='client.15493 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008637525843263658 of space, bias 1.0, pg target 0.25912577529790976 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 21:14:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-1[89636]: 2025-11-24T21:14:29.101+0000 7f1a67169640 -1 osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:29 compute-0 ceph-osd[89640]: osd.1 199 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14257.0:531 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch ping cookie 93906793325568 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e106)
Nov 24 21:14:29 compute-0 ceph-osd[89640]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:29 compute-0 ceph-osd[88624]: osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:29 compute-0 ceph-05e060a3-406b-57f0-89d2-ec35f5b09305-osd-0[88620]: 2025-11-24T21:14:29.198+0000 7f2ca3ee7640 -1 osd.0 199 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14138.0:17 2.11 2:88c1567c:::rbd_trash_purge_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e78)
Nov 24 21:14:29 compute-0 ceph-osd[88624]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:29 compute-0 ceph-mgr[75975]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 128 MiB data, 269 MiB used, 60 GiB / 60 GiB avail
Nov 24 21:14:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 24 21:14:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364200753' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 21:14:29 compute-0 ceph-mon[75677]: from='client.15491 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 21:14:29 compute-0 ceph-mon[75677]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'default.rgw.log' : 21 ])
Nov 24 21:14:29 compute-0 ceph-mon[75677]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 27 ])
Nov 24 21:14:29 compute-0 ceph-mon[75677]: from='client.? 192.168.122.100:0/3364200753' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 21:14:29 compute-0 ceph-mon[75677]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 24 21:14:29 compute-0 ceph-mon[75677]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879632658' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
